
✍️ AI Prompts for Playwright Testers: Getting the Most Out of Your AI Assistants

Unlock the full power of AI assistants for Playwright testing with practical prompt engineering tips, real-world examples, and strategies to boost your productivity.

Introduction

AI assistants are rapidly becoming indispensable tools in a developer's arsenal. For those of us working with Playwright for end-to-end and browser automation testing, AI can be a powerful partner, helping to accelerate test creation, debug issues, and even generate test ideas. However, the quality of an AI's output is directly proportional to the quality of the input it receives. The old adage "garbage in, garbage out" holds particularly true for AI prompts.

This post will guide you through crafting effective prompts specifically tailored for Playwright testing scenarios. By learning to "speak the AI's language" more effectively, you can unlock significant productivity gains and make your AI assistant a true co-pilot in your testing journey.

Why Prompt Engineering Matters for Testers

You might wonder, "Why do I need to learn prompt engineering? Can't I just ask the AI what I want?" While simple questions can sometimes yield useful results, a more structured approach to prompting offers several advantages for Playwright testers:

  • Saves Time and Reduces Boilerplate: Well-crafted prompts can generate significant portions of test code, page object models, or fixture setups, freeing you to focus on more complex logic.
  • Aids in Learning and Exploration: Asking an AI to explain a Playwright feature or demonstrate its usage can be a quick way to learn.
  • Assists in Debugging: By providing context (like error messages and code snippets), you can get targeted suggestions for troubleshooting.
  • Sparks Test Ideas: AI can help brainstorm edge cases and negative scenarios you might not have considered.

The core idea is to provide enough precision and context so the AI doesn't have to guess, leading to more accurate and immediately usable outputs.

Core Principles of Effective AI Prompts for Playwright

Regardless of the specific AI assistant you're using, these general principles will help you get better results:

  1. Be Specific: Clearly state what you need. Instead of "write a test," try "write a Playwright TypeScript test for a login page."
  2. Provide Context: This is crucial.
    • Playwright Version (if relevant): Mention if you're on a specific version, especially if asking about new or changed features.
    • Language: Specify TypeScript (TS), JavaScript (JS), Python, Java, or C#.
    • APIs: If you're working with specific Playwright APIs (e.g., page.locator(), request.post(), test.extend()), mention them.
    • Existing Code: For refactoring or debugging, always provide the relevant code snippet.
  3. Define the Persona/Role (Optional but often helpful): You can guide the AI's tone and focus by telling it to "Act as a senior Playwright test engineer" or "Explain this to me like I'm new to Playwright."
  4. Specify the Output Format: Do you want a code snippet, a bulleted list, a step-by-step explanation, or a markdown table? Ask for it!
  5. Iterate: Don't expect the perfect answer on your first try. If the output isn't quite right, refine your prompt with more details or clarifications and try again.
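
Putting these principles together, compare a vague prompt with a refined one (the page details below are made up for illustration):

Vague: "Write a test for my search feature."

Refined: "Write a Playwright TypeScript test that navigates to '/search', fills the input '#search-box' with 'laptop', presses Enter, and asserts that at least one element matching '.result-card' is visible."

The refined version hands the AI the language, the page, the selectors, and the expected outcome, so its response needs far less rework.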

Practical Prompt Examples for Playwright Testers

Let's dive into some concrete examples of prompts you can use for common Playwright tasks. I'll primarily use TypeScript for these examples, as it's a popular choice with Playwright, but the principles apply to other languages.

1. Generating a Basic Test Stub

Prompt:

Generate a Playwright TypeScript test stub for a login page.
The page is at '/login'.
It has an email input with the selector '#email', a password input with the selector '#password', and a login button with the text 'Log In'.
After successful login, the user should be redirected to '/dashboard' and a welcome message 'Welcome, User!' (selector: '.welcome-banner') should be visible.
Include necessary imports and a basic test structure.

Why it's effective: It specifies the language, page elements (with selectors or text), expected actions, and assertions.

Expected AI Output (Conceptual):

import { test, expect } from '@playwright/test';
 
test.describe('Login Functionality', () => {
  test('should allow a user to log in successfully', async ({ page }) => {
    await page.goto('/login');
 
    // Fill in credentials
    await page.locator('#email').fill('testuser@example.com');
    await page.locator('#password').fill('password123');
 
    // Click login button
    await page.getByRole('button', { name: 'Log In' }).click();
 
    // Assertions
    await expect(page).toHaveURL('/dashboard');
    await expect(page.locator('.welcome-banner')).toHaveText('Welcome, User!');
  });
});

2. Creating Page Object Model (POM) Snippets

Prompt:

Create a Playwright TypeScript Page Object Model (POM) class named 'SearchResultsPage'.
The page URL is typically '/search-results'.
It should include:
1. A constructor that accepts the Playwright 'Page' object.
2. A method 'getAllResultTitles' that returns an array of strings of all search result titles. The selector for a single result title is 'h2.result-card__title'.
3. A method 'clickFirstResult' that clicks on the first search result link. Assume the link is within an article tag: 'article.result-card a'.
4. A method 'navigateTo' to go to the search results page.

Why it's effective: Clearly defines the class name, methods, their purpose, and relevant selectors.

Expected AI Output (Conceptual):

import { type Page, type Locator } from '@playwright/test';
 
export class SearchResultsPage {
  readonly page: Page;
  readonly resultTitles: Locator;
  readonly firstResultLink: Locator;
 
  constructor(page: Page) {
    this.page = page;
    this.resultTitles = page.locator('h2.result-card__title');
    this.firstResultLink = page.locator('article.result-card a').first();
  }
 
  async navigateTo() {
    await this.page.goto('/search-results');
  }
 
  async getAllResultTitles(): Promise<string[]> {
    return this.resultTitles.allTextContents();
  }
 
  async clickFirstResult(): Promise<void> {
    await this.firstResultLink.click();
  }
}
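
A test can then consume this POM. Here's a minimal usage sketch, assuming the class above is saved as 'search-results-page.ts' in the same directory:

import { test, expect } from '@playwright/test';
import { SearchResultsPage } from './search-results-page';

test('first search result can be opened', async ({ page }) => {
  const searchResults = new SearchResultsPage(page);
  await searchResults.navigateTo();

  // Sanity-check that the page actually returned results
  const titles = await searchResults.getAllResultTitles();
  expect(titles.length).toBeGreaterThan(0);

  // Drill into the first result
  await searchResults.clickFirstResult();
});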

3. Debugging an Error

Prompt:

I'm encountering a 'TimeoutError: waiting for selector "#checkout-button" to be visible' in my Playwright TypeScript test.
Here's the relevant part of my test:
'''typescript
// ... other steps ...
await page.locator('#product-details').waitFor({ state: 'visible' });
console.log('Product details visible');
await page.locator('#checkout-button').click(); // This line throws the error
'''
What are common reasons for this specific error with '#checkout-button' and how can I debug it effectively in Playwright? Suggest specific Playwright APIs or techniques.

Why it's effective: Provides the exact error message, the problematic code snippet, and asks for common reasons and debugging techniques.

Expected AI Output (Conceptual):

  • List of potential causes (element not present, not visible, covered, timing issues, dynamic rendering).
  • Debugging suggestions:
    • "Use await page.pause() before the failing line to inspect the page state in the browser."
    • "Take a screenshot on failure: test.afterEach(async ({ page }, testInfo) => { if (testInfo.status === 'failed') { await page.screenshot({ path: test-results/failure-${testInfo.title.replace(/\s+/g, '-')}.png }); } });"
    • "Verify the selector is correct using page.locator('#checkout-button').highlight() or DevTools."
    • "Increase timeout for waitFor or the click action if it's a slow-loading element: await page.locator('#checkout-button').click({ timeout: 10000 });"
    • "Check if an iframe is involved."

4. Understanding a Playwright Feature

Prompt:

Explain Playwright's concept of "web-first assertions" (e.g., `expect(locator).toBeVisible()`).
How do they differ from manual, explicit waits?
Provide a simple Playwright TypeScript code example demonstrating a web-first assertion and why it's beneficial for test stability.

Why it's effective: Asks for an explanation, a comparison, and a practical code example.

Expected AI Output (Conceptual):

  • Explanation of how Playwright's assertions automatically wait for conditions to be met.
  • Comparison to manual waitForSelector followed by a separate assertion (contrasted in the sketch after this list).
  • Code example:
    // ... setup ...
    await page.getByRole('button', { name: 'Submit' }).click();
    // Web-first assertion: Playwright automatically waits for the element to appear and have the text.
    await expect(page.locator('.status-message')).toHaveText('Form submitted successfully!');
  • Benefit: More concise and robust tests, less prone to flakiness due to timing.
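
To make that comparison concrete, here is a sketch contrasting the two approaches in one test (the selector and message text are assumptions carried over from the example above):

import { test, expect } from '@playwright/test';

test('status message appears after submit', async ({ page }) => {
  await page.getByRole('button', { name: 'Submit' }).click();

  // Manual approach: wait for the element, then assert in a separate step.
  // This can still flake if the element appears before its text updates.
  await page.waitForSelector('.status-message');
  expect(await page.locator('.status-message').textContent())
    .toBe('Form submitted successfully!');

  // Web-first approach: a single call that retries the whole condition
  // until it passes or the timeout is reached.
  await expect(page.locator('.status-message')).toHaveText('Form submitted successfully!');
});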

5. Converting a Test to Use test.extend for Fixtures

Prompt:

I have a Playwright TypeScript test suite where many tests require a user to be logged in.
Currently, I repeat the login steps in a `test.beforeEach`.
Here's my current login logic:
'''typescript
test.beforeEach(async ({ page }) => {
  await page.goto('/login');
  await page.locator('#username').fill('testuser');
  await page.locator('#password').fill('complexpassword');
  await page.locator('button[type="submit"]').click();
  await expect(page).toHaveURL('/dashboard');
});
'''
Show me how to refactor this using `test.extend` to create a custom fixture named `loggedInPage`.
This fixture should provide an already logged-in `page` object to the tests.

Why it's effective: Provides the current code, clearly states the desired outcome (a loggedInPage fixture), and mentions the specific Playwright API (test.extend).

Expected AI Output (Conceptual):

import { test as base, expect, type Page } from '@playwright/test';
 
type MyFixtures = {
  loggedInPage: Page;
};
 
export const test = base.extend<MyFixtures>({
  loggedInPage: async ({ page }, use) => {
    // Perform login steps
    await page.goto('/login');
    await page.locator('#username').fill('testuser');
    await page.locator('#password').fill('complexpassword');
    await page.locator('button[type="submit"]').click();
    await expect(page).toHaveURL('/dashboard');
    
    // Use the logged-in page in the test
    await use(page);
    
    // Add any cleanup steps here if needed after the test, e.g., logout
    // console.log('Logging out after test...');
  },
});
 
export { expect }; // Re-export expect
 
// Example usage:
// import { test, expect } from './my-fixtures'; // Assuming you save the above as my-fixtures.ts
// test('user can access profile page', async ({ loggedInPage }) => {
//   await loggedInPage.goto('/profile');
//   await expect(loggedInPage.locator('.profile-username')).toHaveText('testuser');
// });

6. Generating Test Ideas/Edge Cases

Prompt:

I'm responsible for testing a new file upload feature in a web application using Playwright.
The feature allows users to upload JPG, PNG, and PDF files, with a maximum file size of 10MB.
What are some important positive, negative, and edge case scenarios I should consider for my Playwright tests?
Categorize them if possible.

Why it's effective: Describes the feature, its constraints (file types, size limit), and asks for categorized test scenarios; one of these scenarios is sketched in Playwright code after the list below.

Expected AI Output (Conceptual):

  • Positive Scenarios:
    • Upload valid JPG within size limit.
    • Upload valid PNG within size limit.
    • Upload valid PDF within size limit.
    • Upload file at exactly 10MB.
    • Upload smallest possible valid file.
  • Negative Scenarios (File Type):
    • Attempt to upload a .txt file.
    • Attempt to upload a .gif file.
    • Attempt to upload a file with no extension.
    • Attempt to upload a file with a fake extension (e.g., a .txt file renamed to .jpg).
  • Negative Scenarios (File Size):
    • Attempt to upload a file larger than 10MB (e.g., 10.1MB).
    • Attempt to upload a 0KB file (if not allowed).
  • Edge Cases:
    • Upload a file with special characters in its name.
    • Cancel an upload mid-progress.
    • Attempt to upload while offline.
    • Try uploading multiple files quickly if the interface allows (testing for race conditions).
    • Interface behavior with very long file names.
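
As a starting point, one of the negative scenarios above might translate into Playwright like this (a sketch; the route, the input selector, and the error message are assumptions):

import { test, expect } from '@playwright/test';

test('rejects a disallowed file type', async ({ page }) => {
  await page.goto('/upload');

  // Supply an in-memory .txt file instead of reading one from disk
  await page.locator('input[type="file"]').setInputFiles({
    name: 'notes.txt',
    mimeType: 'text/plain',
    buffer: Buffer.from('this is not an image'),
  });

  // Expect the application to surface a validation error
  await expect(page.locator('.upload-error')).toHaveText('Unsupported file type');
});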

7. Writing API Test Snippets

Prompt:

Write a Playwright TypeScript snippet for an API test.
The test needs to:
1. Make a POST request to the endpoint '/api/v1/items'.
2. Send a JSON payload: `{ "name": "New Item", "quantity": 10, "price": 19.99 }`.
3. Include an 'Authorization' header with the value 'Bearer mySecretToken'.
4. Assert that the HTTP response status code is 201 (Created).
5. Assert that the response body contains a JSON object with an 'id' property (which can be any string or number).
Use Playwright's built-in APIRequestContext.

Why it's effective: Details the HTTP method, endpoint, payload, headers, and specific assertions for both status and body. Mentions the relevant Playwright feature (APIRequestContext).

Expected AI Output (Conceptual):

import { test, expect } from '@playwright/test';
 
test('should create a new item via API', async ({ request }) => {
  const newItemPayload = {
    name: "New Item",
    quantity: 10,
    price: 19.99
  };
 
  const response = await request.post('/api/v1/items', {
    data: newItemPayload,
    headers: {
      'Authorization': 'Bearer mySecretToken',
      'Content-Type': 'application/json' // Good practice to include
    }
  });
 
  // Assert status code
  expect(response.status()).toBe(201);
 
  // Assert response body contains an 'id'
  const responseBody = await response.json();
  expect(responseBody).toHaveProperty('id');
  // Optionally, be more specific about the type if known
  // expect(typeof responseBody.id).toBe('string'); 
  // or expect(typeof responseBody.id).toBe('number');
  console.log('Created item ID:', responseBody.id);
});

Tips for Better AI-Assisted Playwright Testing

  • Don't Trust Blindly: Always critically review and, most importantly, test the code generated by an AI. It's a great assistant, not an infallible oracle.
  • Use AI as a Starting Point: AI is excellent for generating boilerplate or initial structures. You'll often need to tweak, refine, and integrate it into your existing codebase.
  • Keep a "Prompt Library": When you find a prompt that works exceptionally well for a specific task, save it! Over time, you'll build a valuable personal library.
  • Experiment with Different AI Models/Assistants: Some AI models are better tuned for coding tasks or have more up-to-date knowledge about libraries like Playwright.
  • Provide Feedback (if the AI tool allows): Many AI tools have a thumbs-up/down mechanism. Use it to help the AI learn and improve.

Conclusion

AI assistants, when guided by well-crafted and contextual prompts, can be a massive productivity booster for Playwright testers. From scaffolding tests and POMs to debugging tricky issues and generating diverse test data, the possibilities are vast.

Mastering the art of prompt engineering is an investment that will pay dividends in saved time, reduced frustration, and even enhanced learning. So, start experimenting with the examples above, adapt them to your specific needs, and watch your Playwright testing workflow become more efficient and enjoyable.