MCP User Guide
Testworthy's MCP server lets AI assistants like VS Code Copilot, Cursor, and Windsurf interact directly with your Testworthy account — creating projects, generating test cases, running tests, and reporting results, all from inside your IDE.
The Testworthy MCP server is currently in beta. Public access is coming soon.
What is MCP?
The Model Context Protocol (MCP) is an open standard that lets AI assistants communicate with external tools and services. By connecting your IDE to the Testworthy MCP server, your AI assistant can:
- Create and manage test projects from your codebase
- Browse, filter, and inspect test cases
- Execute test scripts locally and report results back
- Generate targeted tests for a specific feature
- Use AI to heal broken or flaky test cases
- Generate and save automation scripts for test cases
Installation
No installation required. Once launched, the Testworthy MCP server will be hosted and ready to use — just add a single URL to your IDE's MCP config and you're done. The server URL will be provided here upon launch.
Authentication
Authentication uses a browser-based OAuth flow — no API keys or tokens to copy and paste.
How it works:
- On first use, your IDE detects the MCP server requires authentication
- Your browser opens automatically to the Testworthy login page
- You log in with your existing Testworthy credentials and click Authorize MCP Access
- Your IDE receives a JWT token automatically via OAuth redirect
- All subsequent tool calls use this token — your IDE handles renewal transparently
Note: You only need to authenticate once per IDE. The token persists across sessions.
How the Assistant Resolves Names
You never need to look up or type numeric IDs. Just refer to things by name:
- Projects — say the project name: "the E-commerce Suite project"
- Test cases — use the human-readable number: "TC-42" or describe it by title
- Test runs — refer to them by name or say "the latest run"
The assistant calls list_projects, list_test_cases, or list_project_runs automatically to resolve names to IDs before acting on them.
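Conceptually, the resolution step works like this. The sketch below is illustrative only: the response shape is a mock, not the actual `list_projects` schema.

```python
# Hypothetical sketch of how an assistant resolves a project name to an ID.
# The dict shape below is a mock, not the real list_projects response schema.

def resolve_project_id(name: str, projects: list[dict]) -> int:
    """Return the ID of the project whose name matches (case-insensitive)."""
    matches = [p for p in projects if p["name"].lower() == name.lower()]
    if not matches:
        raise LookupError(f"No project named {name!r}")
    return matches[0]["id"]

# Example: a mocked list_projects result.
projects = [
    {"id": 7, "name": "E-commerce Suite"},
    {"id": 9, "name": "Backend API"},
]
print(resolve_project_id("e-commerce suite", projects))  # → 7
```

The same pattern applies to test cases and runs: list, match by name or number, then act on the resolved ID.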
Available Tools
Connection & Diagnostics
test_connection
Test connectivity to the Testworthy backend and check your authentication status. Run this first if something seems wrong.
Parameters: None
Returns: Backend reachability, token validity, MCP endpoint status, and configuration info.
"Check if my Testworthy connection is working"
Project Management
list_projects
List all Testworthy projects accessible to the authenticated user, including test suite counts and metadata.
Parameters: None
"Show me all my projects"
create_project_from_current_codebase
Most popular tool. Automatically scans your current workspace, packages it into a ZIP, sends it to Testworthy for AI analysis, and generates a comprehensive test suite — all in one step.
| Parameter | Required | Default | Description |
|---|---|---|---|
name | Yes | — | Project name |
description | No | Auto-generated | Project description |
ai_provider | No | "vertex" | AI provider to use |
ai_model | No | "gemini-3.1-pro-preview" | AI model name |
minimum_test_cases | No | 15 | Minimum number of test cases to generate |
workspace_path | No | Current workspace | Absolute path to the codebase to analyze
What happens automatically:
- Detects workspace files
- Creates a ZIP package
- Estimates token usage and checks quota
- Sends codebase to AI for analysis (chunked if large)
- Generates test suites and test cases
- Saves everything to Testworthy
"Create a project from my current codebase called My App Tests"
"Create a project focused on payment processing and authentication"
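The packaging step can be pictured as a recursive walk that skips folders a scanner would ignore. This is a minimal sketch of the idea, not the server's actual implementation; the ignored-directory list is an assumption.

```python
# Illustrative sketch of the "Creates a ZIP package" step: zip a workspace
# directory, skipping folders a real scanner would likely ignore (assumption).
import os
import pathlib
import tempfile
import zipfile

IGNORED_DIRS = {".git", "node_modules", "__pycache__"}

def package_workspace(workspace_path: str, zip_path: str) -> int:
    """Zip every file under workspace_path; return the number of files added."""
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(workspace_path):
            dirs[:] = [d for d in dirs if d not in IGNORED_DIRS]  # prune in place
            for f in files:
                full = os.path.join(root, f)
                zf.write(full, os.path.relpath(full, workspace_path))
                count += 1
    return count

# Demo against a throwaway workspace (the .git contents are skipped).
with tempfile.TemporaryDirectory() as ws, tempfile.TemporaryDirectory() as out:
    pathlib.Path(ws, "app.py").write_text("print('hi')\n")
    os.makedirs(os.path.join(ws, ".git"))
    pathlib.Path(ws, ".git", "config").write_text("")
    n = package_workspace(ws, os.path.join(out, "bundle.zip"))
```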
create_tests_for_specific_feature
Generate targeted test cases for a specific feature or code change and add them directly to an existing project — without creating a new project.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name you provide |
feature_description | Yes | — | Description of the feature to test |
code_files | No | [] | List of file paths to include in the analysis |
test_types | No | ["unit", "integration"] | Types of tests to generate |
minimum_test_cases | No | 8 | Minimum test cases to generate |
suite_name | No | First existing suite | Name of the suite to add test cases to |
ai_provider | No | "vertex" | AI provider |
ai_model | No | "gemini-3.1-pro-preview" | AI model name |
"Generate tests for the new OAuth2 login feature in the 'E-commerce Suite' project"
"Add security tests for the user search API to the 'Backend API' project"
Test Case Management
list_test_suites
Get all test suites for a given project.
| Parameter | Required | Description |
|---|---|---|
project_id | Yes | Resolved automatically from the project name |
"List all test suites in the 'Mobile App' project"
list_test_cases
List test cases in a project with rich filtering. Supports filtering by suite, milestone, priority, type, automation status, and keyword search.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
suite_id | No | All suites | Filter by a specific suite name |
milestone_id | No | All milestones | Filter by milestone |
priority | No | All | "low", "medium", "high", "critical" |
type | No | All | "functional", "regression", "smoke", "performance", "security", "usability", "acceptance", "integration", "unit" |
assigned_to_id | No | All | Filter by assigned user ID |
has_automation_script | No | All | true for automated only, false for manual only |
search | No | — | Search keyword in title and description |
skip | No | 0 | Pagination offset |
limit | No | 100 | Max results (up to 500) |
"Show me all critical test cases in the 'E-commerce Suite' project"
"List automated test cases for the login suite"
"Find test cases related to payment"
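When a project has more test cases than one page can hold, the `skip`/`limit` parameters page through them. A minimal client-side loop, with `fetch_page` standing in for the real tool call against mock data:

```python
# Sketch of paging through list_test_cases with skip/limit.
# fetch_page is a stand-in for the real tool call, backed by mock data.

ALL_CASES = [{"id": i, "title": f"TC-{i}"} for i in range(1, 251)]

def fetch_page(skip: int = 0, limit: int = 100) -> list[dict]:
    return ALL_CASES[skip : skip + min(limit, 500)]  # limit is capped at 500

def fetch_all(page_size: int = 100) -> list[dict]:
    cases, skip = [], 0
    while True:
        page = fetch_page(skip=skip, limit=page_size)
        cases.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            break
        skip += page_size
    return cases

print(len(fetch_all()))  # → 250
```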
get_test_case_details
Fetch the full details of a specific test case, including steps, preconditions, and expected results.
| Parameter | Required | Description |
|---|---|---|
test_case_id | Yes | Resolved automatically from the test case number or title |
"Show me the details for TC-42"
get_test_case_by_number
Find a test case using its human-readable number (e.g. TC-12) within a project.
| Parameter | Required | Description |
|---|---|---|
project_id | Yes | Resolved automatically from the project name |
test_case_number | Yes | The test case number (e.g. "TC-12") |
"Find test case TC-42 in the 'Mobile App' project"
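Users phrase test case numbers loosely ("42", "tc42", "TC-42"). A hypothetical normalization helper, illustrating the kind of cleanup an assistant might do before calling this tool:

```python
# Hypothetical helper showing how loosely-typed test case numbers could be
# normalized to the canonical "TC-<n>" form. Not part of the MCP server itself.
import re

def normalize_tc_number(raw: str) -> str:
    """Accept '42', 'tc42', or 'TC-42' and return canonical 'TC-42'."""
    m = re.fullmatch(r"(?i)(?:tc-?)?(\d+)", raw.strip())
    if not m:
        raise ValueError(f"Not a test case number: {raw!r}")
    return f"TC-{int(m.group(1))}"

print(normalize_tc_number("tc42"))  # → TC-42
```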
heal_test_case
Use AI to fix, improve, or update a test case. This is the same AI healing feature available in the Testworthy UI.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
test_case_id | Yes | — | Resolved automatically from the test case number or title |
healing_request | Yes | — | Description of what to fix or improve |
ai_provider | No | "vertex" | AI provider |
ai_model | No | "gemini-2.5-flash" | AI model |
max_tokens | No | 16000 | Max tokens for AI response |
"Fix TC-45 in the 'E-commerce Suite' project — the login flow changed to use OAuth2"
"Add edge cases for empty inputs to TC-89"
"Make TC-101 less flaky by adding proper waits"
save_healed_test_case
Persist the output of heal_test_case back to Testworthy. Always show the healed result to the user and ask for confirmation before calling this tool.
| Parameter | Required | Description |
|---|---|---|
project_id | Yes | Resolved automatically from the project name |
test_case_id | Yes | Resolved automatically — carried over from the heal_test_case result |
healed_data | Yes | The healed test case data (output from heal_test_case) |
updated_aggregate | No | Updated project aggregate (optional) |
"Save the healed version of TC-45"
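The heal/confirm/save handshake looks roughly like this. The function bodies are mocks that mirror the tool names; only the real tools talk to Testworthy.

```python
# Conceptual sketch of the heal -> confirm -> save flow. The bodies are mocks,
# not the real MCP calls; the healed steps below are invented example data.

def heal_test_case(test_case_id: int, healing_request: str) -> dict:
    # Real tool: asks the AI for a fixed version of the test case.
    return {"test_case_id": test_case_id,
            "steps": ["Open login page", "Sign in via OAuth2",
                      "Assert dashboard loads"]}

def save_healed_test_case(test_case_id: int, healed_data: dict) -> str:
    # Real tool: persists the healed case back to Testworthy.
    return f"saved TC-{test_case_id}"

healed = heal_test_case(45, "login flow changed to OAuth2")
user_confirmed = True  # the assistant must show `healed` and ask first
result = save_healed_test_case(45, healed) if user_confirmed else "discarded"
```

The key point is the gate in the middle: nothing is persisted until the user has seen the healed result and confirmed.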
Automation Scripts
generate_automation_script
Use AI to generate a runnable automation script for a test case (e.g. Playwright, Cypress, Selenium).
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
test_case_id | Yes | — | Resolved automatically from the test case number or title |
framework | No | "playwright" | Target framework: "playwright", "selenium", "cypress" |
language | No | Auto-detected | Programming language for the script |
additional_instructions | No | "" | Extra instructions for the AI |
ai_provider | No | "vertex" | AI provider |
ai_model | No | "gemini-2.5-flash" | AI model |
max_tokens | No | 16000 | Maximum tokens for AI response |
Returns: The generated script code and suggested filename.
"Generate a Playwright automation script for TC-89 in the 'E-commerce Suite' project"
"Create a Selenium Python script for the login test case with explicit waits"
save_automation_script
Save a generated automation script back to the test case in Testworthy. Always show the script to the user first and ask for confirmation.
| Parameter | Required | Description |
|---|---|---|
test_case_id | Yes | Resolved automatically — carried over from the previous step |
script | Yes | The automation script code to save |
framework | Yes | Framework the script is written for |
"Save this automation script to TC-89"
get_test_script_for_execution
Retrieve the saved automation script for a test case, ready for local execution.
| Parameter | Required | Default | Description |
|---|---|---|---|
test_case_id | Yes | — | Resolved automatically from the test case number or title |
framework | No | "pytest" | Framework: "playwright", "cypress", "jest", "pytest", "unittest", "selenium" |
"Get the automation script for TC-23"
"Get the Playwright script for TC-45"
Test Execution
check_test_environment
Verify that your local environment has everything needed to run tests for a given framework. Run this before execute_test_case_directly to avoid silent failures.
| Parameter | Required | Default | Description |
|---|---|---|---|
framework | No | "playwright" | Framework to check: "playwright", "cypress", "jest", "pytest", "unittest", "selenium" |
working_directory | No | "." | Directory where tests will be executed |
Returns: is_ready, missing_requirements, and setup_commands to fix any issues.
"Check if my environment is ready for Playwright tests"
"Is pytest set up correctly in the ./tests directory?"
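A rough local equivalent of this check for pytest, to show what "ready" means in practice: probe for the interpreter and the package, and collect setup commands for anything missing. The exact checks the real tool runs are not documented here; this is an assumption-laden sketch.

```python
# Rough local analogue of check_test_environment for pytest (illustrative only):
# probe for the interpreter and package, suggest fixes for anything missing.
import importlib.util
import shutil

def check_pytest_ready() -> dict:
    missing, setup = [], []
    if shutil.which("python") is None and shutil.which("python3") is None:
        missing.append("python interpreter")
    if importlib.util.find_spec("pytest") is None:
        missing.append("pytest package")
        setup.append("pip install pytest")
    return {"is_ready": not missing,
            "missing_requirements": missing,
            "setup_commands": setup}

report = check_pytest_ready()
```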
execute_test_case_directly
Start executing an automation script for a test case in the background and immediately return a job ID to poll. Does not block — use get_test_execution_status to track progress.
Important: After execution completes, always show the results to the user and ask before reporting back to Testworthy. Never auto-report.
| Parameter | Required | Default | Description |
|---|---|---|---|
test_case_id | Yes | — | Resolved automatically from the test case number or title |
framework | No | "pytest" | Framework: "playwright", "cypress", "jest", "pytest", "unittest", "selenium" |
working_directory | No | "." | Directory to execute the test in |
timeout | No | 300 | Seconds before the process is killed |
Returns: job_id to use with get_test_execution_status.
"Run TC-23 using Playwright"
"Execute the automation script for the checkout test in pytest"
get_test_execution_status
Poll the status of a background test execution started by execute_test_case_directly.
| Parameter | Required | Description |
|---|---|---|
job_id | Yes | Job ID returned automatically by execute_test_case_directly |
Status values: "queued" → "running" → "passed" / "failed" / "error"
"Is my test done running?"
"Check the status of the last execution"
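The job model behind these two tools can be sketched as a background worker plus a shared status table: start the work on a thread, hand back an ID immediately, and poll until a terminal status appears. A minimal stand-in (the `time.sleep` replaces the actual test process):

```python
# Minimal sketch of the execute/poll job model. The sleeping worker is a
# stand-in for the real test process; the status names follow the docs above.
import threading
import time
import uuid

JOBS: dict[str, str] = {}

def execute_test_case_directly(test_case_id: int) -> str:
    job_id = str(uuid.uuid4())
    JOBS[job_id] = "queued"

    def run():
        JOBS[job_id] = "running"
        time.sleep(0.05)          # stand-in for the actual test execution
        JOBS[job_id] = "passed"   # or "failed" / "error"

    threading.Thread(target=run).start()
    return job_id  # returned immediately; execution continues in the background

def get_test_execution_status(job_id: str) -> str:
    return JOBS[job_id]

job = execute_test_case_directly(23)
while get_test_execution_status(job) not in ("passed", "failed", "error"):
    time.sleep(0.01)  # poll until a terminal status
```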
register_client_execution
Create a new Test Run in Testworthy and register test cases for execution.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
test_case_ids | Yes | — | Resolved automatically from the test case numbers or titles |
run_name | No | "MCP Execution" | Name for the test run |
run_description | No | — | Description for the test run |
"Create a new test run for TC-101, TC-102, and TC-103 in the 'Mobile App' project"
update_execution_progress
Update the status of an individual test execution within a run.
| Parameter | Required | Description |
|---|---|---|
execution_id | Yes | Carried over automatically from register_client_execution |
status | Yes | New status: "running", "passed", "failed", "skipped" |
notes | No | Notes about the execution |
screenshot_data | No | Base64-encoded screenshot to attach |
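For `screenshot_data`, a client reads the image file and base64-encodes it before attaching. A small sketch (the "PNG" bytes here are fake placeholder data, not a real screenshot):

```python
# How a client might prepare screenshot_data: read the image file and
# base64-encode it. The file contents here are fake placeholder bytes.
import base64
import os
import tempfile

def encode_screenshot(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

fd, path = tempfile.mkstemp(suffix=".png")
os.write(fd, b"\x89PNG fake bytes")
os.close(fd)
screenshot_data = encode_screenshot(path)
os.remove(path)
```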
complete_client_execution
Mark a test run as fully completed with a final status and optional summary.
| Parameter | Required | Description |
|---|---|---|
run_id | Yes | Carried over automatically from register_client_execution |
final_status | Yes | Final overall status of the run |
summary_notes | No | Summary notes for the entire run |
report_test_results_to_testworthy
Report completed local test execution results back to Testworthy. Creates a Test Run, attaches results and screenshots, and closes the run.
Only call this after the user explicitly confirms they want to report results.
| Parameter | Required | Default | Description |
|---|---|---|---|
job_id | Yes | — | Carried over automatically from execute_test_case_directly |
project_id | Yes | — | Resolved automatically from the project name |
run_name | No | "MCP Client Execution" | Name for the test run |
run_description | No | — | Description for the run |
screenshot_path | No | Auto-discovered | Path to a screenshot file to attach |
"Yes, report the results back to the 'Mobile App' project"
Test Runs & History
list_project_runs
List all test runs in a project, with optional status filtering.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
skip | No | 0 | Pagination offset |
limit | No | 50 | Max runs to return |
status_filter | No | All | Filter by status: "draft", "in-progress", "completed", "archived" |
"Show all test runs for the 'E-commerce Suite' project"
"List completed runs for the 'Backend API' project"
get_run_test_cases
Get all test cases that belong to an existing test run, along with their current execution status. Useful for re-running specific cases (e.g. only failed ones).
| Parameter | Required | Description |
|---|---|---|
run_id | Yes | Resolved automatically from the run name or "latest run" |
project_id | Yes | Resolved automatically from the project name |
"Show me all failed test cases in the latest run for 'E-commerce Suite'"
"What tests are in the 'Regression Sprint 4' run?"
get_test_executions
Get execution history for a project.
| Parameter | Required | Default | Description |
|---|---|---|---|
project_id | Yes | — | Resolved automatically from the project name |
limit | No | 50 | Max executions to return |
"Show me the last 20 test executions for the 'Mobile App' project"
get_test_configuration
Get test configuration and environment settings for a project.
| Parameter | Required | Description |
|---|---|---|
project_id | Yes | Resolved automatically from the project name |
"What's the test configuration for the 'Backend API' project?"
Example Workflows
Create a project and generate tests from your codebase
"Create a Testworthy project from my current codebase called E-commerce Suite, focused on checkout and authentication"
The AI assistant will call create_project_from_current_codebase and automatically handle everything: scanning files, packaging, AI analysis, and saving the test suite.
Find and run a failing test
1. "List all failed test cases in the 'E-commerce Suite' project" → list_test_cases
2. "Run TC-89 using Playwright" → execute_test_case_directly
3. "Is the test done?" → get_test_execution_status
4. "Yes, report the results back to Testworthy" → report_test_results_to_testworthy
Generate and save an automation script
1. "Generate a Playwright script for TC-45" → generate_automation_script
2. "Looks good, save it" → save_automation_script
3. "Now run it" → execute_test_case_directly
Re-run all failed tests from a run
1. "Show me test runs for the 'E-commerce Suite' project" → list_project_runs
2. "Get the failed test cases from the latest run" → get_run_test_cases
3. "Re-run the failed ones" → execute_test_case_directly (for each)
4. "Report results back" → report_test_results_to_testworthy
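Steps 2 and 3 of this workflow reduce to a simple filter over the run's cases. The dict shape below approximates what `get_run_test_cases` might return and is a mock, not the documented schema:

```python
# Sketch of steps 2-3: take a run's cases (mocked in an assumed shape) and
# pick the failed ones; each resulting ID then goes to execute_test_case_directly.

run_cases = [
    {"test_case_id": 101, "number": "TC-101", "status": "passed"},
    {"test_case_id": 102, "number": "TC-102", "status": "failed"},
    {"test_case_id": 103, "number": "TC-103", "status": "failed"},
]

def failed_case_ids(cases: list[dict]) -> list[int]:
    return [c["test_case_id"] for c in cases if c["status"] == "failed"]

to_rerun = failed_case_ids(run_cases)
```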
Add tests for a new feature
"Add 10 security and integration tests for the new OAuth2 login feature
to the 'Backend API' project, using auth/oauth_handler.py and auth/social_providers.py"
The AI assistant will call create_tests_for_specific_feature and add the test cases directly to the existing project.
Fix a broken test with AI
"Heal TC-45 in the 'E-commerce Suite' project — the login flow changed and the steps need updating"
The AI assistant will call heal_test_case, show you the result, and then call save_healed_test_case once you confirm.
IDE Setup Guides
VS Code (Copilot)
Add to your VS Code settings.json:
{
  "mcp": {
    "servers": {
      "testworthy": {
        "type": "http",
        "url": "<URL will be provided at launch>"
      }
    }
  }
}
Cursor
Add to ~/.cursor/mcp.json:
{
  "mcpServers": {
    "testworthy": {
      "type": "http",
      "url": "<URL will be provided at launch>"
    }
  }
}
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
{
  "mcpServers": {
    "testworthy": {
      "type": "http",
      "url": "<URL will be provided at launch>"
    }
  }
}
Note: All IDEs will use the same URL. On first connection, your browser will open automatically for a one-time login. No API keys, no tokens, no local setup required.
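Before restarting your IDE, it is worth confirming the config file you edited is valid JSON. A quick sanity check (the key names follow the Cursor example above; adjust for your IDE's schema):

```python
# Quick sanity check for an edited MCP config: confirm it parses as JSON and
# contains the testworthy server entry. Key names follow the Cursor example.
import json

config_text = """
{
  "mcpServers": {
    "testworthy": {
      "type": "http",
      "url": "<URL will be provided at launch>"
    }
  }
}
"""

config = json.loads(config_text)  # raises ValueError if the JSON is malformed
server = config["mcpServers"]["testworthy"]
```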
All Tools — Quick Reference
| Tool | Category | Description |
|---|---|---|
test_connection | Diagnostics | Check backend connectivity and auth status |
list_projects | Projects | List all accessible projects |
create_project_from_current_codebase | Projects | Auto-generate a full test suite from your codebase |
create_tests_for_specific_feature | Projects | Add targeted tests for a feature to an existing project |
list_test_suites | Test Cases | List test suites in a project |
list_test_cases | Test Cases | List and filter test cases in a project |
get_test_case_details | Test Cases | Get full details for a specific test case |
get_test_case_by_number | Test Cases | Find a test case by its TC number |
heal_test_case | Test Cases | Use AI to fix or improve a test case |
save_healed_test_case | Test Cases | Save the healed test case back to Testworthy |
generate_automation_script | Automation | Generate a runnable automation script with AI |
save_automation_script | Automation | Save a generated script to a test case |
get_test_script_for_execution | Automation | Retrieve a saved automation script |
check_test_environment | Execution | Verify local environment is ready for a framework |
execute_test_case_directly | Execution | Run a test script locally in the background |
get_test_execution_status | Execution | Poll the status of a running test job |
register_client_execution | Execution | Create a Test Run in Testworthy |
update_execution_progress | Execution | Update a test execution's status |
complete_client_execution | Execution | Mark a test run as completed |
report_test_results_to_testworthy | Execution | Report local results back to Testworthy |
list_project_runs | History | List test runs in a project |
get_run_test_cases | History | Get test cases and statuses from a specific run |
get_test_executions | History | Get execution history for a project |
get_test_configuration | Configuration | Get test config and environment settings |