MCP User Guide

Testworthy's MCP server lets AI assistants like VS Code Copilot, Cursor, and Windsurf interact directly with your Testworthy account — creating projects, generating test cases, running tests, and reporting results, all from inside your IDE.

Coming Soon

The Testworthy MCP server is currently in beta. Public access is coming soon.


What is MCP?

The Model Context Protocol (MCP) is an open standard that lets AI assistants communicate with external tools and services. By connecting your IDE to the Testworthy MCP server, your AI assistant can:

  • Create and manage test projects from your codebase
  • Browse, filter, and inspect test cases
  • Execute test scripts locally and report results back
  • Generate targeted tests for a specific feature
  • Use AI to heal broken or flaky test cases
  • Generate and save automation scripts for test cases

Installation

No installation required. Once launched, the Testworthy MCP server will be hosted and ready to use — just add a single URL to your IDE's MCP config and you're done. The server URL will be provided here upon launch.


Authentication

Authentication uses a browser-based OAuth flow — no API keys or tokens to copy and paste.

How it works:

  1. On first use, your IDE detects the MCP server requires authentication
  2. Your browser opens automatically to the Testworthy login page
  3. You log in with your existing Testworthy credentials and click Authorize MCP Access
  4. Your IDE receives a JWT token automatically via OAuth redirect
  5. All subsequent tool calls use this token — your IDE handles renewal transparently

Note: You only need to authenticate once per IDE. The token persists across sessions.


How the Assistant Resolves Names

You never need to look up or type numeric IDs. Just refer to things by name:

  • Projects — say the project name: "the E-commerce Suite project"
  • Test cases — use the human-readable number: "TC-42" or describe it by title
  • Test runs — refer to them by name or say "the latest run"

The assistant calls list_projects, list_test_cases, or list_project_runs automatically to resolve names to IDs before acting on them.
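As an illustration, the name-to-ID step the assistant performs can be sketched in a few lines of Python. Here `list_projects` is a hypothetical stand-in for however your client invokes the tool of the same name; the id/name keys are illustrative, not a documented response shape:

```python
def resolve_project_id(list_projects, name):
    """Match a human-friendly project name to its numeric ID,
    the way the assistant does before acting on a project.
    `list_projects` is assumed to return dicts with "id" and "name"."""
    for project in list_projects():
        # Case-insensitive match, so "backend api" finds "Backend API"
        if project["name"].lower() == name.lower():
            return project["id"]
    raise LookupError(f"no project named {name!r}")
```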


Available Tools

Connection & Diagnostics


test_connection

Test connectivity to the Testworthy backend and check your authentication status. Run this first if something seems wrong.

Parameters: None

Returns: Backend reachability, token validity, MCP endpoint status, and configuration info.

"Check if my Testworthy connection is working"

Project Management


list_projects

List all Testworthy projects accessible to the authenticated user, including test suite counts and metadata.

Parameters: None

"Show me all my projects"

create_project_from_current_codebase

Most popular tool. Automatically scans your current workspace, packages it into a ZIP, sends it to Testworthy for AI analysis, and generates a comprehensive test suite — all in one step.

Parameters:

  • name (required): Project name
  • description (optional; default: auto-generated): Project description
  • ai_provider (optional; default: "vertex"): AI provider to use
  • ai_model (optional; default: "gemini-3.1-pro-preview"): AI model name
  • minimum_test_cases (optional; default: 15): Minimum number of test cases to generate
  • workspace_path (optional; default: current workspace): Absolute path to the codebase to analyse

What happens automatically:

  1. Detects workspace files
  2. Creates a ZIP package
  3. Estimates token usage and checks quota
  4. Sends codebase to AI for analysis (chunked if large)
  5. Generates test suites and test cases
  6. Saves everything to Testworthy
"Create a project from my current codebase called My App Tests"
"Create a project focused on payment processing and authentication"

create_tests_for_specific_feature

Generate targeted test cases for a specific feature or code change and add them directly to an existing project — without creating a new project.

Parameters:

  • project_id (required): Resolved automatically from the project name you provide
  • feature_description (required): Description of the feature to test
  • code_files (optional; default: []): List of file paths to include in the analysis
  • test_types (optional; default: ["unit", "integration"]): Types of tests to generate
  • minimum_test_cases (optional; default: 8): Minimum test cases to generate
  • suite_name (optional; default: first existing suite): Name of the suite to add test cases to
  • ai_provider (optional; default: "vertex"): AI provider
  • ai_model (optional; default: "gemini-3.1-pro-preview"): AI model name
"Generate tests for the new OAuth2 login feature in the 'E-commerce Suite' project"
"Add security tests for the user search API to the 'Backend API' project"

Test Case Management


list_test_suites

Get all test suites for a given project.

Parameters:

  • project_id (required): Resolved automatically from the project name
"List all test suites in the 'Mobile App' project"

list_test_cases

List test cases in a project with rich filtering. Supports filtering by suite, milestone, priority, type, automation status, and keyword search.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • suite_id (optional; default: all suites): Filter by a specific suite name
  • milestone_id (optional; default: all milestones): Filter by milestone
  • priority (optional; default: all): "low", "medium", "high", "critical"
  • type (optional; default: all): "functional", "regression", "smoke", "performance", "security", "usability", "acceptance", "integration", "unit"
  • assigned_to_id (optional; default: all): Filter by assigned user ID
  • has_automation_script (optional; default: all): true for automated only, false for manual only
  • search (optional): Search keyword in title and description
  • skip (optional; default: 0): Pagination offset
  • limit (optional; default: 100): Max results (up to 500)
"Show me all critical test cases in the 'E-commerce Suite' project"
"List automated test cases for the login suite"
"Find test cases related to payment"
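If a project holds more test cases than one page can return, the skip and limit parameters support straightforward pagination. A minimal Python sketch, where `list_test_cases` is a hypothetical callable wrapping the tool and returning one page as a list:

```python
def fetch_all_cases(list_test_cases, project_id, page_size=100):
    """Page through list_test_cases using skip/limit until a short
    (or empty) page signals the end of the result set."""
    cases, skip = [], 0
    while True:
        page = list_test_cases(project_id=project_id, skip=skip, limit=page_size)
        cases.extend(page)
        if len(page) < page_size:  # last page reached
            return cases
        skip += page_size
```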

get_test_case_details

Fetch the full details of a specific test case, including steps, preconditions, and expected results.

Parameters:

  • test_case_id (required): Resolved automatically from the test case number or title
"Show me the details for TC-42"

get_test_case_by_number

Find a test case using its human-readable number (e.g. TC-12) within a project.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • test_case_number (required): The test case number (e.g. "TC-12")
"Find test case TC-42 in the 'Mobile App' project"

heal_test_case

Use AI to fix, improve, or update a test case. This is the same AI healing feature available in the Testworthy UI.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • test_case_id (required): Resolved automatically from the test case number or title
  • healing_request (required): Description of what to fix or improve
  • ai_provider (optional; default: "vertex"): AI provider
  • ai_model (optional; default: "gemini-2.5-flash"): AI model
  • max_tokens (optional; default: 16000): Max tokens for AI response
"Fix TC-45 in the 'E-commerce Suite' project — the login flow changed to use OAuth2"
"Add edge cases for empty inputs to TC-89"
"Make TC-101 less flaky by adding proper waits"

save_healed_test_case

Persist the output of heal_test_case back to Testworthy. Always show the healed result to the user and ask for confirmation before calling this tool.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • test_case_id (required): Resolved automatically — carried over from the heal_test_case result
  • healed_data (required): The healed test case data (output from heal_test_case)
  • updated_aggregate (optional): Updated project aggregate
"Save the healed version of TC-45"

Automation Scripts


generate_automation_script

Use AI to generate a runnable automation script for a test case (e.g. Playwright, Cypress, Selenium).

Parameters:

  • project_id (required): Resolved automatically from the project name
  • test_case_id (required): Resolved automatically from the test case number or title
  • framework (optional; default: "playwright"): Target framework: "playwright", "selenium", "cypress"
  • language (optional; default: auto-detected): Programming language for the script
  • additional_instructions (optional; default: ""): Extra instructions for the AI
  • ai_provider (optional; default: "vertex"): AI provider
  • ai_model (optional; default: "gemini-2.5-flash"): AI model
  • max_tokens (optional; default: 16000): Maximum tokens for AI response

Returns: The generated script code and suggested filename.

"Generate a Playwright automation script for TC-89 in the 'E-commerce Suite' project"
"Create a Selenium Python script for the login test case with explicit waits"

save_automation_script

Save a generated automation script back to the test case in Testworthy. Always show the script to the user first and ask for confirmation.

Parameters:

  • test_case_id (required): Resolved automatically — carried over from the previous step
  • script (required): The automation script code to save
  • framework (required): Framework the script is written for
"Save this automation script to TC-89"

get_test_script_for_execution

Retrieve the saved automation script for a test case, ready for local execution.

Parameters:

  • test_case_id (required): Resolved automatically from the test case number or title
  • framework (optional; default: "pytest"): Framework: "playwright", "cypress", "jest", "pytest", "unittest", "selenium"
"Get the automation script for TC-23"
"Get the Playwright script for TC-45"

Test Execution


check_test_environment

Verify that your local environment has everything needed to run tests for a given framework. Run this before execute_test_case_directly to avoid silent failures.

Parameters:

  • framework (optional; default: "playwright"): Framework to check: "playwright", "cypress", "jest", "pytest", "unittest", "selenium"
  • working_directory (optional; default: "."): Directory where tests will be executed

Returns: is_ready, missing_requirements, and setup_commands to fix any issues.

"Check if my environment is ready for Playwright tests"
"Is pytest set up correctly in the ./tests directory?"
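A client-side sketch of acting on that report before running anything. It assumes the return value is a dict with exactly the three keys listed above, and `check_env` is a hypothetical wrapper around the tool:

```python
def ensure_ready(check_env, framework="playwright"):
    """Gate local test execution on check_test_environment's report.
    Assumed report keys: is_ready, missing_requirements, setup_commands."""
    report = check_env(framework=framework)
    if report["is_ready"]:
        return True
    # Surface what is missing and how to fix it, then refuse to run
    print("Missing:", ", ".join(report["missing_requirements"]))
    for cmd in report["setup_commands"]:
        print("  fix:", cmd)
    return False
```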

execute_test_case_directly

Start executing an automation script for a test case in the background and immediately return a job ID to poll. Does not block — use get_test_execution_status to track progress.

Important: After execution completes, always show the results to the user and ask before reporting back to Testworthy. Never auto-report.

Parameters:

  • test_case_id (required): Resolved automatically from the test case number or title
  • framework (optional; default: "pytest"): Framework: "playwright", "cypress", "jest", "pytest", "unittest", "selenium"
  • working_directory (optional; default: "."): Directory to execute the test in
  • timeout (optional; default: 300): Seconds before the process is killed

Returns: job_id to use with get_test_execution_status.

"Run TC-23 using Playwright"
"Execute the automation script for the checkout test in pytest"

get_test_execution_status

Poll the status of a background test execution started by execute_test_case_directly.

Parameters:

  • job_id (required): Job ID returned automatically by execute_test_case_directly

Status values: "queued" → "running" → "passed" / "failed" / "error"

"Is my test done running?"
"Check the status of the last execution"
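These two execution tools pair naturally: start the job, then poll until the status settles. A minimal Python sketch, where `get_status` is a hypothetical wrapper around get_test_execution_status that returns just the status string:

```python
import time

def wait_for_result(get_status, job_id, interval=5, timeout=300):
    """Poll a background job until it leaves the "queued"/"running"
    states, then return the terminal status string."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status not in ("queued", "running"):
            return status  # "passed", "failed", or "error"
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```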

register_client_execution

Create a new Test Run in Testworthy and register test cases for execution.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • test_case_ids (required): Resolved automatically from the test case numbers or titles
  • run_name (optional; default: "MCP Execution"): Name for the test run
  • run_description (optional): Description for the test run
"Create a new test run for TC-101, TC-102, and TC-103 in the 'Mobile App' project"

update_execution_progress

Update the status of an individual test execution within a run.

Parameters:

  • execution_id (required): Carried over automatically from register_client_execution
  • status (required): New status: "running", "passed", "failed", "skipped"
  • notes (optional): Notes about the execution
  • screenshot_data (optional): Base64-encoded screenshot to attach
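The screenshot_data parameter expects base64-encoded image data. A small Python helper for producing it; the plain-base64 shape (no data: URL prefix) is an assumption, not a documented contract:

```python
import base64

def screenshot_payload(path):
    """Read a screenshot file and return it as a base64 string,
    suitable (by assumption) for update_execution_progress's
    screenshot_data parameter."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```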

complete_client_execution

Mark a test run as fully completed with a final status and optional summary.

Parameters:

  • run_id (required): Carried over automatically from register_client_execution
  • final_status (required): Final overall status of the run
  • summary_notes (optional): Summary notes for the entire run
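Together, register_client_execution, update_execution_progress, and complete_client_execution form a manual reporting lifecycle: open a run, update each execution, close the run. A Python sketch under assumed response shapes; the run_id and execution_ids keys, and the three callables standing in for the tools, are illustrative only:

```python
def run_and_report(register, update, complete, project_id, case_ids, results):
    """Sketch of the register -> update -> complete lifecycle.
    `results` is one status string per test case, in order."""
    run = register(project_id=project_id, test_case_ids=case_ids,
                   run_name="MCP Execution")
    overall = "passed"
    for execution_id, status in zip(run["execution_ids"], results):
        update(execution_id=execution_id, status=status)
        if status == "failed":
            overall = "failed"  # any failure fails the run
    complete(run_id=run["run_id"], final_status=overall)
    return overall
```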

report_test_results_to_testworthy

Report completed local test execution results back to Testworthy. Creates a Test Run, attaches results and screenshots, and closes the run.

Only call this after the user explicitly confirms they want to report results.

Parameters:

  • job_id (required): Carried over automatically from execute_test_case_directly
  • project_id (required): Resolved automatically from the project name
  • run_name (optional; default: "MCP Client Execution"): Name for the test run
  • run_description (optional): Description for the run
  • screenshot_path (optional; default: auto-discovered): Path to a screenshot file to attach
"Yes, report the results back to the 'Mobile App' project"

Test Runs & History


list_project_runs

List all test runs in a project, with optional status filtering.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • skip (optional; default: 0): Pagination offset
  • limit (optional; default: 50): Max runs to return
  • status_filter (optional; default: all): Filter by status: "draft", "in-progress", "completed", "archived"
"Show all test runs for the 'E-commerce Suite' project"
"List completed runs for the 'Backend API' project"

get_run_test_cases

Get all test cases that belong to an existing test run, along with their current execution status. Useful for re-running specific cases (e.g. only failed ones).

Parameters:

  • run_id (required): Resolved automatically from the run name or "latest run"
  • project_id (required): Resolved automatically from the project name
"Show me all failed test cases in the latest run for 'E-commerce Suite'"
"What tests are in the 'Regression Sprint 4' run?"

get_test_executions

Get execution history for a project.

Parameters:

  • project_id (required): Resolved automatically from the project name
  • limit (optional; default: 50): Max executions to return
"Show me the last 20 test executions for the 'Mobile App' project"

get_test_configuration

Get test configuration and environment settings for a project.

Parameters:

  • project_id (required): Resolved automatically from the project name
"What's the test configuration for the 'Backend API' project?"

Example Workflows

Create a project and generate tests from your codebase

"Create a Testworthy project from my current codebase called E-commerce Suite, focused on checkout and authentication"

The AI assistant will call create_project_from_current_codebase and automatically handle everything: scanning files, packaging, AI analysis, and saving the test suite.


Find and run a failing test

1. "List all failed test cases in the 'E-commerce Suite' project"   → list_test_cases
2. "Run TC-89 using Playwright" → execute_test_case_directly
3. "Is the test done?" → get_test_execution_status
4. "Yes, report the results back to Testworthy" → report_test_results_to_testworthy

Generate and save an automation script

1. "Generate a Playwright script for TC-45"   → generate_automation_script
2. "Looks good, save it" → save_automation_script
3. "Now run it" → execute_test_case_directly

Re-run all failed tests from a run

1. "Show me test runs for the 'E-commerce Suite' project"    → list_project_runs
2. "Get the failed test cases from the latest run" → get_run_test_cases
3. "Re-run the failed ones" → execute_test_case_directly (for each)
4. "Report results back" → report_test_results_to_testworthy
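The filtering in step 2 can be sketched in Python, assuming get_run_test_cases returns entries with test_case_id and status fields (an assumption about the response shape, not documented above):

```python
def failed_case_ids(run_cases):
    """Keep only the failed entries from a run's test case list,
    ready to feed back into execute_test_case_directly one by one."""
    return [c["test_case_id"] for c in run_cases if c["status"] == "failed"]
```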

Add tests for a new feature

"Add 10 security and integration tests for the new OAuth2 login feature
to the 'Backend API' project, using auth/oauth_handler.py and auth/social_providers.py"

The AI assistant will call create_tests_for_specific_feature and add the test cases directly to the existing project.


Fix a broken test with AI

"Heal TC-45 in the 'E-commerce Suite' project — the login flow changed and the steps need updating"

The AI assistant will call heal_test_case, show you the result, and then call save_healed_test_case once you confirm.


IDE Setup Guides

VS Code (Copilot)

Add to your VS Code settings.json:

{
  "mcp": {
    "servers": {
      "testworthy": {
        "type": "http",
        "url": "<URL will be provided at launch>"
      }
    }
  }
}

Cursor

Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "testworthy": {
      "type": "http",
      "url": "<URL will be provided at launch>"
    }
  }
}

Windsurf

Add to ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "testworthy": {
      "type": "http",
      "url": "<URL will be provided at launch>"
    }
  }
}

Note: All IDEs will use the same URL. On first connection, your browser will open automatically for a one-time login. No API keys, no tokens, no local setup required.


All Tools — Quick Reference

Diagnostics
  • test_connection: Check backend connectivity and auth status

Projects
  • list_projects: List all accessible projects
  • create_project_from_current_codebase: Auto-generate a full test suite from your codebase
  • create_tests_for_specific_feature: Add targeted tests for a feature to an existing project

Test Cases
  • list_test_suites: List test suites in a project
  • list_test_cases: List and filter test cases in a project
  • get_test_case_details: Get full details for a specific test case
  • get_test_case_by_number: Find a test case by its TC number
  • heal_test_case: Use AI to fix or improve a test case
  • save_healed_test_case: Save the healed test case back to Testworthy

Automation
  • generate_automation_script: Generate a runnable automation script with AI
  • save_automation_script: Save a generated script to a test case
  • get_test_script_for_execution: Retrieve a saved automation script

Execution
  • check_test_environment: Verify local environment is ready for a framework
  • execute_test_case_directly: Run a test script locally in the background
  • get_test_execution_status: Poll the status of a running test job
  • register_client_execution: Create a Test Run in Testworthy
  • update_execution_progress: Update a test execution's status
  • complete_client_execution: Mark a test run as completed
  • report_test_results_to_testworthy: Report local results back to Testworthy

History
  • list_project_runs: List test runs in a project
  • get_run_test_cases: Get test cases and statuses from a specific run
  • get_test_executions: Get execution history for a project

Configuration
  • get_test_configuration: Get test config and environment settings