Test Design & Organization
This section provides a comprehensive guide to organizing and managing your testing activities within the platform. It covers hierarchical organization, test case management, custom fields, automation features, and AI-powered capabilities. The platform is designed to make test management intuitive, collaborative, and efficient, whether you are working alone or as part of a team.
Hierarchical Test Organization
Testworthy provides multiple levels of organization to help you structure your testing in the most logical way for your project.
Test Suites
Test suites are used to group related test cases together, making it easier to manage and execute tests that share a common purpose or area of functionality. Organizing test cases into suites helps you keep your testing structured and ensures nothing is overlooked during test execution.
How to Organize Test Cases into Suites
- Creating a Test Suite: You can create a new test suite by providing a name and an optional description. Suites are always associated with a specific project.
- Editing a Test Suite: Existing suites can be updated to change their name or description as your project evolves.
- Deleting a Test Suite: If a suite is no longer needed, you can delete it. The system will prompt you to confirm this action to prevent accidental loss.
- Viewing Test Suites: All suites for a project are listed in the project dashboard. You can quickly see which test cases belong to each suite.
- Assigning Test Cases to Suites: When creating or editing a test case, you select the suite it belongs to. Each test case can belong to only one suite, ensuring clear organization.
Tip: Use suites to mirror your application's modules, features, or functional areas for maximum clarity.
Hierarchical Sections
Beyond test suites, Testworthy supports nested sections that allow you to create multi-level hierarchies within your test organization.
Creating and Managing Sections
- Nested Structure: Sections can have parent-child relationships, allowing you to create deeply nested organizational structures
- Display Ordering: Manually control the sort order of sections within their parent container
- Section Assignment: Test cases can be assigned to multiple sections for flexible organization
- Depth Tracking: The system automatically tracks the nesting level of each section
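To make the parent-child model above concrete, here is a minimal sketch of how a section record and its depth could be represented. The `Section` shape and `depthOf` helper are illustrative assumptions, not the platform's actual schema.

```typescript
// Hypothetical shape of a section record (illustrative, not the real schema).
interface Section {
  id: number;
  name: string;
  parentId: number | null; // null for top-level sections
  displayOrder: number;    // manual sort position within the parent
  depth: number;           // nesting level, tracked automatically
}

// One way depth could be derived: walk up the parent chain.
function depthOf(section: Section, all: Map<number, Section>): number {
  let depth = 0;
  let current = section;
  while (current.parentId !== null) {
    current = all.get(current.parentId)!;
    depth++;
  }
  return depth;
}
```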
How to Use Sections
- Within a test suite, create sections to group related test cases by feature area or workflow
- Use parent sections for major features and child sections for specific components
- Assign test cases to the most specific section that applies
- Use the hierarchy view to navigate large test structures efficiently
Example Structure:

```
Test Suite: User Management
├── Section: Authentication
│   ├── Login Tests
│   └── Password Reset Tests
└── Section: User Profile
    ├── Profile Creation
    └── Profile Updates
```
Custom Fields (Data Model)
The platform's data model supports custom metadata fields for test cases with the following field types:
Supported Field Types
- Text Fields: Free-form text input for descriptions or notes
- Number Fields: Numeric values for effort estimation or priority scoring
- Date Fields: Dates for deadlines, created dates, or milestone tracking
- Select Fields: Single-choice dropdowns with predefined options
- Multi-Select Fields: Multiple-choice selections for tags or categories
- Boolean Fields: True/false checkboxes for flags or status indicators
Field Configuration
- Per-Project Configuration: Custom fields are defined at the project level, allowing different field sets per project
- Required Fields: Fields can be marked as required to ensure important metadata is always captured
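As an illustration of how these field types and settings could fit together, the sketch below models them as a discriminated union. All type and property names here are hypothetical, chosen only to mirror the list above.

```typescript
// Hypothetical custom-field definitions (names are illustrative).
type CustomFieldDefinition =
  | { kind: "text"; name: string; required: boolean }
  | { kind: "number"; name: string; required: boolean }
  | { kind: "date"; name: string; required: boolean }
  | { kind: "select"; name: string; required: boolean; options: string[] }
  | { kind: "multiSelect"; name: string; required: boolean; options: string[] }
  | { kind: "boolean"; name: string; required: boolean };

// Per-project configuration: each project carries its own field set.
interface ProjectFieldConfig {
  projectId: number;
  fields: CustomFieldDefinition[];
}

// Example: a project that requires an effort estimate on every test case.
const config: ProjectFieldConfig = {
  projectId: 42,
  fields: [
    { kind: "number", name: "Effort (hours)", required: true },
    { kind: "select", name: "Browser", required: false, options: ["Chrome", "Firefox", "Safari"] },
  ],
};
```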
Test Cases
Test cases are the core of your testing process. Each test case describes a specific scenario to validate, including the steps to execute, the expected outcome, and other important details.
Creating Test Cases
- Start from the Project Dashboard: Navigate to the "Cases" tab within your project.
- Click "Add Test Case": This opens a modal form where you enter all details.
- Fill in Required Fields:
  - Title: A concise name for the test case.
  - Description: A detailed explanation of what the test covers.
  - Steps: Step-by-step instructions for executing the test.
  - Expected Result: What should happen if the test passes.
  - Priority: Choose from predefined levels (e.g., Critical, High, Medium, Low) to indicate the importance.
  - Type: Specify the type (e.g., Functional, Regression, Smoke).
  - Suite: Assign the test case to a suite.
  - Milestone (optional): Link to a project milestone if relevant.
  - Assignee (optional): Assign responsibility to a team member.
- Save the Test Case: Once saved, the test case appears in the suite and project lists.
Viewing and Editing Test Cases
- View Details: Click on a test case to see all its information, including steps, history, comments, attachments, and related defects.
- Edit: Use the "Edit" button to update any field. Changes are tracked as new versions (see Test Case Lifecycle).
- Attachments: Upload files such as screenshots or logs to a test case for better documentation.
- Comments: Add comments to discuss issues, clarify steps, or provide feedback. All comments are tracked with the author's name and timestamp.
- Activity Log: Review a timeline of all actions taken on the test case, including edits, comments, and status changes.
Deleting Test Cases
- Delete Option: Test cases can be deleted if they are no longer relevant. The system will ask for confirmation before deletion.
- Impact: Deleting a test case also removes its history, comments, attachments, and execution records. This action cannot be undone.
Tip: Use the filtering and search options to quickly find test cases by title, priority, suite, milestone, or assignee.
Test Case Lifecycle
Understanding the lifecycle of a test case helps you maintain high-quality documentation and ensures traceability of changes.
Versioning and History
- Automatic Versioning: Every time you create or update a test case, the system saves a new version. This includes changes to the title, description, steps, expected result, and other fields.
- Viewing History: Access the "Versions" dropdown in the test case details to see all previous versions. Each version shows who made the change and when.
- Restoring Previous Versions: If you need to revert to an earlier version, simply select it and choose "Restore." The system will create a new version using the old data, preserving the full history.
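Restore is append-only: reverting copies old data forward rather than rewriting history. A minimal sketch of that rule, assuming a hypothetical in-memory version list:

```typescript
// Hypothetical version record (illustrative, not the real schema).
interface TestCaseVersion { version: number; title: string; steps: string; savedAt: Date }

// Restore never deletes: it appends a new version cloned from an old one,
// so the full history is preserved.
function restore(history: TestCaseVersion[], versionToRestore: number): TestCaseVersion {
  const source = history.find(v => v.version === versionToRestore);
  if (!source) throw new Error(`Version ${versionToRestore} not found`);
  const next: TestCaseVersion = { ...source, version: history.length + 1, savedAt: new Date() };
  history.push(next);
  return next;
}
```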
Managing Attachments and Comments
- Attachments: Upload files directly to a test case. All attachments are organized and can be downloaded or deleted as needed. Supported file types include images, documents, and logs.
- Comments: Team members can add comments to discuss the test case, ask questions, or provide feedback. Comments are timestamped and attributed to the author.
- Activity Log: Every action—such as edits, comments, status changes, or attachments—is recorded in the activity log, providing a complete audit trail.
Tip: Use comments and attachments to keep all relevant information about a test case in one place, making collaboration easier.
AI-Powered Test Features
Leverage advanced AI to accelerate test creation and maintenance with intelligent automation.
Automated Test Generation
- From Code Upload: Upload ZIP files containing your codebase for automated project structure generation
- Smart Organization: AI automatically creates logical test suites and sections based on code analysis
- Comprehensive Coverage: Generate tests for functional, edge case, security, and performance scenarios
Test Case Healing
- AI-Powered Improvement: Analyze existing test cases and receive intelligent improvement suggestions
- Interactive Editor: Review and apply AI suggestions with full control over changes
- Version Control: All AI healing creates new versions, preserving the original test case
- Contextual Analysis: AI considers the entire project context when suggesting improvements
- Quality Enhancement: Improve test clarity, coverage, and maintainability
Automation Script Generation
Generate executable automation scripts directly from your test cases using AI, with full control over framework, language, and AI provider.
Supported Frameworks & Languages
| Framework | Supported Languages | Default Language |
|---|---|---|
| Playwright | TypeScript, JavaScript, Python, Java, C# | TypeScript |
| Selenium | Python, JavaScript, Java, C#, Ruby, Kotlin | Python |
| Cypress | JavaScript, TypeScript | JavaScript |
AI Provider & Model Selection
Choose the AI provider and model used for generation:
- Vertex AI (default) — Gemini 3.1 Pro Preview, Gemini 3.1 Flash Lite Preview, Gemini 2.5 Pro, Gemini 2.5 Flash
- Anthropic (via Vertex AI) — Claude 4.5 Haiku, Claude 4.5 Sonnet
- Llama (via Vertex AI) — Llama 4 Maverick 17B, Llama 4 Scout 17B
- Llama (via OpenRouter) — Llama 4 Maverick, Llama 4 Scout
Default max tokens: 16,000. Temperature is set to 0.1 for consistent, near-deterministic output.
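For illustration only, a generation request could carry settings like the following. The payload shape and field names are assumptions; only the defaults (temperature 0.1, 16,000 max tokens) and the provider options come from this page.

```typescript
// Hypothetical request payload for script generation (shape is illustrative).
interface GenerationRequest {
  testCaseId: number;
  framework: "playwright" | "selenium" | "cypress";
  language: string;                 // must be valid for the chosen framework
  provider: "vertex" | "anthropic" | "llama-vertex" | "llama-openrouter";
  model: string;
  maxTokens: number;                // default: 16000
  temperature: number;              // default: 0.1 for consistent output
  additionalInstructions?: string;  // e.g., "use Page Object Model pattern"
}

const defaults: Pick<GenerationRequest, "provider" | "maxTokens" | "temperature"> = {
  provider: "vertex",
  maxTokens: 16000,
  temperature: 0.1,
};
```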
Generated Script Features
- Smart Selectors: Prioritized selector strategy — role-based → test-id → label → placeholder → text → CSS (last resort)
- Auto-Waiting: Framework-appropriate waiting strategies (Playwright auto-wait, explicit waits for navigation/async)
- Assertions: Every expected result in the test case gets a corresponding assertion
- Error Handling: Comprehensive error handling with screenshot capture on failure
- Parameterization: Test data variables with environment configuration support
- Framework-Specific Imports: Correct imports and syntax for each framework and language
- Setup Commands: Dependency installation commands returned with the script (e.g., `npm install @playwright/test`)
- Execution Command: Ready-to-run command provided (e.g., `npx playwright test`, `pytest test_*.py`, `npx cypress run`)
- File Extension: Correct extension per framework/language (`.spec.ts`, `.cy.js`, `test_*.py`, `*Test.java`, `*Tests.cs`, etc.)
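To show what these features look like together, here is a hypothetical Playwright/TypeScript script for an invented login test case. The selectors, URLs, and credentials are assumptions about a fictional application; actual generated output depends on your test case's steps and expected results.

```typescript
import { test, expect } from '@playwright/test';

// Test data as parameterized variables with environment overrides.
const BASE_URL = process.env.BASE_URL ?? 'https://example.com';
const USERNAME = process.env.TEST_USERNAME ?? 'demo_user';
const PASSWORD = process.env.TEST_PASSWORD ?? 'demo_pass';

test('user can log in with valid credentials', async ({ page }) => {
  await page.goto(`${BASE_URL}/login`);

  // Label- and role-based selectors first, per the prioritized selector strategy.
  await page.getByLabel('Username').fill(USERNAME);
  await page.getByLabel('Password').fill(PASSWORD);
  await page.getByRole('button', { name: 'Sign in' }).click();

  // One assertion per expected result; Playwright auto-waits on both
  // actions and assertions, so no explicit sleeps are needed.
  await expect(page).toHaveURL(/dashboard/);
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});

// Screenshot capture on failure is typically configured once in
// playwright.config.ts:  use: { screenshot: 'only-on-failure' }
```

Setup command for this example would be `npm install @playwright/test`, and the execution command `npx playwright test`, matching the table above.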
How to Generate Scripts
- Open a test case with detailed steps and expected results
- Navigate to the Automation tab
- Click Generate Automation Script to open the generation dialog
- Select your preferred framework (Playwright, Selenium, or Cypress)
- Choose the target language from the dropdown (options vary by framework)
- Optionally add additional instructions to guide the AI (e.g., "use Page Object Model pattern")
- Select the AI provider and model if you want to override the defaults
- Click Generate — the AI processes the test case with project context and returns the script
- Review the generated script in the editor, along with:
  - Setup commands (dependency installation)
  - Execution command (how to run it)
  - Dependencies list
  - Recommended framework version
- Edit the script manually if needed
- Click Save Script to persist it (creates a version snapshot for history tracking)
Script Management
- One Script Per Framework: Each test case can store one script per framework, so Playwright, Selenium, and Cypress scripts can coexist on the same test case
- Save & Version: Saving a script creates a
TestCaseVersionsnapshot, so you can view and compare historical versions - Edit Mode: Manually adjust generated scripts before or after saving
- Copy to Clipboard: Quick copy of the full script text
- Delete Per Framework: Remove a script for one framework without affecting others
- Filter View: Filter saved scripts by framework (All / Playwright / Selenium / Cypress)
Script Execution via Runner
Generated scripts can be executed directly through the Testworthy Runner, which automatically detects the framework and language:
- Writes the script to a temporary file
- Runs the appropriate command (e.g., `npx playwright test`, `pytest`, `npx cypress run`, `dotnet script`)
- Captures stdout/stderr output
- Returns pass/fail result with exit code
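A minimal sketch of that flow in TypeScript (Node.js), assuming a simplified runner. The function below is illustrative; the actual Testworthy Runner's internals may differ.

```typescript
import { spawnSync } from 'node:child_process';
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Hypothetical, simplified runner: write the script to a temp file,
// run the framework's command against it, and report pass/fail.
function runScript(script: string, fileName: string, command: string, args: string[]) {
  const dir = mkdtempSync(join(tmpdir(), 'runner-'));
  const file = join(dir, fileName);
  writeFileSync(file, script);

  // Capture stdout/stderr; a non-zero exit code means failure.
  const result = spawnSync(command, [...args, file], { encoding: 'utf8' });
  return {
    passed: result.status === 0,
    exitCode: result.status,
    stdout: result.stdout,
    stderr: result.stderr,
  };
}

// Example (hypothetical usage): run a generated Playwright spec.
// runScript(generatedScript, 'login.spec.ts', 'npx', ['playwright', 'test']);
```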
Credit Usage
Script generation consumes AI credits tracked as `generate_automation_script` operations. Credit usage (input/output token counts, estimated USD cost, AI provider, and model) is logged per generation. Generation is blocked if the user's credit quota is exhausted.
Credit Management
AI features consume credits based on usage. The platform uses a dual-level quota system to manage consumption.
- Dual-Level Quotas: An organization-level pool (350–12,000 credits/month, depending on subscription tier) that tenant owners distribute as individual allocations to users
- Operation Tracking: Each AI operation (project import, chunk processing, test case healing, automation script generation, GitHub integration analysis) is recorded with input/output token counts, AI provider, model used, and estimated USD cost
- Monthly Reset: Credit quotas reset on the 1st of each month; purchased credits persist across resets
- Purchased Credits: Buy additional credit packages via Stripe as one-time top-ups. Purchased credits expire after 365 days and are deducted FIFO; the monthly quota is consumed first, and purchased credits are used only after the monthly allocation is exhausted (see the sketch after this list)
- Quota Enforcement: Operations return a 429 error when both monthly and purchased credits are exhausted
- Cost Estimation: Preview estimated credit costs before running expensive operations like codebase import
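As an illustration of the deduction order, the sketch below applies the rules described above (monthly quota first, then unexpired purchased packages oldest-first, with a 429 when both are exhausted). The data shapes are hypothetical.

```typescript
// Hypothetical credit balances (shapes are illustrative, not the real schema).
interface PurchasedPackage { purchasedAt: Date; remaining: number }
interface Balance { monthlyRemaining: number; packages: PurchasedPackage[] }

const DAY_MS = 24 * 60 * 60 * 1000;

// Deduct credits: monthly quota first, then unexpired packages oldest-first.
// Returns false (surfaced as a 429 upstream) when both sources are exhausted.
function deduct(balance: Balance, cost: number, now = new Date()): boolean {
  const fromMonthly = Math.min(balance.monthlyRemaining, cost);
  let rest = cost - fromMonthly;

  const valid = balance.packages
    .filter(p => now.getTime() - p.purchasedAt.getTime() < 365 * DAY_MS) // 365-day expiry
    .sort((a, b) => a.purchasedAt.getTime() - b.purchasedAt.getTime());  // FIFO

  if (rest > valid.reduce((sum, p) => sum + p.remaining, 0)) return false;

  balance.monthlyRemaining -= fromMonthly;
  for (const p of valid) {
    const take = Math.min(p.remaining, rest);
    p.remaining -= take;
    rest -= take;
    if (rest === 0) break;
  }
  return true;
}
```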
Test Plans
Test plans are used to organize a set of test cases (and optionally suites) for a specific testing effort, such as a release, sprint, or feature validation. Plans help you track progress, ensure coverage, and manage execution efficiently.
Creating and Managing Test Plans
- Create a Test Plan: From the project dashboard, go to the "Plans" tab and click "Add Test Plan." Enter a name and description.
- Add Test Cases or Suites: After creating a plan, you can add individual test cases or entire suites. Use the "Add Cases" button to select from available cases.
- Remove Cases: If a test case is no longer needed in the plan, you can remove it at any time.
- Reorder Cases: Drag and drop test cases within the plan to set the desired execution order. This helps testers follow a logical sequence during execution.
- Bulk Add: You can add multiple test cases at once to a plan, streamlining setup for large test efforts.
- Edit Plan Details: Update the plan's name or description as needed.
- Delete a Plan: Plans can be deleted if they are no longer required. This does not delete the underlying test cases.
Using Test Plans in Execution
- Start a Test Run from a Plan: Once a plan is ready, you can initiate a test run based on the plan. This creates a new run with all the selected test cases, preserving their order.
- Track Progress: During execution, the system tracks the status of each test case (e.g., Passed, Failed, Blocked) and provides progress updates.
- Reporting: After completion, you can generate reports based on the plan, showing coverage, pass rates, and detailed results.
Tip: Use test plans to organize regression suites, release validation, or targeted testing for new features. Plans make it easy to repeat consistent test cycles and measure improvement over time.
By following these guidelines, you can ensure your testing is well-organized, traceable, and effective. The platform provides all the tools you need to manage the full lifecycle of your tests, from creation and grouping to execution and reporting. If you need more detailed instructions on any feature, refer to the specific help sections or reach out to support.