AI-Assisted Testing: Deploying the AI Agents
Software testing has always involved monotonous tasks: writing, reviewing, and automating test cases.
What’s more, projects are becoming increasingly complex, ever more code is being written, and ever less time is available to prepare and run test cases. These time constraints often compromise the quality of test cases and make them difficult to automate.

AI-Assisted Testing: Does It Work?
No wonder that when AI appeared, many of us wondered: could it help testers in their daily work? (Spoiler: yes, it can.)
Initially, we used generative AI. It was fine for generating ideas and test case outlines, but its limitations soon became apparent:
- it does not work independently,
- it works reactively (rather than proactively),
- it focuses only on subtasks,
- it lacks a systematic approach,
- and it cannot be integrated into large enterprise systems.
We had to move on: we created our first test team consisting of AI agents.
Who Are These AI Agents and What Do They Do?
Our AI testing team is made up of the following members:
- Test designer – writes general test cases
- Test case writer – writes specific test cases
- Test case reviewer – checks the test cases
Our AI agents no longer work on a question-and-answer basis. They respond to events, integrate into workflows, and solve tasks independently. They are able to work in a goal-oriented manner.
They use familiar tools in an integrated way – Excel, Jira, Confluence, Figma, Selenium, Playwright, etc.
The agents are able to collaborate not only on their own, but also through a workflow coordinated by a JavaScript-based agent orchestrator. The key steps of the workflow are as follows:
- the agents read the data from a complex description,
- structure it,
- and write the test cases in a standard format, which makes them suitable for automated execution.
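To make this concrete, here is a deliberately simplified sketch of the orchestration idea in JavaScript. The agent objects and function names are hypothetical stand-ins for the LLM-backed agents, not our production orchestrator:

```javascript
// A minimal, hypothetical sketch of the orchestration idea.
// The three agents are stubs standing in for LLM-backed agents.

const testDesigner = {
  // Derives high-level test ideas from the raw description
  async outline(spec) {
    return spec.requirements.map((req) => ({ requirement: req, idea: `Verify: ${req}` }));
  },
};

const testCaseWriter = {
  // Turns each outline item into a concrete, standardized test case
  async write(outline) {
    return outline.map((item, i) => ({
      id: `TC-${i + 1}`,
      requirement: item.requirement,
      steps: [item.idea],
      expected: "Behaves as specified",
    }));
  },
};

const testCaseReviewer = {
  // Checks that every requirement is covered by at least one case
  async review(testCases, spec) {
    const covered = new Set(testCases.map((tc) => tc.requirement));
    const missing = spec.requirements.filter((req) => !covered.has(req));
    return { approved: missing.length === 0, missing };
  },
};

// The orchestrator chains the agents and returns cases ready for automation
async function runWorkflow(spec) {
  const outline = await testDesigner.outline(spec);
  const testCases = await testCaseWriter.write(outline);
  const verdict = await testCaseReviewer.review(testCases, spec);
  if (!verdict.approved) {
    throw new Error(`Uncovered requirements: ${verdict.missing.join(", ")}`);
  }
  return testCases;
}

// Example run with a toy specification
runWorkflow({ requirements: ["login with valid token", "reject expired token"] })
  .then((cases) => console.log(JSON.stringify(cases, null, 2)));
```

In the real workflow, each stub calls an LLM-backed agent, and the reviewer’s findings are fed back to the writer for revision rather than raising an error.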
What Have We Learned?
Based on the specifications, the agents created test cases in a uniform, standardized format, detailed enough for automation and covering all requirements.
The process also surfaced test cases that are easy to overlook in manual test design, such as those covering API calls and token handling.
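As an illustration, a generated case of this kind could be automated roughly like the following Playwright sketch. The endpoint and token are invented for the example, and a baseURL is assumed in the Playwright config:

```javascript
// Hypothetical example of a generated API-level test case.
// The endpoint and token values are illustrative; baseURL is assumed
// to be set in playwright.config.
const { test, expect } = require('@playwright/test');

test('protected endpoint rejects an expired access token', async ({ request }) => {
  // Call a protected endpoint with a deliberately expired token
  const response = await request.get('/api/orders', {
    headers: { Authorization: 'Bearer expired-token-for-testing' },
  });

  // The service must answer 401 Unauthorized rather than return data
  expect(response.status()).toBe(401);
});
```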
We saw that although AI takes on a lot of work, expert supervision is still essential. In addition, the role of (human) testers is also changing: the focus is shifting from writing test cases to prompting and supervising the process.
And perhaps most importantly, we have seen time savings of 20-30% in test case generation alone. Although every company, every development project, and every testing process is different, the introduction of AI-assisted testing can result in even greater time savings when applied to the entire process. This can also lead to significant cost savings.
What the Future Holds…
Our goal is not to build an “out-of-the-box” product. We are developing competencies and methodologies that let us offer flexible solutions tailored to a given environment and adaptable to any large enterprise process.
This is our way forward – and this is what makes AI truly valuable in software testing.
Does your company also struggle to meet testing deadlines? It would be worth a chat over a coffee.

