Role: Quality Assurance Engineer (AI-Native, Playwright + Claude Code)
Location: Remote (Work from Home)
Employment type: Full-time
Profile overview:
We are hiring a QA Engineer to drive automation-first, AI-assisted testing across our backend platform. You’ll spend most of your time building Playwright suites, designing AI agents, and shaping what quality looks like in an agent-driven SDLC, with manual testing reserved for exploratory work and edge cases that automation can’t yet reach.
Responsibilities:
● Build and own automated test suites using Playwright for end-to-end, UI, and API testing across backend services and platform workflows.
● Use Claude Code (agents, skills, MCP servers) as a daily driver — generating test cases, test data, and edge-case scenarios from requirements, PRs, and production signals — shrinking test authoring time from days to hours.
● Design AI agents that own slices of the SDLC — from PR-triggered test design to autonomous regression triage and release sign-off.
● Build custom Claude skills and MCP servers that integrate QA workflows with Jira, CI/CD pipelines, test data stores, ScyllaDB/Kafka inspection tools, and observability platforms.
● Apply AI-driven defect prediction, root cause analysis, and flaky test detection to focus effort where it matters most.
● Embed automated quality gates into CI/CD (GitHub Actions) — including AI-powered code review, test impact analysis, and regression triggers — to shift quality left.
● Validate datasets using SQL across transactional and analytical systems.
Requirements:
● Minimum 5 years of professional experience as a QA Engineer.
● Strong hands-on experience with Playwright (or equivalent modern framework — Cypress, Selenium) for UI and API automation, including Page Object Model and TestNG (or similar) framework design.
● Proficiency in Java and JavaScript/TypeScript, with solid API testing skills using Postman/Insomnia.
● Strong SQL for data validation, plus working knowledge of NoSQL databases (MongoDB, ScyllaDB, or DynamoDB).
● Working understanding of API/backend test lifecycles and exposure to the AWS cloud platform.
● CI/CD experience with GitHub Actions, Jenkins, or equivalent.
● Working fluency in LLM-powered development workflows (Claude Code, Cursor, Copilot, or similar) for test generation, debugging, and review.
● Grasp of AI/ML fundamentals — prompt engineering, context management, and evaluating LLM outputs for correctness and reliability.
● Agile/Scrum experience and clear async communication for remote, cross-functional work.
Preferred skillsets:
● Built MCP servers or custom Claude skills/tools to extend AI agent capabilities.
● Exposure to agentic test orchestration — autonomous agents that plan, execute, and adapt test runs based on code or requirement changes.
● Experience measuring AI-assisted productivity metrics (test creation velocity, coverage uplift, defect escape rate).