“We only have one test environment, so verification doesn’t move forward” ── Where does that problem really come from?
“We only have one pre-release test environment, so testing is hard to do and releases take a long time.”
Have you heard this in your own team?
We hear it a lot in the field, and every time it leaves us thinking, “We have to do something about this.”
Using this single sentence as a starting point, let’s dig in together from the team’s perspective: why does this problem occur, and what can we do to make things even a little better?
What does “we only have one test environment” actually look like?
This is a real story from one team.
Development was going smoothly, and they were finally entering the phase of final checks before release.
However, there was only one staging environment available.
Multiple features being developed in parallel interfered with each other, and verification didn’t go well.
On top of that, the same environment was also used by the QA team, so they couldn’t just deploy whenever they wanted.
They resorted to running checks late at night or early in the morning, or simply waited around for someone else’s work to finish. As this “waiting in line” for the environment became part of everyday life, one engineer casually muttered the line from the beginning:
Verification doesn’t move forward because we only have one test environment.
The people involved and their respective positions
This problem actually isn’t as simple as “just add one more environment and it’s solved.”
People in various roles are facing this situation from their own perspectives.
Let’s briefly organize the positions and viewpoints of the stakeholders here.
👨‍💻 Engineer (development)
Role: The central role that writes product code, including implementing new features, fixing bugs, and doing verification.
What they’re thinking:
“Even if I fix the program, I can’t check it right away. I’m stuck waiting my turn for the environment, and I can’t finish my tasks…”
🔍 QA Engineer (quality assurance)
Role: A specialist who checks before release whether features and UI behave according to the specifications and finds defects.
What they’re thinking:
“It’s a problem if other changes get deployed while I’m in the middle of verification. I’d like to carefully verify things in a stable environment if possible…”
👔 Engineering Manager (managing the development team)
Role: Responsible for managing engineers’ task progress and motivation, allocating resources, and the release schedule.
What they’re thinking:
“I want engineers to develop comfortably, but resources are limited. I can’t afford to let progress stall…”
📋 Project Manager (PM)
Role: Manages the overall progress of the development team, coordinates with stakeholders, and owns release-schedule coordination and decisions.
What they’re thinking:
“It’s hard to see who has finished what and what’s needed next… It’s difficult to make release decisions…”
🧑‍🔧 Infrastructure / SRE (Site Reliability Engineer)
Role: Designs, builds, and maintains the infrastructure needed for development and operations. A behind-the-scenes pro who supports stable operation of staging and production environments.
What they’re thinking:
“I understand the desire to add more environments. But adding them costs money and effort, and with the current setup it’s hard to scale…”
As you can see, everyone has valid reasons, and in reality it’s not easy to just say, “Okay, let’s prepare one more environment!”
What is the “true face” of this problem?
On the surface it looks like “we don’t have enough environments,” but it feels like there are deeper, more fundamental issues lurking underneath.
For example, you could rephrase it like this:
“The development team lacks a system that lets them run the checks they need, when they need them.”
And behind that, there seem to be structural bottlenecks such as:
- Processes that can’t keep up with parallel development
- Vague rules for operating environments
- Mechanisms that haven’t caught up with the increased number of people and features
Do you hear “similar lines” in your own workplace?
- “I can’t verify because someone else is using staging.”
- “I just want to verify my own changes, but other features are deployed and things are broken.”
- “Work stops because of arguments over the deployment order to the environment.”
- “Something got deployed while I was checking the nightly batch…”
Each one seems like a “minor inconvenience,” but as they pile up, development speed definitely drops.
The essence isn’t the “environment” itself
The number of test environments is just the tip of the iceberg.
Underneath are important but less visible elements like:
- How the development organization is designed
- Flexibility of the verification process
- Scalability of environments
- Coordination costs between teams
Causes – thinking in terms of hypotheses
“Why did we end up in this situation?”
Based on the teams and workplaces we’ve seen so far, here are some “common backgrounds” we’ve felt, framed as hypotheses.
Hypothesis 1: The staging environment has become something “special”
The staging environment is treated as “the last bastion close to production,” and deployments require requests or approvals, so it’s handled too cautiously.
As a result, you end up in a situation where “you can’t use it freely whenever you want” and “you have to wait your turn to verify.”
Hypothesis 2: There’s no technical mechanism to create environments dynamically
Even if you want to prepare multiple environments, they’re configured manually, so reproducing them is difficult or time-consuming.
This is a common pattern when CI/CD (Continuous Integration / Continuous Delivery) isn’t in place or hasn’t been templatized.
Hypothesis 3: Infrastructure costs and resource estimates were too optimistic
The project started with the thought, “One environment should be enough for now,” but the scale of development and the number of developers grew beyond expectations.
This is especially common in startups or early-stage projects, where future scalability often isn’t fully considered.
Hypothesis 4: Responsibility for environment management is ambiguous
It’s unclear who is allowed to use the environment and under what rules it’s managed.
As a result, stress builds up from things like “I want to use it but don’t know if I’m allowed to touch it” or “my changes get overwritten without warning,” which further increases inefficiency.
Solutions: approaches commonly used in the field
Here we’ll introduce “approaches that have actually been effective in other workplaces.”
None of them are magic bullets that solve everything at once, but we’ve felt that small improvements can add up and significantly reduce environment-related stress.
Solution 1: Introduce an on-demand environment mechanism
Example: Build it by combining GitHub Actions × Terraform × Vercel/Netlify, etc.
✅ Makes parallel development easier
✅ Reduces competition for the staging environment
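As a concrete sketch of what this can look like, here is a minimal GitHub Actions workflow that deploys every pull request as its own preview environment via the Vercel CLI. The workflow name, secret name, and build steps are illustrative assumptions, not a prescription for any particular repository.

```yaml
# Sketch: one disposable preview environment per pull request.
# Secret and project names (VERCEL_TOKEN, etc.) are placeholders.
name: pr-preview
on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # Vercel builds the branch and returns an isolated preview URL,
      # so each feature can be verified without touching staging
      - run: npx vercel deploy --yes --token "$VERCEL_TOKEN"
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
```

The key property is that the environment’s lifetime is tied to the pull request: it appears when review starts and can be discarded when the PR merges, so nobody has to queue for it.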
Solution 2: Turn infrastructure into IaC (Infrastructure as Code)
✔️ Makes it easy to duplicate environments
✔️ Makes it easier to build small test environments
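The point of IaC is that “one more environment” becomes a one-line change rather than a manual build. A hedged Terraform sketch, with a module path and variables invented purely for illustration:

```hcl
# Sketch: one reusable module call per test environment.
# The module path and variable names are illustrative assumptions.
module "test_env" {
  source   = "./modules/app-environment"
  for_each = toset(var.environments)   # e.g. ["qa", "feature-x", "feature-y"]

  name          = each.key
  instance_size = "small"              # keep throwaway environments cheap
}

variable "environments" {
  type    = list(string)
  default = ["staging"]
}
```

Adding an environment is now a matter of appending a name to the list; tearing one down is removing it and applying again.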
Solution 3: Clarify environment usage rules and schedules
Even if you can’t physically add more environments, clarifying “who uses it and when” can sometimes reduce conflicts.
Using Google Calendar or spreadsheets can be effective as a temporary measure.
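Whatever tool you use, a schedule boils down to one rule: two reservations must not overlap. A minimal sketch of that conflict check (the data shape and names are made up for illustration):

```javascript
// Sketch: detect overlapping staging reservations.
// A reservation is { user, start, end }, with times as plain numbers.
function findConflicts(reservations) {
  const sorted = [...reservations].sort((a, b) => a.start - b.start);
  const conflicts = [];
  for (let i = 1; i < sorted.length; i++) {
    // A slot conflicts if it starts before the previous one ends
    if (sorted[i].start < sorted[i - 1].end) {
      conflicts.push([sorted[i - 1].user, sorted[i].user]);
    }
  }
  return conflicts;
}

const schedule = [
  { user: 'qa-team', start: 10, end: 14 },
  { user: 'dev-a',   start: 13, end: 16 },  // overlaps the QA slot
  { user: 'dev-b',   start: 16, end: 18 },
];
console.log(findConflicts(schedule)); // [['qa-team', 'dev-a']]
```

Even if the “implementation” stays a spreadsheet, agreeing on this rule explicitly is what removes the arguments.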
Solution 4: Expand what can be verified locally or in mock environments
Reduce environment dependency as much as possible and improve design and tooling so developers can complete work locally, thereby lowering the frequency of staging usage.
Example: Mock APIs with MSW (Mock Service Worker), etc.
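The idea MSW implements, intercepting requests and returning canned responses, can be sketched without any dependency as a plain stub. The handler table and endpoint below are invented for illustration; MSW itself does this at the network layer so your real fetch code runs unchanged.

```javascript
// Sketch: the core idea behind request mocking (what MSW automates),
// shown as a dependency-free stub so it runs anywhere.
const handlers = {
  'GET /api/users': () => ({ status: 200, body: [{ id: 1, name: 'alice' }] }),
};

async function mockFetch(method, path) {
  const handler = handlers[`${method} ${path}`];
  if (!handler) return { status: 404, body: null };  // unhandled request
  return handler();
}

// Frontend code can now be verified locally, with no staging deploy
mockFetch('GET', '/api/users').then((res) => {
  console.log(res.status, res.body[0].name); // 200 alice
});
```

Each endpoint you cover this way is one less reason to occupy the shared staging environment.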
Start here: four small improvements you can begin right away
“Fully automating environments all at once” sounds ideal, but in practice the hardest question is often “where do we start?”
From what we’ve tried so far, here are some improvements that felt “good as a first step.”
Consider an operation where you prepare a “temporary environment per Pull Request”
At first, you don’t need a full mechanism; even a simple operation where you allocate the staging environment in time slots is fine.
Once you have some breathing room, introduce an on-demand environment mechanism.
Start keeping “environment usage logs”
Use Slack, Notion, or a spreadsheet — anything is fine — to visualize “who is using it, when, and for what.”
Unexpected overwrites and conflicts will decrease significantly.
Agree in advance on “what gets deployed to staging”
Organize the granularity of features and clarify the units of verification, such as “these features can be tested together without issues.”
Gradually expand what can be checked with mocks or locally
Use API mocks, UI component testing with Storybook, data simulation with Faker libraries, etc., to gradually expand the scope of work that doesn’t require the test environment. These efforts add up over time.
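The useful property of Faker-style data simulation is determinism: seeded generation means every run produces the same data, so tests stay reproducible. A tiny sketch of that idea without the library (the generator and field names are illustrative assumptions):

```javascript
// Sketch: deterministic test-data generation (the idea behind Faker
// libraries), using a small seeded PRNG so every run is identical.
function makeRng(seed) {
  let state = seed;
  return () => {
    state = (state * 1664525 + 1013904223) % 2 ** 32;  // LCG step
    return state / 2 ** 32;
  };
}

function fakeUsers(count, seed = 42) {
  const rng = makeRng(seed);
  const names = ['alice', 'bob', 'carol', 'dave'];
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: names[Math.floor(rng() * names.length)],
    active: rng() > 0.5,
  }));
}

console.log(fakeUsers(3)); // same three users on every run (fixed seed)
```

With a fixed seed, a failing test reproduces exactly, which is precisely what a shared, mutable staging database cannot guarantee.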
In closing
Problems with test environments tend to be put off, but when you look back, they’re often “the most painful part.”
Developers, QA, managers, infrastructure engineers ──
For everyone, simply having an environment where you can “build and try things with peace of mind” can dramatically change the development experience.
We ourselves are still in a constant cycle of trial and error.
But if some of the things we’ve struggled with and found effective can serve as hints for another team somewhere, we’d be glad.
By the way… about the support we provide
In fact, the kinds of improvements and mechanisms introduced in this article ──
- Automatic construction of on-demand environments
- Improving environment reproducibility through IaC
- Setting up and supporting CI/CD operations
- Systematizing mocks and data simulation
- Designing and helping establish an environment strategy that fits the team
We actually work on these kinds of support together with client companies.
“If we’re struggling now but don’t know where to start”
“We want to find an approach that fits our team’s culture and size”
In times like that, feel free to reach out to us.
We’d be happy to help you from the stage of thinking it through together.
Related Articles
- Building a Mock Server for Frontend Development: A Practical Guide Using @graphql-tools/mock and Faker (2024/12/30)
- Streamlining API Mocking and Testing with Mock Service Worker (MSW) (2023/09/25)
- Frontend Test Automation Strategy: Optimizing Unit, E2E, and API Tests with Jest, Playwright, and MSW (2024/01/21)
- Complete Guide to Web Accessibility: From Automated Testing with Lighthouse / axe and Defining WCAG Criteria to Keyboard Operation and Screen Reader Support (2023/11/21)
- Robust Authorization Design for GraphQL and REST APIs: Best Practices for RBAC, ABAC, and OAuth 2.0 (2024/05/13)