Postman repositioned around a platform solution this year — and the sales motion that came with it required real, running infrastructure. We didn’t have a demo engineering team, and no one gave us an environment. SEs were standing up local Docker containers and the shared account was a mess. I took it on: a centrally managed, cloud-hosted environment with real Kubernetes clusters, live API traffic, and every platform feature wired up and demo-ready — built solo, on a timeline that would normally require a team.
The problem
Postman’s SE team had a fragmented demo landscape. There was no single owned environment — instead, a patchwork of individually maintained setups, personal workspaces bleeding into shared demo accounts, and services with names like “XYZ test” and “Liz - personal workspace” appearing in the API Catalog during customer calls.
Postman is an enterprise governance and API management platform. Walking into a deal and demoing governance over a messy, un-governed account was a fundamental credibility problem. We were arguing the value of our product while accidentally proving we didn’t use it ourselves.
There was also a velocity issue. New SEs spent weeks building their own demo services — time that should have been spent selling. And crucially, our demo accounts weren’t always in sync with what was actually shipping in production.
The solution needed to be centrally managed, production-realistic, and maintainable by a small team — while making every Postman feature genuinely demonstrable against real infrastructure generating real signals.
The build
The platform I designed — Brightbox — is a retail industry demo environment built on AWS, running real Kubernetes workloads on EKS, with production traffic flowing through AWS API Gateway and Postman Insights surfacing live endpoint health across every service.
This isn’t a mock. It’s not a simulated “lite” mode. Every service runs in a real cluster with real latency, real error rates, and real CI pipeline results. When you open the Postman API Catalog and see a p90 latency spike — it’s actual infrastructure responding to actual traffic. That authenticity is what makes the demo credible.
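For reference, the p90 figure Insights surfaces is just the 90th percentile of observed request latencies. A minimal illustrative sketch of the metric (nearest-rank method — this is how percentile latency is commonly defined, not Insights' actual implementation):

```python
import math

def p90(latencies_ms):
    """Return the 90th-percentile latency from a list of samples (nearest-rank)."""
    if not latencies_ms:
        raise ValueError("no samples")
    ranked = sorted(latencies_ms)
    # Nearest-rank: ceil(0.9 * n) gives the 1-based rank of the p90 sample.
    rank = math.ceil(0.9 * len(ranked))
    return ranked[rank - 1]

# 100 samples from 1..100 ms: the p90 sample is the 90th-ranked one.
spike_check = p90(list(range(1, 101)))
```

Because the clusters serve real traffic, this number moves when the infrastructure does — which is exactly why a p90 spike in the Catalog is demonstrable rather than staged.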
The retail vertical is the first of five planned industry accounts — FinTech, Healthcare, Insurance, Retail, and General SaaS — each mapped to a distinct cloud provider and set of compliance narratives relevant to enterprise buyers in that space.
Retail service inventory — 11 production services
| Service | Team | Port | Auth | Data | Exposure |
|---|---|---|---|---|---|
| Product Catalog API | Storefront | 3001 | API key | none | Public (API GW) |
| Pricing & Promotions API | Storefront | 3002 | API key | none | Public (API GW) |
| Cart API | Checkout | 3003 | OAuth2 | pii | Public (API GW) |
| Checkout API | Checkout | 3004 | OAuth2 | pci | Public + Stripe |
| Orders API | Orders | 3005 | OAuth2 | pii | Public (API GW) |
| Order Events Contract | Orders | — | Broker | pii | Spec-only artifact |
| Inventory API | Inventory | 3006 | mTLS | none | Internal only |
| Fulfillment Orchestrator API | Fulfillment | 3007 | mTLS | pii | Internal only |
| 3PL Connector API | 3PL | 3008 | OAuth2 | pii | Internal only |
| Shipping Webhooks Contract | 3PL | — | API key sig | pii | Spec-only artifact |
| Tracking & Exceptions API | Platform | 3009 | mTLS | pii | Internal only |
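The inventory above is what makes the build repeatable: one structured record per service, consumable by templates, prompts, and governance rules. A hypothetical machine-readable sketch (field names are mine, not the actual Brightbox manifest):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Service:
    name: str
    team: str
    port: Optional[int]  # None for spec-only artifacts
    auth: str
    data: str            # "pii", "pci", or "none"
    exposure: str

SERVICES = [
    Service("Product Catalog API", "Storefront", 3001, "API key", "none", "public"),
    Service("Pricing & Promotions API", "Storefront", 3002, "API key", "none", "public"),
    Service("Cart API", "Checkout", 3003, "OAuth2", "pii", "public"),
    Service("Checkout API", "Checkout", 3004, "OAuth2", "pci", "public"),
    Service("Orders API", "Orders", 3005, "OAuth2", "pii", "public"),
    Service("Order Events Contract", "Orders", None, "Broker", "pii", "spec-only"),
    Service("Inventory API", "Inventory", 3006, "mTLS", "none", "internal"),
    Service("Fulfillment Orchestrator API", "Fulfillment", 3007, "mTLS", "pii", "internal"),
    Service("3PL Connector API", "3PL", 3008, "OAuth2", "pii", "internal"),
    Service("Shipping Webhooks Contract", "3PL", None, "API key sig", "pii", "spec-only"),
    Service("Tracking & Exceptions API", "Platform", 3009, "mTLS", "pii", "internal"),
]

# Example query: everything touching PII or PCI data, for compliance-tier demos.
sensitive = [s.name for s in SERVICES if s.data in ("pii", "pci")]
```

Queries like `sensitive` are the kind of slice a governance demo is built on — "show me every service handling regulated data, and who owns it."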
Postman implementation
Beyond the infrastructure, I configured the full Postman Enterprise feature stack across the demo account — the same feature set we sell into enterprise accounts. The goal: when an SE opens Brightbox, every capability is live, populated with realistic data, and ready to demonstrate.
Governance groups, system environments (dev/staging/prod), service ownership tagging, and multi-source discovery are all configured — giving the bird's-eye enterprise view the platform promises.
Insights Agent deployed via Istio sidecar injection. Real p90 latency, error rates, and endpoint health surfaced in Postman — not mocked. Health check traffic filtered to keep signals meaningful.
Kubernetes CRD connects deployment events to Git commit metadata in the Catalog. When latency spikes, you can trace it to the exact commit that caused it — a compelling enterprise story.
All 11 services spec'd in OpenAPI 3.0. Two AsyncAPI event contracts for order and shipping webhooks. Git-connected workspaces sync specs from the repo on push.
`postman spec lint` with `--fail-severity WARN` blocks non-conforming specs from reaching production. `collection run` sends regression results to the cloud. `workspace push` keeps everything in sync.
Desktop app connected via Native Git. When a governance violation surfaces in the Catalog, Agent Mode summarizes the violation and proposes the fix — demoing AI-assisted API governance in real time.
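One detail worth expanding from the list above: filtering health-check traffic out of Insights matters because liveness and readiness probes dominate raw request volume and would bury customer-shaped signals. The logic itself is simple — a hypothetical sketch (the path patterns are illustrative, and the real filter is configured in the Insights Agent rather than hand-rolled like this):

```python
import re

# Probe and infrastructure paths that are noise, not customer traffic.
HEALTH_PATTERNS = [
    re.compile(r"^/healthz?$"),
    re.compile(r"^/livez$"),
    re.compile(r"^/readyz$"),
    re.compile(r"^/metrics$"),
]

def is_real_traffic(path: str) -> bool:
    """True if a request path should count toward latency/error metrics."""
    return not any(p.match(path) for p in HEALTH_PATTERNS)

requests = ["/healthz", "/api/cart/items", "/readyz", "/api/orders/42"]
meaningful = [p for p in requests if is_real_traffic(p)]
```

Without this kind of exclusion, a service receiving a probe every few seconds would report near-perfect latency and error rates regardless of how its real endpoints behave.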
AI-accelerated development
This entire environment was built solo. That’s only feasible at this level of quality because of how deliberately I used AI tools to compress the development lifecycle — not to cut corners, but to move through design, scaffolding, and iteration at a pace that would otherwise require a full team.
“The trick wasn’t just using AI — it was knowing when to prompt AI to produce prompts, rather than prompting it directly for outputs. That meta-layer compressed 11 services’ worth of work into a systematic, repeatable process and enabled me to roll out these services within 3 days from concept to execution.”
— Liz Fedak, on the prompt generator approach
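The "prompts that produce prompts" pattern is easy to sketch. This is a hypothetical reconstruction of the idea, not the actual generator: a template takes one service's metadata and renders a fully specified scaffolding prompt, so all 11 services get identical structure and nothing is improvised per service:

```python
PROMPT_TEMPLATE = """\
You are scaffolding a production-grade demo microservice.

Service: {name}
Owning team: {team}
Auth scheme: {auth}
Data classification: {data}

Produce:
1. An OpenAPI 3.0 spec with realistic retail endpoints.
2. A Dockerfile and Kubernetes Deployment manifest (port {port}).
3. Seed data consistent with the data classification.
"""

def scaffolding_prompt(service: dict) -> str:
    """Render one service's metadata into a complete scaffolding prompt."""
    return PROMPT_TEMPLATE.format(**service)

prompt = scaffolding_prompt({
    "name": "Cart API", "team": "Checkout",
    "auth": "OAuth2", "data": "pii", "port": 3003,
})
```

The payoff is consistency: eleven services generated from one template drift far less than eleven services prompted ad hoc.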
Outcomes & impact
SEs open Brightbox and see a clean, governed API Catalog with real services, real owners, and real runtime data — not personal workspaces and duplicate 'accounts-service' APIs. The platform models what we're selling, in the product we're selling it in.
New SEs no longer have to spend weeks building their own demo environments or rely on tools maintained by other teams that don’t always work. They onboard to Brightbox — learning one well-designed retail environment first, then branching into other verticals as they move upmarket. Weeks of setup becomes days of learning.
Demo storylines include intentional failure modes — spec lint failures blocking releases, async contract drift causing error spikes, latency regressions traced to deployment commits. Every Postman feature is showcased solving a real problem, not a contrived one.
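The "latency regression traced to a deployment commit" storyline reduces to a timestamp join: find the most recent deploy that preceded the spike. A minimal illustrative sketch (the event shapes and commit hashes are assumptions, not the actual Catalog data model):

```python
from datetime import datetime
from typing import Optional

# Hypothetical deployment events: (timestamp, commit SHA).
deploys = [
    (datetime(2025, 1, 10, 9, 0), "a1f9c2e"),
    (datetime(2025, 1, 10, 14, 30), "7d30b4f"),
    (datetime(2025, 1, 11, 8, 15), "e52c881"),
]

def suspect_commit(spike_at: datetime) -> Optional[str]:
    """Return the commit of the latest deploy preceding the spike, if any."""
    prior = [(ts, sha) for ts, sha in deploys if ts <= spike_at]
    return max(prior)[1] if prior else None

# A spike observed the evening of Jan 10 points at the 14:30 deploy.
culprit = suspect_commit(datetime(2025, 1, 10, 19, 0))
```

It's a one-liner of logic, but surfacing it inside the Catalog — spike, deploy, commit, owner, all in one view — is what makes the enterprise story land.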
The infrastructure proposal, RACI model, and implementation playbook are designed for a small team to maintain across five industry verticals. What I built for retail is the template: the same Helm charts, CI pipeline snippets, and governance rulesets replicate to FinTech, Healthcare, Insurance, and SaaS.
What’s next
Retail vertical → full org: Run the SE org through Brightbox next week — demo every Postman feature against live infrastructure, establish shared demo standards, and stand up the first cohort of vertical experts.
EKS + Stripe + Lambda: Apply the same AI-accelerated approach to the FinTech account — EKS + API Gateway, Stripe PaymentIntents lifecycle, fraud event contracts in AsyncAPI, PCI governance groups in the Catalog.
AKS + APIM + FHIR facade: Build the Healthcare account on AKS + Azure API Management with FHIR-shaped Patient APIs — the vertical that resonates with healthcare buyers who need to see PHI governance and audit trail capabilities demonstrated against realistic services.
Flows + Jira automation: Wire up deployed Flows for incident intake — webhook-triggered ticket creation and runbook posting when contract drift or latency spikes are detected. This closes the loop on the 'governance at scale' narrative.
The bigger picture
The Brightbox project is, at its core, a product marketing initiative delivered through an engineering lens. The insight that drove it — that demoing governance in an ungoverned environment actively undermines positioning — is a product marketing insight. The solution required infrastructure, but the problem was narrative.
Enterprise software is bought through stories. Buyers need to see themselves in the demo. A clean Postman API Catalog with real ownership tags, real compliance tiers, and real CI enforcement tells a story that a hand-wavy screen share of mock data simply cannot. The environment is the message.
What I built is also a proof of what’s possible when AI is used strategically — not to replace judgment, but to compress the distance between an idea and a working system. The prompt generator, the scaffolding automation, the AI pair-programming through deployment: these are patterns that every technical team should be building into their workflows.