We moved our entire Postman collection to Bruno over a weekend and let Claude Code chew on the new files by Monday morning. By lunchtime, API docs were writing themselves.
Run /bruno-api path/to/request.bru and get ready-to-ship docs, TypeScript types, and React Query hooks.
If you're exploring AI-first tooling and want to streamline your API workflow, this guide walks through our practical migration step by step. If you're already using Bruno, you'll learn how AI can improve your existing workflow further.
"The shift from Postman to Bruno in 2025 is really a shift in how we think about API documentation. It's no longer a separate artifact that gets out of sync. It's part of your codebase, reviewed like code, and enhanced by AI."
At Diversio, Postman had been our all-purpose API toolkit since the company's first endpoint shipped in 2018. Every engineer owned a collection or three; shared environments lived in the cloud; QA and PMs could fire requests without touching the codebase. Environment variables kept requests DRY across local and cloud setups.
As our API surface grew and we started working on more and more features at once (thanks to agentic coding), the workflow that once felt effortless began eating hours and hurting our productivity.
Postman still works, but it was slowing us down. Any change meant updating both the code and a JSON export no one liked opening. That lag became the bottleneck.
Bruno's Git-friendly plain-text format, and its ability to embed full Markdown docs, looked like a way out. The best part? AI agents can read .bru files like normal code, so automation suddenly became trivial. Our APIs are now part of the codebase and show up in diffs during code review.
// Postman
var json = JSON.parse(responseBody);
var token = json["access_token"];
pm.environment.set("auth_token", token);
// Bruno
var json = res.getBody();
var token = json.access_token;
bru.setEnvVar("auth_token", token);
// Postman
const payload = JSON.parse(atob(token.split('.')[1]));
// Bruno (Buffer.from works in Bruno's Node.js environment)
const payload = JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString());
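One subtlety worth knowing when porting these atob() calls: JWTs use base64url encoding with the padding stripped. Here's the same payload-decoding trick sketched in Python with the padding restored explicitly (an illustration of the concept, not part of our migration scripts):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    JWTs use base64url without padding, so we restore the stripped
    padding before decoding -- the same subtlety applies when porting
    atob() calls to Buffer.from() in Node.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token whose payload is {"sub": "123"}
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(b'{"sub": "123"}').decode().rstrip("=")
token = f"{header}.{body}."
print(decode_jwt_payload(token))  # {'sub': '123'}
```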
vars {
base_url: http://localhost:8000
api_key:
}
Bruno lets every request double as a mini README:
docs {
# User Authentication
`POST` `/api/v2/auth/login`
## Overview
Returns JWT tokens.
⚡ **Rate Limit**: 5/min per IP
...
}
Rich tables, code fences, emojis, and more can all be included. Because it's Markdown, Claude Code and other AI tools parse it effortlessly.
After migration, we organized our Bruno files by feature rather than by API version. Here's our structure:
bruno/
├── .env # Local secrets (git-ignored)
├── .env.example # Template for team members
├── .gitignore # Ensures .env stays local
├── environments/
│ ├── local.bru
│ ├── staging.bru
│ └── production.bru
├── auth/
│ ├── login.bru
│ ├── refresh_token.bru
│ └── logout.bru
├── users/
│ ├── get_profile.bru
│ ├── update_profile.bru
│ └── list_users.bru
├── analytics/
│ ├── dashboard_metrics.bru
│ └── export_reports.bru
└── integrations/
├── stripe/
│ └── create_payment.bru
└── webhooks/
└── incoming_webhooks.bru
Security tip: Always add .env to your .gitignore. Create a .env.example with dummy values so team members know what environment variables to set:
# .env.example
API_BASE_URL=http://localhost:8000
API_KEY=your-api-key-here
JWT_TOKEN=will-be-set-by-login-script
DEFAULT_COMPANY_ID=1234
TEST_USERNAME=testuser
TEST_PASSWORD=testpass
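To keep .env and .env.example from drifting apart, a quick comparison of their keys helps. Here's a minimal sketch; a proper dotenv library handles quoting and edge cases far more robustly:

```python
def parse_env(text: str) -> set[str]:
    """Collect variable names from the contents of a dotenv-style file."""
    keys = set()
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and comments; keep only KEY=VALUE lines
        if line and not line.startswith("#") and "=" in line:
            keys.add(line.split("=", 1)[0].strip())
    return keys

example = "API_BASE_URL=http://localhost:8000\nAPI_KEY=your-api-key-here\n"
local = "API_BASE_URL=http://localhost:8000\n"

# Any key present in the template but missing locally is flagged
missing = parse_env(example) - parse_env(local)
print(sorted(missing))  # ['API_KEY']
```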
Each .bru file can include documentation, pre/post scripts, and environment variable references, and the feature-based layout keeps related requests easy to find, review, and extend.
To convert our Postman environment exports to Bruno format, we wrote a small Python script, available in this GitHub gist: migrate_postman_envs.py
Command
$ uv run migrate_postman_envs.py ./postman_environments/ ./bruno_environments/
Output
🔄 Processing 10 file(s)...
✅ Converted webhook_env.json → ./bruno_environments/webhook_env.bru
✅ Converted local_env.json → ./bruno_environments/local_env.bru
✅ Converted production.json → ./bruno_environments/production.bru
✅ Converted staging.json → ./bruno_environments/staging.bru
✅ Converted development.json → ./bruno_environments/development.bru
✅ Converted test_env.json → ./bruno_environments/test_env.bru
✅ Converted qa_env.json → ./bruno_environments/qa_env.bru
✅ Converted sandbox.json → ./bruno_environments/sandbox.bru
✅ Converted integration.json → ./bruno_environments/integration.bru
✅ Converted demo_env.json → ./bruno_environments/demo_env.bru
✨ Done! Converted 10/10 file(s)
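Under the hood the conversion is straightforward: a Postman environment export is JSON with a values array, and Bruno wants a vars block. Here's a simplified sketch of the transformation (the real script in the gist also handles secrets, disabled variables, and file I/O):

```python
import json

def postman_env_to_bru(postman_json: str) -> str:
    """Convert a Postman environment export (JSON string) to a
    Bruno environment file body. Simplified sketch only."""
    env = json.loads(postman_json)
    lines = ["vars {"]
    for var in env.get("values", []):
        # Postman marks toggled-off variables with enabled: false
        if var.get("enabled", True):
            lines.append(f"  {var['key']}: {var.get('value', '')}")
    lines.append("}")
    return "\n".join(lines)

example = json.dumps({
    "name": "local",
    "values": [
        {"key": "base_url", "value": "http://localhost:8000", "enabled": True},
        {"key": "debug_flag", "value": "1", "enabled": False},
    ],
})
print(postman_env_to_bru(example))
```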
To verify the converted files, we wrote a companion validation script: validate_bruno_files.py
Command
$ uv run validate_bruno_files.py ./bruno_environments/
Output
🔍 Validating 10 Bruno file(s)...
✅ demo_env.bru
✅ development.bru
✅ integration.bru
✅ local_env.bru
✅ production.bru
✅ qa_env.bru
✅ sandbox.bru
✅ staging.bru
✅ test_env.bru
✅ webhook_env.bru
📊 Summary: 10/10 file(s) valid
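The validation itself can be lightweight. A sketch of the kind of checks involved (the real checks live in validate_bruno_files.py; this only illustrates the idea):

```python
import re

def validate_bru_env(text: str) -> list[str]:
    """Return a list of problems found in a Bruno environment file body.
    Minimal sketch: check for a vars block and balanced braces."""
    errors = []
    if not re.search(r"^vars \{", text, re.MULTILINE):
        errors.append("missing 'vars {' block")
    if text.count("{") != text.count("}"):
        errors.append("unbalanced braces")
    return errors

good = "vars {\n  base_url: http://localhost:8000\n}\n"
bad = "vars {\n  base_url: http://localhost:8000\n"  # closing brace missing

print(validate_bru_env(good))  # []
print(validate_bru_env(bad))   # ['unbalanced braces']
```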
For the actual requests, here's a simple bash script:
#!/bin/bash
# convert_postman_scripts.sh
# Run on a copy first and review the diff before committing.
# Common replacements
sed -i 's/JSON\.parse(responseBody)/res.getBody()/g' *.bru
sed -i 's/pm\.environment\.set(/bru.setEnvVar(/g' *.bru
# atob() needs more than a rename (Buffer.from also wants a 'base64'
# argument and a .toString() call), so flag those lines for manual review:
grep -Hn 'atob(' *.bru
# Fix dictionary access patterns
sed -i 's/jsonData\["\([^"]*\)"\]/jsonData.\1/g' *.bru
Pro tip: Run this on a copy first. Some replacements might need manual review, especially if you have complex string patterns.
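If sed feels too blunt, the same mechanical replacements can be applied from Python, where it's easier to skip the risky cases entirely. This is an illustrative sketch, not our exact tooling:

```python
import re
from pathlib import Path

# Safe, mechanical rewrites from Postman's script API to Bruno's.
REPLACEMENTS = [
    (re.compile(r"JSON\.parse\(responseBody\)"), "res.getBody()"),
    (re.compile(r"pm\.environment\.set\("), "bru.setEnvVar("),
]

def convert(text: str) -> str:
    for pattern, repl in REPLACEMENTS:
        text = pattern.sub(repl, text)
    return text

def flag_atob(text: str) -> list[int]:
    """atob() needs a hand rewrite to Buffer.from(..., 'base64').toString(),
    so report the line numbers instead of editing blindly."""
    return [i for i, line in enumerate(text.splitlines(), 1) if "atob(" in line]

def convert_tree(root: str) -> None:
    """Rewrite every .bru file under root in place (run on a copy first!)."""
    for path in Path(root).rglob("*.bru"):
        path.write_text(convert(path.read_text()))

print(convert("pm.environment.set('t', JSON.parse(responseBody).token);"))
# → bru.setEnvVar('t', res.getBody().token);
```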
Add defensive checks when migrating:
// Old Postman way (often broke with null responses)
var id = JSON.parse(responseBody)["data"]["id"];
// Better Bruno pattern
var response = res.getBody();
if (response && response.data && response.data.id) {
bru.setEnvVar("resource_id", response.data.id);
} else {
console.error("Unexpected response structure:", response);
}
If you have Postman tests:
// Postman test
pm.test("Status code is 200", function () {
pm.response.to.have.status(200);
});
// Bruno test
test("Status code is 200", function () {
expect(res.getStatus()).to.equal(200);
});
Here's where things get exciting. I created a custom Claude slash command that analyzes our Bruno files and generates comprehensive documentation by inspecting our Django codebase.
Instead of maintaining documentation scripts, we taught Claude to understand our codebase. Here's how the actual command works:
# Custom Claude Command: /bruno-api
When the user types `/bruno-api [bruno-file-path]`, you will:
1. **Parse the Bruno File**
- Extract the HTTP method, endpoint URL, headers, and body structure
- Identify authentication requirements (Bearer token, API key, etc.)
- Note any pre/post-request scripts for context
2. **Reverse Engineer the Backend**
- Use the endpoint URL to find the Django URL pattern:
path('api/v2/users/', UserViewSet.as_view())
re_path(r'^api/v1/reports/(?P<pk>\d+)/$', ReportDetailView.as_view())
- Locate the corresponding view/viewset class
- For Django Ninja endpoints, find the router and operation functions
3. **Deep Code Analysis**
- Extract serializer fields, types, validation rules
- Identify permission classes and authentication requirements
- Trace through the view method to understand:
- Query parameters and filtering
- Data transformations
- External service calls
- Error conditions
4. **Generate Comprehensive Documentation**
Including:
- Full API endpoint documentation
- TypeScript interfaces for request/response
- React Query hooks with error handling
- Authentication requirements
- Business logic notes (caching, rate limits, etc.)
- Common error scenarios
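Step 1 is mechanical enough to sketch: the method block and URL sit in predictable places in a .bru file. Something like this, assuming the common block layout (Claude reads the file directly, so this is only illustrative):

```python
import re

def parse_bru_request(text: str) -> dict:
    """Rough sketch of step 1: pull the HTTP method and URL out of a
    .bru file body. Assumes the common `get { url: ... }` block layout."""
    match = re.search(
        r"^(get|post|put|patch|delete) \{\s*url:\s*(\S+)",
        text, re.MULTILINE | re.IGNORECASE,
    )
    if not match:
        raise ValueError("no HTTP method block found")
    return {"method": match.group(1).upper(), "url": match.group(2)}

sample = """\
meta {
  name: user_metrics
}

get {
  url: {{base_url}}/api/v2/analytics/user-metrics/
}
"""
print(parse_bru_request(sample))
```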
The sophistication comes from how Claude connects all the pieces:
# Claude's Analysis Process:
1. Bruno file says: GET /api/v2/analytics/inclusion-scores/
2. Find in urls.py: path('api/v2/analytics/inclusion-scores/', InclusionScoresView.as_view())
3. Find InclusionScoresView class
4. Analyze the get() method:
- What serializer? InclusionScoresSerializer
- What permissions? IsAuthenticated + HasAnalyticsAccess
- What does it do? Aggregates survey data with demographic breakdowns
5. Check serializer fields and validation
6. Find related models and business logic
7. Generate complete, accurate documentation
The magic: this is more than a brittle script parsing an AST. Claude Code understands our code semantically, follows imports, comprehends business logic, and can inspect multiple aspects of an API.
Input: Simple Bruno file from Postman migration - see this basic file (just endpoint + auth)
Command: /bruno-api bruno/analytics/user_metrics.bru
Output: Claude analyzes the Django codebase and generates comprehensive documentation. Here's a small sample:
// TypeScript Interfaces (auto-generated from Django serializers)
interface UserMetricsResponse {
count: number;
next: string | null;
results: Array<{
user_id: string;
email: string;
last_active: string;
total_sessions: number;
sessions_this_month: number;
avg_session_duration: string;
status: 'active' | 'inactive';
role: string;
department: string | null;
}>;
}
// React Query Hook (with error handling derived from Django views)
export const useUserMetrics = (companyId: string, params?: UserMetricsParams) => {
return useQuery({
queryKey: ['user-metrics', companyId, params],
queryFn: async () => {
const response = await apiClient.get(
`/api/v2/companies/${companyId}/user-metrics/`,
{ params }
);
return response.data;
},
enabled: !!companyId,
retry: (failureCount, error: any) => {
// Smart retry logic based on Django view error handling
if (error?.response?.status === 401 || error?.response?.status === 403) {
return false;
}
return failureCount < 3;
}
});
};
This is just a fraction of the output. See the complete generated documentation for the rest.
The key insight: Claude reads the actual implementation, so the documentation stays accurate. It can still make mistakes, though, so it's critical to review the files and nudge it in the right direction.
This approach works with any framework.
When someone changes an API, reviewers can see it in the PR:
git diff api/users/create.bru
+ body:json {
+ {
+ "email": "",
+ "role": "",
+ "department": "" // New field added
+ }
+ }
Breaking changes are caught before deployment, not after.
New and existing team members can simply ask questions about the API: Claude reads your Bruno collections and provides accurate answers, and it can update the collections just as easily.
Since docs live with code, they're more likely to stay updated. We're working on a pre-commit hook that will remind developers to update Bruno files whenever an API-related change is made.
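A sketch of what such a hook might look like. The file-suffix heuristics below are ours to pick; adjust them to your own layout:

```python
import subprocess

# Files whose changes usually imply an API change -- tune to your codebase
API_SUFFIXES = ("urls.py", "views.py", "serializers.py")

def staged_files() -> list[str]:
    """List the files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def check(files: list[str]) -> int:
    """Return 1 (failing the hook) when API code changed
    but no .bru file was touched."""
    api_changed = any(f.endswith(API_SUFFIXES) for f in files)
    bru_changed = any(f.endswith(".bru") for f in files)
    if api_changed and not bru_changed:
        print("Reminder: API code changed but no .bru files were updated.")
        return 1
    return 0

# In the actual hook script: import sys; sys.exit(check(staged_files()))
```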
We did our entire migration over a weekend, and you can too. Here's what worked for us:
Set up a /bruno-api template early. The real buy-in happens when developers see the generated docs, types, and hooks for themselves.
Add .bru files to your repo and update your PR templates.
"Can QA still use it?" → Yes, Bruno has a UI too (and it's free).
"What if we need to go back?" → Keep Postman exports for 30 days, but we never looked back.
Here's a complete example you can adapt:
# .claude/commands/bruno-api.md
You are an API documentation expert for our [Framework] application.
When user types /bruno-api [file-path]:
1. Read the Bruno file at the specified path
2. Extract: method, URL, headers, body structure
3. Find the implementation:
- For Express: Find app.get/post/put in routes/
- For Django: Find path() in urls.py, then view
- For Rails: Find route in config/routes.rb
4. Analyze the handler/controller to determine:
- Required parameters and validation
- Authentication/authorization
- Response structure
- Error cases
5. Generate documentation including:
- Clear description of what the endpoint does
- Request/response examples with real data
- [Your frontend framework] integration code
- Common errors and how to handle them
Use our conventions:
- TypeScript for all interfaces
- Include data validation rules
- Show rate limits if applicable
- Note any side effects (emails, webhooks, etc.)
Implementation tip: Start with one endpoint type (e.g., CRUD operations) and expand from there.
The shift from Postman to Bruno in 2025 is really a shift in how we think about API documentation. It's no longer a separate artifact that gets out of sync. It's part of your codebase, reviewed like code, and enhanced by AI.