Vulnerability assessment workflow¶
This guide walks you through a complete vulnerability assessment workflow using yorishiro-proxy, from initial reconnaissance to result analysis. The workflow is designed for AI-agent-driven testing, where the agent controls the proxy through MCP tools.
Workflow overview¶
A typical vulnerability assessment follows five phases:
- Reconnaissance -- capture traffic from the target application
- Macro design -- build reusable multi-step workflows for stateful operations
- Payload selection -- choose non-destructive payloads for the vulnerability type
- Testing -- execute single-shot tests with resend, then scale with fuzz
- Analysis -- review results, compare responses, and report findings
Phase 1: Reconnaissance¶
Start the proxy with a focused scope¶
Limit capture to the target application to reduce noise:
// proxy_start
{
"listen_addr": "127.0.0.1:8080",
"capture_scope": {
"includes": [{"hostname": "target.example.com"}],
"excludes": [
{"hostname": "static.example.com"},
{"url_prefix": "/assets/"}
]
},
"tls_passthrough": ["*.googleapis.com", "*.gstatic.com"]
}
Tip
If the target is behind a WAF (e.g., Cloudflare), set "tls_fingerprint": "chrome" to avoid JA3/JA4-based bot detection. This is the default.
Configure safety boundaries¶
Before testing, set up target scope rules and diagnostic budgets to prevent unintended impact:
// security
{
"action": "set_target_scope",
"params": {
"allows": [
{"hostname": "target.example.com", "ports": [443], "schemes": ["https"]}
],
"denies": [
{"hostname": "admin.target.example.com"}
]
}
}
Set rate limits and a diagnostic budget:
// security
{
"action": "set_rate_limits",
"params": {
"max_requests_per_second": 10,
"max_requests_per_host_per_second": 5
}
}
// security
{
"action": "set_budget",
"params": {
"max_total_requests": 1000,
"max_duration": "30m"
}
}
Configure SafetyFilter¶
For production assessments, enable SafetyFilter in your config file to block destructive payloads before they reach the target:
{
"safety_filter": {
"enabled": true,
"input": {
"action": "block",
"rules": [
{"preset": "destructive-sql"},
{"preset": "destructive-os-command"}
]
},
"output": {
"action": "mask",
"rules": [
{"preset": "credit-card"},
{"preset": "email"}
]
}
}
}
The input filter blocks destructive SQL statements (DROP TABLE, TRUNCATE, etc.) and OS commands (rm -rf, shutdown, etc.). The output filter masks PII in responses returned to the AI agent while preserving raw data in the flow store.
Capture traffic¶
Route your browser or HTTP client through the proxy and interact with the target application. Then review captured flows:
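A flow listing can be retrieved through the query tool. The flows resource name and hostname filter below are assumptions, mirroring the fuzz_results query shape shown later in this guide:

// query
{
  "resource": "flows",
  "filter": {"hostname": "target.example.com"},
  "limit": 50
}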
Inspect a specific flow for request/response details:
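One plausible shape, assuming a singular flow resource that accepts a flow_id (not confirmed elsewhere in this guide):

// query
{
  "resource": "flow",
  "flow_id": "<flow-id>"
}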
Detect technology stack¶
Check what technologies the target is using:
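A sketch of the call, assuming a tech_stack resource on the query tool (the resource name is a guess; check the tool reference for the exact name):

// query
{
  "resource": "tech_stack",
  "filter": {"hostname": "target.example.com"}
}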
This returns detected web servers, frameworks, languages, and CDN/WAF information per host.
Phase 2: Macro design¶
If the target requires authentication, CSRF tokens, or involves non-idempotent operations, define macros to automate the setup and teardown for each test iteration.
Define a pre-send macro¶
This macro logs in and extracts a session cookie before each test request:
// macro
{
"action": "define_macro",
"params": {
"name": "auth-setup",
"description": "Login and extract session cookie",
"steps": [
{
"id": "login",
"flow_id": "<login-flow-id>",
"override_body": "username=testuser&password=testpass",
"extract": [
{
"name": "session_cookie",
"from": "response",
"source": "header",
"header_name": "Set-Cookie",
"regex": "SESSION=([^;]+)",
"group": 1,
"required": true
}
]
},
{
"id": "get-csrf",
"flow_id": "<csrf-page-flow-id>",
"override_headers": {"Cookie": "SESSION={{session_cookie}}"},
"extract": [
{
"name": "csrf_token",
"from": "response",
"source": "body",
"regex": "name=\"csrf\" value=\"([^\"]+)\"",
"group": 1
}
]
}
]
}
}
Define a post-receive macro¶
Clean up after each test (e.g., logout to avoid session limits):
// macro
{
"action": "define_macro",
"params": {
"name": "teardown",
"description": "Logout after test",
"steps": [
{
"id": "logout",
"flow_id": "<logout-flow-id>",
"override_headers": {"Cookie": "SESSION={{session_cookie}}"}
}
]
}
}
Test the macro¶
Run the macro standalone to verify it works:
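Assuming the macro tool exposes a run_macro action (a hypothetical counterpart to define_macro), a standalone run might look like:

// macro
{
  "action": "run_macro",
  "params": {
    "name": "auth-setup"
  }
}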
Check the kv_store in the response to confirm that session_cookie and csrf_token were extracted correctly.
Phase 3: Payload selection¶
Choose payloads based on the vulnerability type you are testing. Always use non-destructive payloads that do not modify target data.
Payload guidelines by vulnerability type¶
| Vulnerability | Strategy | Safe payloads |
|---|---|---|
| IDOR | Swap user IDs and check access | Integer ranges (1 to 20) |
| SQLi (time-based) | Inject SLEEP and measure duration_ms | ' OR SLEEP(3)--, 1; WAITFOR DELAY '0:0:3'-- |
| SQLi (error-based) | Trigger SQL syntax errors | ', '', 1' AND 'a'='b |
| XSS (reflected) | Inject marker tags and check escaping | <KTP_TAG>test</KTP_TAG>, <img src=x onerror=KTP_XSS> |
| CSRF | Remove or invalidate tokens | Empty string, invalid-token-value |
| Auth bypass | Remove or replace auth headers | Empty Authorization, Bearer invalid |
Warning
Never use destructive payloads like DROP TABLE, DELETE FROM, rm -rf, or unconditional OR 1=1 on write endpoints (POST/PUT/PATCH/DELETE). Use time-based or error-based techniques for safe detection on these methods.
Phase 4: Testing¶
Step 1: Single-shot test with resend¶
Start with a single request to verify your test setup:
// resend
{
"action": "resend",
"params": {
"flow_id": "<target-flow-id>",
"override_headers": {
"Authorization": "Bearer <other-user-token>"
},
"tag": "idor-single-test"
}
}
Use dry-run to preview changes before sending:
// resend
{
"action": "resend",
"params": {
"flow_id": "<target-flow-id>",
"override_headers": {"Authorization": "Bearer <other-user-token>"},
"dry_run": true
}
}
Step 2: Compare responses¶
Compare the original and modified requests structurally:
// resend
{
"action": "compare",
"params": {
"flow_id_a": "<original-flow-id>",
"flow_id_b": "<modified-flow-id>"
}
}
This shows differences in status codes, headers, body length, timing, and JSON key-level changes.
Step 3: Scale with fuzzing¶
Once the single-shot test works, run a fuzz campaign for comprehensive coverage:
// fuzz
{
"action": "fuzz",
"params": {
"flow_id": "<target-flow-id>",
"attack_type": "sequential",
"positions": [
{
"id": "pos-0",
"location": "body_json",
"json_path": "$.user_id",
"payload_set": "user-ids"
}
],
"payload_sets": {
"user-ids": {"type": "range", "start": 1, "end": 20}
},
"concurrency": 1,
"rate_limit_rps": 5,
"tag": "idor-fuzz"
}
}
For stateful operations, attach hooks:
// fuzz
{
"action": "fuzz",
"params": {
"flow_id": "<target-flow-id>",
"attack_type": "sequential",
"positions": [
{
"id": "pos-0",
"location": "body_json",
"json_path": "$.id",
"payload_set": "ids"
}
],
"payload_sets": {
"ids": {"type": "range", "start": 1, "end": 50}
},
"hooks": {
"pre_send": {"macro": "auth-setup", "run_interval": "always"},
"post_receive": {"macro": "teardown", "run_interval": "always"}
},
"concurrency": 1,
"tag": "stateful-fuzz"
}
}
Phase 5: Analysis¶
Review fuzz results¶
Get the results with aggregate statistics:
// query
{
"resource": "fuzz_results",
"fuzz_id": "<fuzz-id>",
"sort_by": "status_code",
"limit": 100
}
The response includes a summary with:
- statistics -- status code distribution, body length and timing distributions (min, max, median, stddev)
- outliers -- result IDs that deviate from the baseline by status code, body length, or timing
Filter for outliers¶
Quickly find anomalous responses:
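The outliers_only flag below is an assumption suggested by the outliers field in the summary; the actual filter name may differ:

// query
{
  "resource": "fuzz_results",
  "fuzz_id": "<fuzz-id>",
  "outliers_only": true
}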
Filter by specific criteria¶
// query
{
"resource": "fuzz_results",
"fuzz_id": "<fuzz-id>",
"filter": {"status_code": 200, "body_contains": "admin"}
}
Inspect individual results¶
Drill into a specific fuzz result by its flow ID:
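Assuming the same hypothetical flow resource used for flow inspection, each fuzz result can be retrieved by the flow ID reported in the results list:

// query
{
  "resource": "flow",
  "flow_id": "<fuzz-result-flow-id>"
}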
Vulnerability determination¶
| Vulnerability | Indicator |
|---|---|
| IDOR | 200 response with another user's data |
| SQLi (time-based) | duration_ms increases by ~3000ms for SLEEP payloads |
| SQLi (error-based) | 500 status or SQL error messages in response |
| XSS (reflected) | Unescaped payload tags in response body |
| CSRF | Request succeeds (200/302) without a valid token |
| Auth bypass | Request succeeds without valid credentials |
Cleanup¶
After testing, delete macros and clean up flows:
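The delete_macro and delete_flows actions below are assumed counterparts to define_macro and export_flows; verify the exact action names in the tool reference:

// macro
{
  "action": "delete_macro",
  "params": {"name": "auth-setup"}
}
// manage
{
  "action": "delete_flows",
  "params": {"filter": {"hostname": "target.example.com"}}
}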
Export results¶
Save your findings before cleanup:
// manage
{
"action": "export_flows",
"params": {
"format": "har",
"filter": {"url_pattern": "/api/"},
"output_path": "/tmp/assessment-results.har"
}
}
AI agent interaction pattern¶
When working with an AI agent like Claude Code, you can describe your testing goals in natural language:
- "Test this API endpoint for IDOR vulnerabilities by swapping user IDs 1 through 20"
- "Check if the login endpoint is vulnerable to SQL injection using time-based blind techniques"
- "Verify that CSRF protection is working on all POST endpoints"
- "Fuzz the search parameter for XSS with reflected payload markers"
The agent will use the appropriate MCP tools to execute each phase of the workflow.
Related pages¶
- API testing -- focused guide on REST API testing
- Target scope -- configuring security boundaries
- SafetyFilter -- blocking destructive payloads
- Fuzzer -- fuzzer feature reference
- Macros -- macro feature reference