See what a real Argentic run looks like on a public codebase
Explore a real Argentic sample run against Azure-AI-RAG-CSharp-Semantic-Kernel-Functions. Inspect the coordinator index, tracker report, steward markdown reviews, and findings.json manifests from the exported run.
Public GitHub sample repository assessed with exported Argentic artifacts. Real Argentic run. Full artifact set mirrored into the site. No synthetic findings.
This page exposes the copied coordinator index, tracker report, all steward markdown reviews, and all findings manifests from the sampled run. Nothing in the artifact browser is synthetic.
31 markdown artifacts and 31 JSON artifacts are available here. The page keeps the summary readable while still exposing every copied output from the sampled run.
This is the exact artifact package a customer would review: coordinator summary, tracker baseline, steward findings, and raw manifests.
Tracker summary for this sample
This view reflects the exported tracker baseline bundled with the sample, recorded on 2026-03-22. Because this repository snapshot is the first recorded run, the tracker history currently contains a single baseline point rather than a multi-run trend line.
| Run | Date | Score | Critical | Notable | Minor | Info | Total |
|---|---|---|---|---|---|---|---|
| Baseline (Current) | 2026-03-22 | 0 | 69 | 148 | 91 | 55 | 363 |
Browse every markdown artifact without leaving the page
The reader below now includes the coordinator index, tracker report, and every steward markdown review from the sample run.
Read real Argentic reports in one place
The reader below presents the real sample artifacts in a cleaner reading surface, with direct links to the original markdown whenever you want it.
Select a report from this collection.
Stewards Reviews Index.md
Full markdown for this report, rendered here with the structure preserved and a cleaner reading surface.
Every steward represented in the sample run, with direct links to both the markdown review and its matching findings.json artifact.
| Stack | Steward | Critical | Notable | Minor | Info | Total | Markdown | JSON |
|---|---|---|---|---|---|---|---|---|
| C# Backend | REST API | 3 | 7 | 4 | 2 | 16 | Open review | Open JSON |
| C# Backend | Interface Design | 0 | 6 | 3 | 2 | 11 | Open review | Open JSON |
| C# Backend | C# Unit Test | 1 | 3 | 2 | 1 | 7 | Open review | Open JSON |
| C# Backend | CosmosDB | 2 | 4 | 3 | 2 | 11 | Open review | Open JSON |
| C# Backend | API Observability | 3 | 5 | 3 | 3 | 14 | Open review | Open JSON |
| C# Backend | API Telemetry | 3 | 5 | 3 | 2 | 13 | Open review | Open JSON |
| C# Backend | API Config | 1 | 7 | 3 | 1 | 12 | Open review | Open JSON |
| C# Backend | API Resilience | 3 | 4 | 2 | 2 | 11 | Open review | Open JSON |
| C# Backend | .NET Best Practices | 2 | 5 | 6 | 2 | 15 | Open review | Open JSON |
| Cross-cutting | Security | 5 | 7 | 3 | 2 | 17 | Open review | Open JSON |
| React Frontend | React UX | 4 | 7 | 5 | 2 | 18 | Open review | Open JSON |
| React Frontend | React API Client | 3 | 6 | 2 | 2 | 13 | Open review | Open JSON |
| React Frontend | React UX Observability | 3 | 3 | 2 | 2 | 10 | Open review | Open JSON |
| React Frontend | React Telemetry | 3 | 2 | 2 | 1 | 8 | Open review | Open JSON |
| React Frontend | React DI | 0 | 3 | 2 | 2 | 7 | Open review | Open JSON |
| React Frontend | React Config | 0 | 3 | 3 | 3 | 9 | Open review | Open JSON |
| React Frontend | React SP Practices | 2 | 4 | 3 | 2 | 11 | Open review | Open JSON |
| React Frontend | React Auth | 2 | 3 | 1 | 2 | 8 | Open review | Open JSON |
| React Frontend | React UX Components | 1 | 6 | 5 | 2 | 14 | Open review | Open JSON |
| Infrastructure | Bicep Module | 2 | 9 | 6 | 2 | 19 | Open review | Open JSON |
| Infrastructure | Infra Security | 4 | 9 | 3 | 3 | 19 | Open review | Open JSON |
| Infrastructure | Infra Networking | 4 | 6 | 3 | 2 | 15 | Open review | Open JSON |
| Infrastructure | Infra Deployment | 3 | 4 | 3 | 2 | 12 | Open review | Open JSON |
| Infrastructure | Bicep Testing | 3 | 3 | 3 | 1 | 10 | Open review | Open JSON |
| Python Backend | Python Best Practices | 3 | 6 | 5 | 2 | 16 | Open review | Open JSON |
| Python Backend | Python Config | 0 | 4 | 4 | 2 | 10 | Open review | Open JSON |
| Python Backend | Python Observability | 1 | 5 | 3 | 2 | 11 | Open review | Open JSON |
| Python Backend | Python Resilience | 4 | 4 | 1 | 2 | 11 | Open review | Open JSON |
| Python Backend | Python Test | 4 | 3 | 1 | 2 | 10 | Open review | Open JSON |
Representative findings across backend, frontend, security, Python, and infrastructure
Each finding below is pulled from a real steward artifact in the broader sample run. IDs, severity labels, file paths, and recommendations are preserved exactly.
SessionController.GetSession() uses [HttpGet] to generate and return a new session ID. Although the handler itself does not write to a data store, the endpoint's semantic purpose is resource creation - it mints a new identity that clients then use to write data. Using GET for resource creation violates HTTP semantics (GET must be safe and idempotent) and may cause repeat invocations via browser prefetch, proxies, or caching.
Change to POST /sessions. Return 201 Created with a Location header pointing to the new session resource.
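The recommended shape can be sketched framework-agnostically. This is a minimal TypeScript illustration of the POST-plus-201 semantics (the sample's endpoint is C# ASP.NET Core; the `HttpResponse` type and `createSession` name here are hypothetical, not from the artifact):

```typescript
import { randomUUID } from "node:crypto";

// Illustrative response shape; not tied to any specific framework.
interface HttpResponse {
  status: number;
  headers: Record<string, string>;
  body: unknown;
}

// Resource creation goes through a non-safe method (POST), returns
// 201 Created, and advertises the new resource URI in Location.
function createSession(newId: () => string = randomUUID): HttpResponse {
  const sessionId = newId();
  return {
    status: 201, // 201 Created, not a 200 from a prefetch-prone GET
    headers: { Location: `/sessions/${sessionId}` },
    body: { sessionId },
  };
}
```

Because GET must be safe and idempotent per HTTP semantics, moving the mint operation to POST also stops browser prefetchers and caching proxies from silently creating sessions.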
ChatService.GetResponseAsync returns JsonSerializer.Serialize(new { resp }), which is already a JSON string. ChatController.Post declares return type Task<string>, so ASP.NET Core serializes this string again as a JSON string literal. The HTTP response body is a JSON-encoded string not a JSON object. Every client must double-parse the response, which is an undocumented, non-standard contract.
Return the response object directly from ChatService (not pre-serialized). Change ChatController.Post return type to Task<IActionResult> and return Ok(new { response = result }).
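The double-parse burden on clients is easy to reproduce. In this sketch, `JSON.stringify` stands in for both serialization passes (the service's explicit one and the framework's automatic one); it is an illustration of the effect, not the actual ASP.NET Core pipeline:

```typescript
const resp = { response: "Hello from the model" };

// Pass 1: what a pre-serializing service returns (already a JSON string)
const preSerialized: string = JSON.stringify(resp);

// Pass 2: the framework serializes the string return value again,
// putting a JSON *string literal* on the wire instead of an object
const wireBody: string = JSON.stringify(preSerialized);

// Every client must now parse twice to reach the object
const firstParse = JSON.parse(wireBody);    // still a string, not an object
const secondParse = JSON.parse(firstParse); // finally the object
```

Returning the object and letting the framework serialize exactly once removes the undocumented double-parse contract.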
The fetch to /chat has no .catch() handler. If the request fails (network error, 4xx, 5xx, CORS failure), the promise rejects silently. The user's message appears in the chat but no reply ever arrives, with zero explanation.
Add a .catch() handler that appends a visible error message bubble such as 'Sorry, something went wrong. Please try again.' and optionally surfaces a Retry button.
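A minimal sketch of that handler, assuming hypothetical `sendChat` and `appendMessage` names (the sample's actual component code is not reproduced here; the fetch function is injected only so the failure path is easy to exercise):

```typescript
type ChatMessage = { author: "user" | "agent" | "system"; text: string };

async function sendChat(
  text: string,
  appendMessage: (m: ChatMessage) => void,
  fetchFn: typeof fetch = fetch,
): Promise<void> {
  appendMessage({ author: "user", text });
  try {
    const res = await fetchFn("/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: text }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`); // surface 4xx/5xx too
    const data = await res.json();
    appendMessage({ author: "agent", text: data.response });
  } catch {
    // Without this branch the rejection is silent and no reply ever appears.
    appendMessage({
      author: "system",
      text: "Sorry, something went wrong. Please try again.",
    });
  }
}
```

Note that `fetch` only rejects on network-level failures; the explicit `res.ok` check is what turns 4xx/5xx responses into visible errors as well.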
New AI responses are appended to the message list via React state, but there is no aria-live region. Screen reader users will not be alerted when a new message arrives. Additionally, messages carry no role, author label, or accessible distinction between user and AI messages.
Wrap the message list in a container with role='log' and aria-live='polite' and aria-label='Chat messages'. Add visually hidden author labels ('You:' / 'Agent:') to each bubble.
ChatHistory is registered as a singleton, meaning a single mutable conversation history object is shared across every HTTP request and every user session. Messages from one user's session accumulate alongside messages from all other users, causing data leakage and incorrect AI responses.
Remove the singleton registration for ChatHistory. Instantiate ChatHistory per request inside ChatService.GetResponseAsync, loading prior messages for the given session rather than injecting one shared service.
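The lifetime bug can be modeled outside of .NET DI. This TypeScript sketch (class and function names are illustrative; the real registration is C#) contrasts one shared mutable history against a per-request instance hydrated from the caller's own session:

```typescript
class ChatHistory {
  messages: string[] = [];
  add(message: string): void {
    this.messages.push(message);
  }
}

// Singleton lifetime: one mutable history shared by every request (the bug)
const sharedHistory = new ChatHistory();
function handleChatSingleton(user: string, message: string): string[] {
  sharedHistory.add(`${user}: ${message}`);
  return sharedHistory.messages; // accumulates every user's messages together
}

// Per-request lifetime: a fresh history loaded from the caller's session (the fix)
function handleChatPerRequest(
  priorMessages: string[],
  user: string,
  message: string,
): string[] {
  const history = new ChatHistory();
  for (const m of priorMessages) history.add(m);
  history.add(`${user}: ${message}`);
  return history.messages;
}
```

In the singleton version, a second user's request returns the first user's messages too, which is exactly the cross-session leakage the finding describes.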
ChatLayout.js calls parse(obj.message) where obj.message is the raw HTML string returned by the AI model. html-react-parser does not sanitize HTML. The system prompt instructs the model to return HTML. A prompt injection attack could cause the model to return malicious HTML that is executed in the user's browser.
Pass the HTML through DOMPurify.sanitize() before calling html-react-parser. Alternatively, adopt a markdown rendering approach and instruct the model to return Markdown instead of raw HTML.
Both except blocks in Loader log the error but do not re-raise it. Azure Functions treats the invocation as successful, so failures are masked and any retry policy would never activate.
Add raise after logging in both except handlers so Azure Functions can propagate the failure and activate retry behavior when appropriate.
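The log-and-rethrow pattern, sketched here in TypeScript for consistency with the other examples on this page (the sample's Loader is Python, where the equivalent is a bare `raise` after logging; function names are illustrative):

```typescript
// Log-and-rethrow: the host runtime still observes the failure, so retry
// policies and failure metrics keep working instead of being masked.
function loadDocuments(load: () => void, log: (msg: string) => void): void {
  try {
    load();
  } catch (err) {
    log(`document load failed: ${err}`);
    throw err; // re-raise; swallowing here would mark the invocation successful
  }
}
```

The key property is that logging and propagation are not alternatives: the handler records the error and still lets the host see it.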
The CosmosDb_ConnectionString app setting is explicitly set to an empty string and the Cosmos DB endpoint is never forwarded to the API App Service. At runtime, any code that reads the connection string will receive an empty value and fail to connect.
Wire the database endpoint into the API app settings and use managed identity authentication. Remove the empty CosmosDb_ConnectionString app setting entirely.
The storage account sets publicNetworkAccess: 'Enabled' with no IP restrictions or VNet rules. Any internet client can reach the storage endpoint, exposing blob, queue, and table services.
Set publicNetworkAccess: 'Disabled' on the storage account and use private endpoints or VNet service endpoints for compute access.
No virtual network is deployed, so all compute-to-data connectivity traverses Azure's public network. There is no integration subnet for App Service outbound traffic and no private endpoint subnet for data services.
Add a VNet module with at least an integration subnet for App Service and Function App outbound routing and a private endpoint subnet for data service private endpoints.
Baseline context behind the exported tracker
The tracker above is a first-run baseline. This section stays grounded in that exported run and shows where the current concentration sits before any remediation or follow-up cycle begins.
The copied tracker artifacts show a true first-run baseline. Every current finding is classified as Baseline, and future reruns would add New, Fixed, Changed, and Unchanged classifications against this archive.
These are the stewards with the highest finding concentrations in the baseline run, with each share computed against the 363 total findings.
| Steward | Findings | Share |
|---|---|---|
| Bicep Module | 19 | 5.2% |
| Infra Security | 19 | 5.2% |
| React UX | 18 | 5.0% |
| Security | 17 | 4.7% |
| REST API | 16 | 4.4% |
Argentic can roll findings up into readable operating signals, so buyers can see concentration, progress, and coverage without digging through raw data first.
Open every exported artifact directly
The in-page reader is the fastest way to browse. The grouped library below exposes the exact copied markdown and JSON artifacts behind the sample.
Coordinator & Tracker
The coordinator index, tracker report, and tracker JSON exports that frame the sample run.
Cross-steward index for the sample run, grouped by stack with direct review links.
Tracker baseline report showing per-steward counts, baseline classifications, and health-score logic.
Canonical per-steward totals exported by the tracker for the sampled run.
Tracker trend archive for this sample. The current export contains the first baseline run.
Steward Reviews
All 29 exported markdown reviews from the sample run.
Full markdown review exported by the REST API steward.
Full markdown review exported by the Interface Design steward.
Full markdown review exported by the C# Unit Test steward.
Full markdown review exported by the CosmosDB steward.
Full markdown review exported by the API Observability steward.
Full markdown review exported by the API Telemetry steward.
Full markdown review exported by the API Config steward.
Full markdown review exported by the API Resilience steward.
Full markdown review exported by the .NET Best Practices steward.
Full markdown review exported by the Security steward.
Full markdown review exported by the React UX steward.
Full markdown review exported by the React API Client steward.
Full markdown review exported by the React UX Observability steward.
Full markdown review exported by the React Telemetry steward.
Full markdown review exported by the React DI steward.
Full markdown review exported by the React Config steward.
Full markdown review exported by the React SP Practices steward.
Full markdown review exported by the React Auth steward.
Full markdown review exported by the React UX Components steward.
Full markdown review exported by the Bicep Module steward.
Full markdown review exported by the Infra Security steward.
Full markdown review exported by the Infra Networking steward.
Full markdown review exported by the Infra Deployment steward.
Full markdown review exported by the Bicep Testing steward.
Full markdown review exported by the Python Best Practices steward.
Full markdown review exported by the Python Config steward.
Full markdown review exported by the Python Observability steward.
Full markdown review exported by the Python Resilience steward.
Full markdown review exported by the Python Test steward.
Structured Findings Manifests
All 29 findings.json artifacts backing the sample, each with stable IDs and machine-readable evidence.
Structured findings manifest for REST API, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Interface Design, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for C# Unit Test, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for CosmosDB, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for API Observability, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for API Telemetry, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for API Config, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for API Resilience, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for .NET Best Practices, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Security, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React UX, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React API Client, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React UX Observability, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React Telemetry, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React DI, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React Config, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React SP Practices, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React Auth, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for React UX Components, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Bicep Module, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Infra Security, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Infra Networking, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Infra Deployment, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Bicep Testing, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Python Best Practices, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Python Config, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Python Observability, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Python Resilience, including stable IDs, severity, file paths, and recommendations.
Structured findings manifest for Python Test, including stable IDs, severity, file paths, and recommendations.
Cover the coordinator index, tracker report, and all 29 human-readable steward reviews.
Back every steward with stable IDs, file references, severity labels, and machine-readable recommendations.
Preserve the baseline archive, run summary, and trend-ready structure for future comparisons.
This is the kind of structured evidence the Assessment is designed to deliver
Use the sample as a proof point for how Argentic reads a real codebase, packages findings, and creates a baseline that can later evolve into recurring assurance and company-grounded governance.