User Guide

How to Use XBRL DeltaView

This guide is for filing QA teams and engineering contributors who need deterministic validation workflows, fast triage, and reliable compare behavior.

Overview

What DeltaView does and how the core objects in the interface relate.

XBRL DeltaView is a developer-first quality assurance workflow for validating XBRL and iXBRL filings. You upload an artifact, receive a durable job ID, and review normalized issues that are easier to triage than raw validator logs.

The application is designed for repeated QA loops where teams need deterministic behavior: the same data should sort and group the same way every time, and terminal job outcomes should be explicit.

  • Job: one asynchronous validation run tied to one uploaded artifact.
  • Issue: one normalized validation item with severity, humanized explanation, and extracted references.
  • Taxonomy insight: a separate local heuristic signal for suspicious extension patterns.
  • Compare result: run-to-run delta grouping for issues and document blocks.

Security baseline

Uploaded filings are treated as untrusted input. Validation executes in a worker-isolated engine context rather than on the host.

Quick Start

A practical five-minute path from upload to compare.

  1. Step 1: Go to Upload

    Open `/upload`, drag and drop or browse for `.xml`, `.html`, or `.zip` artifacts up to 50MB.

  2. Step 2: Create Job

    Submit the file to create a validation job. You will be redirected to `/jobs/{job_id}` immediately.

  3. Step 3: Monitor Status

    Watch the lifecycle status (`queued`, `running`, `succeeded`, `failed`); the page polls until the job reaches a terminal state.

  4. Step 4: Diagnose

    On success, review issue counters, apply severity and text filters, inspect details, then open document and taxonomy sections.

  5. Step 5: Compare

    Use `/compare` to diff this run against another completed run and inspect new/resolved/persisting/changed outcomes.

Fast loop tip

Use export output after each run so external QA trackers can stay aligned with on-screen issue states.
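
The same loop can be scripted against the endpoints listed in the Technical Appendix. The sketch below is a minimal happy-path example, assuming a fetch-capable runtime and an illustrative `BASE_URL`; the helper name and error handling are not part of the product.

```typescript
// Minimal happy-path sketch: upload an artifact, wait for a terminal state,
// then fetch the normalized issues. BASE_URL is an assumed deployment URL.
const BASE_URL = "http://localhost:8000";

async function quickStart(file: File): Promise<string> {
  // Step 2: create the job from a multipart upload.
  const form = new FormData();
  form.append("file", file);
  const created = await fetch(`${BASE_URL}/jobs`, { method: "POST", body: form });
  const { job_id } = await created.json();

  // Step 3: poll until the job reaches a terminal state.
  let status = "queued";
  while (status !== "succeeded" && status !== "failed") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    status = (await (await fetch(`${BASE_URL}/jobs/${job_id}`)).json()).status;
  }

  // Step 4: on success, pull the normalized issues for triage.
  if (status === "succeeded") {
    const issues = await (await fetch(`${BASE_URL}/jobs/${job_id}/issues`)).json();
    console.log(`Job ${job_id} issues:`, issues);
  } else {
    console.error(`Job ${job_id} failed; check error_summary and raw output.`);
  }
  return job_id;
}
```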

Upload Workflow

Input requirements, validation behavior, and upload UX details.

Upload accepts `.xml`, `.html`, and `.zip`. Validation checks happen before processing so unsupported artifacts fail fast with explicit messaging.

Upload progress is displayed in-page. The UI preserves the selected file context and reports actionable errors if upload fails.

  • Maximum file size: 50MB.
  • Drag-and-drop and keyboard-triggered file selection are both supported.
  • On success, the app navigates directly to the new job page.
  • On failure, an inline error message is shown and the user can retry with a corrected artifact.

Pre-submit checks

  • Extension validation uses file name suffix checks for accepted types.
  • Oversize files are rejected client-side before network upload.
  • Errors clear when a new file is selected.
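
A minimal sketch of what these pre-submit checks amount to on the client side. The accepted extensions and the 50MB limit come from this guide; the function name and error strings are illustrative, not the app's actual implementation.

```typescript
// Illustrative pre-submit validation: suffix check plus client-side size limit.
const ACCEPTED_EXTENSIONS = [".xml", ".html", ".zip"];
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50MB limit from this guide

function validateBeforeUpload(file: File): string | null {
  const name = file.name.toLowerCase();
  if (!ACCEPTED_EXTENSIONS.some((ext) => name.endsWith(ext))) {
    return "Unsupported file type: only .xml, .html, and .zip are accepted.";
  }
  if (file.size > MAX_SIZE_BYTES) {
    return "File exceeds the 50MB limit; reduce the package size and retry.";
  }
  return null; // no error: safe to submit
}
```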

After submit

  • Upload progress updates continuously when the browser reports a computable content length.
  • Successful responses include `job_id` and initial status.
  • Failed responses show message text from API error envelope when available.
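
Continuous progress reporting depends on the browser exposing a computable content length during upload. The sketch below shows one way this behavior could be implemented with `XMLHttpRequest`; it is an illustration, with the endpoint path and response fields (`job_id`, `status`) taken from the Technical Appendix.

```typescript
// Illustrative multipart upload with progress reporting.
function uploadArtifact(
  file: File,
  onProgress: (percent: number) => void
): Promise<{ job_id: string; status: string }> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", "/jobs");
    xhr.upload.onprogress = (event) => {
      // Progress is only meaningful when the browser reports a computable length.
      if (event.lengthComputable) onProgress((event.loaded / event.total) * 100);
    };
    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(JSON.parse(xhr.responseText)); // includes job_id and initial status
      } else {
        // Surface message text from the API error envelope when available.
        reject(new Error(xhr.responseText || "Upload failed"));
      }
    };
    xhr.onerror = () => reject(new Error("Upload transport failure"));
    const form = new FormData();
    form.append("file", file);
    xhr.send(form);
  });
}
```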

Job Status Lifecycle

How job states work and what terminal outcomes mean.

Each job moves through deterministic lifecycle states. The page polls until a terminal state or the polling budget is exhausted.

A successful job can still contain errors and warnings in its issue payload. A failed job indicates a system or engine failure during execution.

  • Valid transitions: `queued -> running -> succeeded` or `queued -> running -> failed`.
  • Polling cadence starts at 1s, backs off to 5s, and stops after 5 minutes if a terminal state is not reached (see the sketch after this list).
  • Failed jobs can expose `error_summary` and raw engine output references for diagnosis.
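
One way to implement this cadence is sketched below: start at 1-second intervals, back off toward 5 seconds, and stop after a 5-minute budget. The interval values come from this guide; the linear backoff shape and helper name are illustrative assumptions.

```typescript
// Illustrative polling loop: 1s initial interval, backoff to 5s, 5-minute budget.
type JobStatus = "queued" | "running" | "succeeded" | "failed";

async function pollUntilTerminal(jobId: string): Promise<JobStatus> {
  const budgetMs = 5 * 60 * 1000;
  const started = Date.now();
  let intervalMs = 1000;

  while (Date.now() - started < budgetMs) {
    const res = await fetch(`/jobs/${jobId}`);
    const { status } = (await res.json()) as { status: JobStatus };
    if (status === "succeeded" || status === "failed") return status;

    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    intervalMs = Math.min(intervalMs + 1000, 5000); // back off toward 5s
  }
  throw new Error(`Polling budget exhausted for job ${jobId}`);
}
```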

Interpretation rule

`succeeded` means the engine completed and produced results, not that the filing is clean.

Issue Triage

How to use counters, filters, and issue detail fields effectively.

The issue triage experience is built around fast prioritization. Start with severity counters, then use text search and toggles to narrow to a working set.

Selecting an issue opens detailed metadata intended to reduce time-to-diagnosis: a human-friendly explanation plus the raw message and extracted references.

  • Counters summarize `error`, `warning`, and `info` counts.
  • Search matches title, explanation, raw message, message code, and concept QName.
  • Severity toggles are combinable and can be used with search.
  • Detail panel exposes concept/context/unit/entity plus optional period, dimensions, and location blocks.
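
The filtering behavior described above can be expressed roughly as follows. The searched fields and severity values come from this guide; the `Issue` field names and the function are simplified assumptions, not the app's internal types.

```typescript
// Simplified issue filtering: combinable severity toggles plus text search.
interface Issue {
  severity: "error" | "warning" | "info";
  title: string;
  explanation: string;
  raw_message: string;     // assumed field name for the raw validator message
  message_code: string;
  concept_qname?: string;  // assumed field name for the concept QName
}

function filterIssues(
  issues: Issue[],
  severities: Set<Issue["severity"]>,
  query: string
): Issue[] {
  const q = query.trim().toLowerCase();
  return issues.filter((issue) => {
    if (severities.size > 0 && !severities.has(issue.severity)) return false;
    if (q === "") return true;
    // Search matches title, explanation, raw message, message code, and concept QName.
    const fields = [issue.title, issue.explanation, issue.raw_message, issue.message_code, issue.concept_qname ?? ""];
    return fields.some((field) => field.toLowerCase().includes(q));
  });
}
```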

Recommended triage order

  1. Filter to errors first

    Resolve high-severity blockers before warnings and informational items.

  2. Use message code clusters

    Group repeated failures by `message_code` to avoid duplicate root-cause analysis.

  3. Confirm context fields

    Check concept, period, dimensions, and location hints to map each issue back to filing data.

Document Pane

Renderable behavior, location highlighting, and fallback states.

When the uploaded artifact is renderable as iXBRL HTML, the document pane loads a sandboxed viewer and attempts to highlight selected issue locations.

Location resolution is best-effort. If an issue lacks resolvable location data, the app reports that explicitly without breaking navigation.

  • Renderable artifacts display in iframe viewer with constrained script permissions.
  • Issue selection triggers postMessage highlight commands.
  • If highlight resolution fails, user receives non-blocking guidance.
  • Non-renderable artifacts show deterministic reason text.
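
A rough sketch of the highlight handshake: the host page posts a command to the sandboxed iframe, and an unresolved location is treated as non-blocking guidance rather than an error. The message shape and origin handling here are illustrative assumptions, not the viewer's actual protocol.

```typescript
// Illustrative highlight command to the sandboxed viewer iframe.
// The message schema is internal to the app; this shape is an assumption.
function highlightIssueLocation(viewer: HTMLIFrameElement, locationRef: string): void {
  viewer.contentWindow?.postMessage(
    { type: "highlight", location: locationRef },
    "*" // in practice the viewer origin should be pinned rather than wildcarded
  );
}
```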

Fallback behavior is expected

XML-only or ambiguous-entrypoint artifacts can still be fully triaged through issue data even when rendered document view is unavailable.

Taxonomy Insights

How taxonomy extension heuristics differ from validation issues.

Taxonomy insights are separate from normalized validation issues. They highlight suspicious extension patterns using local deterministic rules.

Treat these as QA signals that guide deeper review of extension strategy, namespace usage, and schema/linkbase references.

  • Insights include severity, code, title, explanation, and structured evidence.
  • Evidence payload is expandable for detailed inspection.
  • The absence of insights does not imply the taxonomy design is optimal; it only means the local rules found no suspicious pattern.
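
For reference, an insight entry can be thought of as having the shape below, mirroring the fields listed above. The exact field names in the API payload may differ, so treat this as an assumption rather than a schema.

```typescript
// Assumed shape of a taxonomy insight, mirroring the fields listed above.
interface TaxonomyInsight {
  severity: "error" | "warning" | "info";
  code: string;                       // identifier of the local heuristic rule
  title: string;
  explanation: string;
  evidence: Record<string, unknown>;  // expandable structured evidence payload
}
```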

Compare Runs

Run setup modes and interpretation of issue/document diff outputs.

Compare supports two target modes: reference an existing target job ID, or upload a new target artifact and let the app wait for it to reach a terminal state automatically.

Results are split into issue diff and document diff tabs so users can correlate semantic validation changes with rendered filing changes.

  • Issue statuses: `new`, `resolved`, `persisting`, `changed`.
  • Document change types: `added`, `removed`, `modified`, `unchanged`.
  • Change filters can be toggled to focus review scope.
  • Side-by-side viewer supports synchronized scroll behavior for renderable pairs.

Setup requirements

  • Both base and target runs must exist.
  • Both runs must be terminal before compare endpoints return success.
  • Upload target mode waits for the target job to reach a terminal state before issuing the compare request.
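
In API terms, the existing-target mode reduces to calling the compare endpoints with both identifiers once both runs are terminal, roughly as below. The query parameter names (`base_job_id`, `target_job_id`) are assumptions for illustration; the endpoints themselves are listed in the Technical Appendix.

```typescript
// Illustrative compare request once both base and target runs are terminal.
// The query parameter names are assumptions; see the Technical Appendix.
async function compareIssues(baseJobId: string, targetJobId: string): Promise<unknown> {
  const params = new URLSearchParams({ base_job_id: baseJobId, target_job_id: targetJobId });
  const res = await fetch(`/jobs/compare/issues?${params}`);
  if (!res.ok) {
    // Non-terminal runs or invalid pairings surface as error responses.
    throw new Error(`Compare failed with status ${res.status}`);
  }
  // Output is grouped into new / resolved / persisting / changed.
  return res.json();
}
```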

Fallback expectations

  • If one side is non-renderable, document diff shows explicit fallback messaging.
  • Issue diff remains available even when document diff is degraded.

Exports and Raw Output

When to use structured export versus raw engine artifact.

Use export for normalized, product-level data intended for downstream automation, QA reports, and comparison with UI outputs.

Use raw engine output when debugging parser behavior, engine failure semantics, or low-level validator messaging not yet humanized.

  • Export endpoint returns job metadata plus normalized issues and taxonomy insights.
  • Raw output is useful when diagnosing `failed` jobs or unexpected normalization behavior.
  • UI action panel on job page provides direct links to both resources.
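
For downstream automation, the export payload can be pulled directly and persisted alongside the job ID, for example as sketched below. The Node-style file write and file naming are illustrative choices, not product behavior.

```typescript
// Illustrative export download for downstream QA automation (Node 18+ assumed).
import { writeFile } from "node:fs/promises";

async function saveExport(baseUrl: string, jobId: string): Promise<void> {
  const res = await fetch(`${baseUrl}/jobs/${jobId}/export`);
  if (!res.ok) throw new Error(`Export failed for job ${jobId}: ${res.status}`);
  const payload = await res.json(); // job metadata + normalized issues + taxonomy insights
  await writeFile(`deltaview-export-${jobId}.json`, JSON.stringify(payload, null, 2));
}
```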

Troubleshooting

Common failure patterns and direct remediation guidance.

Upload validation failures

  • Unsupported file type: only `.xml`, `.html`, `.zip` are accepted.
  • Oversize artifact: reduce package size to 50MB or less.
  • Upload transport failure: retry and check API availability/network conditions.

Job failed summaries

  • `TIMEOUT`: reduce file complexity or increase the configured runtime limit where available.
  • `TAXONOMY_RESOLUTION`: verify taxonomy URLs and network access from worker environment.
  • `ZIP_SLIP`: repackage archive with safe relative paths.
  • `INVALID_ZIP`: regenerate archive to ensure clean extraction.
  • `ENTRYPOINT_NOT_FOUND`: include a valid HTML entrypoint for iXBRL rendering scenarios.
  • `ENTRYPOINT_AMBIGUOUS`: provide one clear entrypoint file or adjust package structure.

Document unavailability reasons

  • `ENTRYPOINT_NOT_FOUND`: no renderable HTML entrypoint was identified.
  • `ENTRYPOINT_AMBIGUOUS`: multiple candidate HTML entrypoints detected.
  • `ENTRYPOINT_OUTSIDE_ROOT`: selected entrypoint resolved outside artifact root.
  • `ENTRYPOINT_MISSING`: persisted entrypoint reference no longer available.
  • `UNSUPPORTED_ARTIFACT`: artifact type is not renderable in viewer.
  • `DOCUMENT_UNAVAILABLE`: generic fallback when render source cannot be served.

Compare failures

  • Missing IDs: provide both base and target identifiers (or upload target file).
  • Non-terminal runs: compare endpoints require both jobs to be `succeeded` or `failed`.
  • Invalid pairings: ensure IDs reference existing jobs and expected compare combinations.

Escalation path

If behavior appears inconsistent with UI contract, capture job ID, export payload, and raw output reference before escalating.

Technical Appendix

Concise endpoint reference and response usage notes for engineering users.

POST `/jobs`

Create a validation job from multipart file upload.

Returns durable `job_id` and initial status (`queued`).

GET `/jobs/{job_id}`

Fetch job status and metadata for lifecycle tracking.

Includes `error_summary` and `raw_output_artifact_ref` for failed jobs.

GET `/jobs/{job_id}/issues`

Retrieve normalized issue collection for triage UI and export parity.

Supports optional severity/search filtering parameters.

GET `/jobs/{job_id}/taxonomy-insights`

Retrieve taxonomy insight signals separate from issue list.

Payload includes evidence objects for each insight.

GET `/jobs/{job_id}/document`

Read document renderability metadata and entrypoint references.

Contains boolean renderability and fallback reason fields.

GET `/jobs/{job_id}/export`

Export job metadata plus normalized issues and taxonomy insights.

Designed as downstream portable artifact.

GET `/jobs/compare/issues`

Compare issue deltas between base and target jobs.

Groups output into `new`, `resolved`, `persisting`, and `changed`.

GET `/jobs/compare/document`

Compare document blocks between base and target jobs.

Includes change types and renderability fallback semantics.
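
As a reading aid, the response fields named above can be summarized as TypeScript shapes. Fields quoted in this guide are used verbatim; anything else is an illustrative assumption and may not match the actual schema.

```typescript
// Assumed response shapes for the endpoints above.
type JobStatus = "queued" | "running" | "succeeded" | "failed";

interface JobResponse {
  job_id: string;
  status: JobStatus;
  error_summary?: string;            // present for failed jobs
  raw_output_artifact_ref?: string;  // reference to raw engine output
}

interface DocumentResponse {
  renderable: boolean;  // assumed name for the boolean renderability flag
  reason?: string;      // e.g. ENTRYPOINT_NOT_FOUND, ENTRYPOINT_AMBIGUOUS, ...
}
```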

FAQ

Operational answers to recurring user questions.

Does `succeeded` mean there are no validation errors?

No. `succeeded` means the engine run completed and produced results. Issues may still include errors and warnings.

Why can I see issues but not a rendered document?

Issue triage works for non-renderable artifacts too. The document pane requires a valid renderable HTML entrypoint and degrades with explicit reason messaging when one is not available.

When should I use compare upload mode instead of target job ID mode?

Use upload mode when you want to validate a new artifact immediately and compare it against an existing baseline in one flow.

How do I share findings externally?

Use export JSON for normalized data; include job ID and raw output link when escalating unexpected engine or normalization behavior.

Can guide analytics break the UI if no analytics sink is configured?

No. Guide analytics use the same no-op-safe abstraction as other app events.