
What Errorcore Captures and Why It Matters

A plain-language explanation of what happens when your code breaks in production. errorcore.dev

Contents

  1. The Problem
  2. What Happens at the Moment of Failure
  3. The IO Timeline
  4. Local Variables at the Throw Site
  5. Request Context and ALS
  6. State Tracking
  7. Process and Environment Metadata
  8. PII Scrubbing
  9. Encryption and Transport
  10. The Error Package
  11. What This Means for You

01. The Problem

An error fires in production. You get a stack trace and maybe a log line. You know the line number. You know the error message. But you don't know why.

What did the database return right before the crash? What did the HTTP request body look like? What was sitting in memory? The stack trace doesn't tell you any of that. So you start guessing. You grep through logs, check monitoring dashboards, and try to reproduce the failure locally. You set up the same inputs, the same config, and hope the conditions are close enough.

Most of the time, they aren't. Production has state that your laptop doesn't. Users send payloads you didn't think of. Databases return edge-case rows. The environment is different in a dozen small ways, and any one of them could be the reason. You're debugging blind.

02. What Happens at the Moment of Failure

When errorcore catches an error, it doesn't just record the stack trace. It assembles a full snapshot of what the application was doing at that exact instant. The network calls in flight, the variables in scope, the HTTP request that triggered the code path, the state the app was holding in memory.

Think of it as a black box recorder for your Node.js process. A plane's black box doesn't just record the final seconds. It records everything that led up to the event. Errorcore works the same way. By the time the error is thrown, errorcore has already been watching the IO, tracking the request, and holding references to the relevant state.

The sections below walk through each piece of captured data in detail.

03. The IO Timeline

What Gets Captured

Every outbound HTTP request your application made. Every database query, with the actual SQL text and bound parameters. Every DNS lookup. All recorded with start time, end time, and duration. Errorcore captures this through Node.js diagnostics_channel and by hooking into database drivers at the connection level. You get a full timeline of "what happened before the crash" without adding a single log statement yourself.

Chronological Order

The timeline is ordered chronologically. If your app called an API, then queried a database, then made another API call, and then crashed, you see all three calls in sequence with their results and timing, followed by the error itself. Slow queries stand out immediately. Failed HTTP calls are obvious. You can trace the exact chain of IO that led to the error.

Why This Matters Most

This is often the most valuable piece of the capture. Most production errors are caused by unexpected IO results, and without the timeline, you'd never see what the app actually received.

Example timeline:

  HTTP GET /api/users -> 200 OK (12ms)
  DB: SELECT * FROM users -> 3 rows (8ms)
  DNS: payments.api.com -> resolved (45ms)
  HTTP POST /payments -> 500 Internal (320ms)
  Error thrown -> PaymentError: upstream failure

04. Local Variables at the Throw Site

V8 Debugger Integration

Errorcore hooks into the V8 debugger protocol to read the values of local variables in the active stack frames when an exception is thrown. If the error says "Cannot read property 'email' of undefined", the local variable capture shows you that user was undefined because the database query returned zero rows. And you can cross-reference that against the IO timeline to see the exact query that ran.

Captured at Throw-Time

The capture happens at throw-time, not after. Variables get reassigned, scopes close, garbage collection runs. If you try to inspect locals after the fact, the values are gone. Errorcore grabs them at the exact moment the exception fires, before any cleanup occurs.

What You See

You get the variable names, their types, and their values, serialized to a safe depth. Objects are captured with their properties. Arrays include their contents. Functions are represented by name. This turns a cryptic error message into a clear picture of what went wrong.
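Serializing to a safe depth might look like the following sketch. The depth limit, the '[max depth]' marker, and the function naming are assumptions for illustration, not errorcore's actual serializer:

```javascript
// Sketch: serializing captured variable values to a bounded depth.
// Objects keep their properties, arrays keep their contents,
// functions are represented by name only.
function serializeValue(value, depth = 0, maxDepth = 3) {
  if (depth >= maxDepth) return '[max depth]';
  if (typeof value === 'function') {
    return `[Function: ${value.name || 'anonymous'}]`;
  }
  if (value === null || typeof value !== 'object') {
    return value; // primitives pass through as-is
  }
  if (Array.isArray(value)) {
    return value.map((v) => serializeValue(v, depth + 1, maxDepth));
  }
  const out = {};
  for (const [key, v] of Object.entries(value)) {
    out[key] = serializeValue(v, depth + 1, maxDepth);
  }
  return out;
}
```

Anything nested past the depth boundary is replaced by a marker, which is what the completeness flags described later would report as a depth hit.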

Stack frame: processUser()

  user: undefined
  query: 'SELECT...'
  retries: 2
  userId: 'usr_8a3f'
  timeout: 5000
  Throw site (line 42): user.email -> TypeError: Cannot read

05. Request Context and ALS

Automatic Request Tracking

Errorcore uses Node.js AsyncLocalStorage to track which incoming HTTP request triggered the error. Every async operation that runs within the lifecycle of a request is automatically associated with that request. No manual correlation IDs needed. No middleware that you have to remember to wire up.

The Full Request Picture

When you look at a captured error, you see the original request: method, URL, headers, query parameters, and body. You also see every IO operation that happened during that request's lifecycle, in chronological order. If the request was a POST /checkout, you see the auth check, the inventory lookup, the payment call, and exactly which one failed.

Async Propagation

ALS propagation means this works across async/await boundaries, through Promises, and into callbacks. Even if your code fans out into parallel operations, errorcore ties them all back to the originating request.

06. State Tracking

Observed State Containers

Errorcore can observe application-level state containers: Maps, plain objects, or any structure you register with it. When an error occurs, the capture includes the state reads that happened during the request. You see the data your application was actually working with at the time of the failure.

A Practical Example

Say a user object gets pulled from an in-memory cache and has an unexpected null value in its role field. The error fires downstream when something tries to check permissions. Without state tracking, you'd see the permission error but have no idea what the cached object looked like. With state tracking, the captured data shows you exactly what was pulled from the cache and what its fields contained.

Complex In-Process State

This is especially useful for applications with complex in-process state, like session stores, feature flag caches, or configuration objects that get reloaded at runtime. The state snapshot tells you what the application believed to be true when it made the decision that led to the error.
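One way to observe reads from a registered container is a Proxy with a get trap. The observeState name and registration shape below are invented for illustration; errorcore's real interface may differ:

```javascript
// Sketch: recording reads from a registered state container.
function observeState(name, target, reads) {
  return new Proxy(target, {
    get(obj, key) {
      const value = obj[key];
      // Record what the application read and what it actually saw.
      if (typeof key === 'string') {
        reads.push({ container: name, key, value });
      }
      return value;
    },
  });
}

const reads = [];
const cache = observeState('userCache', { usr_8a3f: { role: null } }, reads);

// This read is recorded, capturing the unexpected null role.
const user = cache.usr_8a3f;
```

When the permission check later fails, the capture already holds the cached object exactly as the application saw it.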

07. Process and Environment Metadata

System Metrics

Each capture includes system-level context: heap memory usage, RSS, event loop lag, the count of active handles and requests, Node.js version, platform, architecture, and the process uptime. If the error happened while the process was under memory pressure or the event loop was backed up, that context is right there in the capture.

Deployment Context

Container and deployment metadata is included too. The container ID, the git SHA of the running code, and environment variables (filtered through PII scrubbing, covered below). If you're running multiple versions across a fleet, the git SHA tells you instantly which build produced the error.

Reading the Numbers

Memory at 95% tells a very different story than memory at 30%. An event loop lagging by 800ms means the process was already struggling before the error. These numbers help you separate "the code has a bug" from "the infrastructure was in a bad state."

08. PII Scrubbing

Built-In Scrubbing

All captured data passes through PII scrubbers before it leaves the process. Authorization headers get redacted. Passwords, tokens, API keys, and credit card numbers are stripped out. High-entropy strings that look like secrets are flagged and removed. This runs by default. You don't enable it. You can't disable it.

Custom Scrubbers

The scrubbing layer supports custom rules for your domain. If your application handles medical record IDs, social security numbers, or any field that's sensitive in your specific context, you register a custom scrubber. It runs alongside the built-in ones, and it applies to all data sources: IO payloads, local variables, request bodies, and state reads.
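A layered scrubber of this kind can be sketched as a rule pipeline. The patterns, rule format, and the medical-record rule below are assumptions for illustration, not errorcore's built-in rules:

```javascript
// Sketch: built-in scrubbing rules plus a custom, domain-specific one.
const builtinRules = [
  // Redact quoted authorization values in serialized JSON.
  { pattern: /(authorization"?\s*[:=]\s*")[^"]+/gi, replace: '$1[REDACTED]' },
  // Redact runs of 13-16 digits that look like card numbers.
  { pattern: /\b\d{13,16}\b/g, replace: '[REDACTED]' },
];

function scrub(text, customRules = []) {
  let out = text;
  for (const rule of [...builtinRules, ...customRules]) {
    out = out.replace(rule.pattern, rule.replace);
  }
  return out;
}

// A custom rule for a hypothetical medical record ID format.
const mrnRule = { pattern: /MRN-\d+/g, replace: '[REDACTED]' };

const input =
  '{"authorization": "Bearer abc123", "card": "4111111111111111", "note": "MRN-9912"}';
const clean = scrub(input, [mrnRule]);
```

Because every data source is serialized text by this point, the same pipeline applies uniformly to IO payloads, locals, request bodies, and state reads.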

Data Never Leaves the Process Raw

Because scrubbing happens inside the process before serialization, sensitive data never touches the wire. It doesn't end up in your error collector, your file output, or your stdout stream. The raw data exists only in process memory for the brief moment errorcore is assembling the capture.

09. Encryption and Transport

AES-256 Encryption

Error packages can be encrypted with AES-256 before leaving the application. You provide the key. Errorcore encrypts the serialized package and sends the ciphertext. If someone intercepts the payload in transit, they get encrypted bytes.

Transport Modes

Transport supports three modes: HTTP to a collector endpoint, file output to a local directory, or stdout for piping into whatever ingestion system you already run. These can be combined. You can write to a local file and send over HTTP at the same time.

Dead Letter Store

When the HTTP collector is unreachable, packages go into a local dead letter store. They queue on disk and get forwarded when the connection comes back. You don't lose error data because of a network blip or a collector restart. The queue has configurable size limits so it won't fill your disk if the collector is down for an extended period.
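The queue-and-forward behavior can be sketched as a bounded buffer. This version is in-memory for brevity (the real store is on disk), and the class and method names are illustrative:

```javascript
// Sketch: a bounded dead letter queue that drops the oldest packages
// when full and forwards everything once the collector is back.
class DeadLetterQueue {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.items = [];
  }

  enqueue(pkg) {
    this.items.push(pkg);
    // Enforce the size limit so the store cannot grow unbounded.
    while (this.items.length > this.maxSize) this.items.shift();
  }

  async flush(send) {
    // Forward queued packages in order; stop dequeuing on failure.
    while (this.items.length > 0) {
      await send(this.items[0]);
      this.items.shift();
    }
  }
}

const queue = new DeadLetterQueue(2);
queue.enqueue({ id: 1 });
queue.enqueue({ id: 2 });
queue.enqueue({ id: 3 }); // oldest package ({ id: 1 }) is dropped
```

Removing a package only after send resolves means a crash mid-flush re-delivers rather than loses data, trading possible duplicates for no silent loss.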

10. The Error Package

One Document, Everything Inside

Everything described above gets assembled into a single JSON document. One file. One object. It contains a schema version, a timestamp, the full error details with stack trace, the local variables from each frame, the request context, the IO timeline, state reads, process metadata, the code version, environment info, completeness flags that indicate whether any data was truncated, and an integrity signature.

Versioning and Completeness

The schema is versioned. Older packages can be read by newer tooling. The completeness flags tell you if the IO timeline was capped at a limit or if variable serialization hit a depth boundary. The integrity signature lets you verify that the package wasn't tampered with after capture.

One document. One place to look. No jumping between log aggregators, metrics dashboards, and trace viewers to reconstruct what happened.

error_package.json:

  Error details + stack
  Local variables
  IO timeline
  Request context
  State reads
  Process metadata
  Integrity signature

11. What This Means for You

When you get an error alert, you open one page and see everything. The request that triggered it, the IO that happened, the variables in scope, the state the app was holding, the system conditions at the time. You don't need to reproduce the bug. You don't need to correlate timestamps across three different tools. The answer is in the package.

The captured data often tells you the root cause directly. A null user because the database returned no rows. A payment failure because the upstream API timed out. A type error because a config value was missing in that specific environment. You see the cause, not just the symptom.

Debugging production errors goes from "I'll try to reproduce this locally" to "I can see exactly what happened." That's the difference errorcore makes. Not more dashboards. Not more log lines. Just the data you need, captured at the moment it matters.
