Error Handling

Verity's SDK uses structured error classes so you can catch specific failure types and respond appropriately. All errors extend VerityError, so you can also catch everything with a single handler.

Error Hierarchy

VerityError (base)
  ├── VerityApiError           — non-2xx API response
  ├── LeaseConflictError       — 409, another agent holds the lease
  ├── EffectPreviouslyFailedError — effect already failed, reset required
  ├── CommitUncertainError     — action succeeded, commit failed (CRITICAL)
  ├── VerityConfigError        — missing required config
  └── VerityValidationError    — input data invalid or too large
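
Because every class in the hierarchy extends VerityError, a single instanceof check separates SDK failures from everything else. The stub classes and the isVerityFailure helper below are illustrative, not part of the SDK; in real code you would import the error classes from @verityinc/sdk.

```typescript
// Illustrative stubs mirroring the hierarchy above; import the real
// classes from '@verityinc/sdk' in production code.
class VerityError extends Error {}
class VerityApiError extends VerityError {
  constructor(public statusCode: number) {
    super(`Verity API error ${statusCode}`);
  }
}

// One check catches every SDK error, regardless of subclass.
function isVerityFailure(error: unknown): error is VerityError {
  return error instanceof VerityError;
}
```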

CommitUncertainError

This is the most critical error in Verity. It means your action was executed successfully (money was charged, email was sent, VM was created) but Verity could not record the result. Do not retry blindly.

When It Happens

CommitUncertainError is thrown when:

  1. Your act() function ran successfully and returned a result
  2. The SDK tried to commit the result to Verity (with up to 3 retries for transient errors)
  3. All commit attempts failed (network error, server error, timeout, or validation error)

The action happened. Verity just doesn't know about it yet.
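
The sequence above can be sketched in miniature. This is not the SDK's actual internals; the commit callback and the retry count are stand-ins for the demonstration, but it shows why the error carries the act() result: the action ran exactly once, and only the recording step failed.

```typescript
// Sketch only, NOT the real SDK: how a commit failure after a
// successful act() surfaces as CommitUncertainError.
class CommitUncertainError extends Error {
  constructor(
    public effectKey: string,
    public result: unknown,
    public commitError: unknown,
  ) {
    super(`Commit uncertain for ${effectKey}`);
  }
}

async function protectSketch<T>(
  effectKey: string,
  act: () => Promise<T>,
  commit: (result: T) => Promise<void>, // hypothetical recording step
  commitRetries = 3,
): Promise<T> {
  const result = await act(); // the real-world action runs exactly once
  for (let attempt = 1; attempt <= commitRetries; attempt++) {
    try {
      await commit(result); // try to record the result with Verity
      return result;
    } catch (err) {
      if (attempt === commitRetries) {
        // The action succeeded but was never recorded: throw the
        // dedicated error instead of retrying the action itself.
        throw new CommitUncertainError(effectKey, result, err);
      }
    }
  }
  return result; // unreachable; satisfies the return type
}
```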

Properties

Property      Type      Description
effectKey     string    Which effect is in the uncertain state
result        unknown   The value returned by act() — the action's result
commitError   unknown   The error that prevented the commit from being recorded

How to Handle It

// TypeScript
import { CommitUncertainError } from '@verityinc/sdk';

try {
  await verity.protect('charge:order_123', {
    act: () => stripe.charges.create({ amount: 5000 }),
  });
} catch (error) {
  if (error instanceof CommitUncertainError) {
    // 1. HALT — do not retry protect()
    // 2. Log the result for manual reconciliation
    console.error(`CRITICAL: Action succeeded for ${error.effectKey}`);
    console.error(`Result: ${JSON.stringify(error.result)}`);
    console.error(`Commit error: ${error.commitError}`);

    // 3. Alert your ops team
    alertOpsTeam({
      type: 'commit_uncertain',
      effectKey: error.effectKey,
      result: error.result,
    });

    // 4. Check Explorer UI or query API to reconcile
    // The effect may show as RUNNING (lease active) or will
    // eventually expire. An admin can manually commit or reset.
  }
}

# Python
from verity import CommitUncertainError

try:
    await verity.protect("charge:order_123", act=execute_charge)
except CommitUncertainError as e:
    logger.critical(
        f"CRITICAL: Action succeeded for {e.effect_key}. "
        f"Result: {e.result}. Commit error: {e.commit_error}"
    )
    alert_ops_team(e)
    # DO NOT RETRY — check Explorer to reconcile

Why is this a separate error? Because the correct response is fundamentally different from other failures. For most errors, retrying is fine. For CommitUncertainError, retrying could cause a duplicate action (the original action already happened). The explicit error type forces you to handle it differently.

LeaseConflictError

Thrown when another agent currently holds the lease for an effect (HTTP 409).

When It Happens

  • onConflict: 'throw' was set — the SDK throws immediately on 409
  • onConflict: 'retry' (default) and all retry attempts were exhausted

Properties

Property    Type      Description
effectKey   string    Which effect has the conflict
body        unknown   The 409 response body from the API

How to Handle It

import { LeaseConflictError } from '@verityinc/sdk';

try {
  await verity.protect('charge:order_123', { act: chargeCustomer }, {
    onConflict: 'throw',  // don't retry, fail fast
  });
} catch (error) {
  if (error instanceof LeaseConflictError) {
    // Another agent is handling this effect — safe to back off
    console.log(`Effect ${error.effectKey} is being processed elsewhere`);
  }
}

In most cases, the default retry behavior handles conflicts automatically. You only need to catch this error if you're using onConflict: 'throw' or want to handle exhausted retries.

EffectPreviouslyFailedError

Thrown when the effect already failed on a prior attempt and the result is cached. This is not a bug — the action legitimately failed. An admin must reset the effect before it can be retried.

Properties

Property      Type      Description
effectKey     string    Which effect failed
effectId      string    The effect's internal ID
cachedError   unknown   The original failure details

How to Handle It

import { EffectPreviouslyFailedError } from '@verityinc/sdk';

try {
  await run.protect('process_refund', { act: executeRefund });
} catch (error) {
  if (error instanceof EffectPreviouslyFailedError) {
    console.log(`Effect ${error.effectKey} previously failed:`, error.cachedError);
    // Options:
    // 1. Log and alert — wait for admin to reset in Explorer
    // 2. Programmatically reset via the admin API (if you have org_admin key)
    // 3. Skip this step and continue with a fallback
  }
}

Why not auto-retry? Because the effect genuinely failed (e.g., Stripe declined the card). Automatically retrying would hit the same failure. The admin reset is a deliberate checkpoint — fix the root cause first, then reset.

VerityApiError

Thrown when the Verity API returns a non-2xx status code (other than 409, which becomes LeaseConflictError).

Properties

Property     Type      Description
statusCode   number    HTTP status code
body         unknown   Response body from the API
requestId    string?   Request ID for support (if present in response)

Common Status Codes

Code   Meaning
400    Bad request — invalid effectKey, payload too large, etc.
401    Invalid or missing API key
403    Insufficient permissions or namespace frozen
404    Effect or namespace not found
422    Fence token mismatch — stale lease
500    Internal server error (transient — SDK retries commit/fail automatically)
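
The routing rule from the paragraph above (409 becomes LeaseConflictError, every other non-2xx becomes VerityApiError) can be sketched as a small mapping function. The class shapes match the property tables on this page, but toVerityError itself is a hypothetical helper, not SDK source.

```typescript
// Illustrative error classes; the real ones live in '@verityinc/sdk'.
class VerityApiError extends Error {
  constructor(public statusCode: number, public body: unknown) {
    super(`Verity API error ${statusCode}`);
  }
}
class LeaseConflictError extends Error {
  constructor(public effectKey: string, public body: unknown) {
    super(`Lease conflict for ${effectKey}`);
  }
}

// Hypothetical mapping: 409 gets its own class, every other non-2xx
// status becomes a generic VerityApiError.
function toVerityError(effectKey: string, statusCode: number, body: unknown): Error {
  if (statusCode === 409) return new LeaseConflictError(effectKey, body);
  return new VerityApiError(statusCode, body);
}
```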

VerityConfigError

Thrown when a required configuration value is missing:

  • baseUrl not provided
  • apiKey not provided
  • namespace required but not set (either in config or per-call)

Fix: provide the missing value in the VerityClient constructor or per-call params.
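
As a rough sketch of the three rules above, a pre-flight check might look like the following. The VerityConfig shape and assertConfig helper are assumptions for illustration; the real SDK performs these checks itself and throws VerityConfigError.

```typescript
// Hypothetical config shape; field names follow this page's docs.
interface VerityConfig {
  baseUrl?: string;
  apiKey?: string;
  namespace?: string;
}

// Mirrors the rules above: baseUrl and apiKey are always required;
// namespace is required only when the call needs one and no per-call
// value is supplied.
function assertConfig(cfg: VerityConfig, needsNamespace: boolean): void {
  if (!cfg.baseUrl) throw new Error('VerityConfigError: baseUrl not provided');
  if (!cfg.apiKey) throw new Error('VerityConfigError: apiKey not provided');
  if (needsNamespace && !cfg.namespace) {
    throw new Error('VerityConfigError: namespace required but not set');
  }
}
```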

VerityValidationError

Thrown when input data fails validation:

  • inputJson is not JSON-serializable (circular references, BigInt, functions)
  • inputJson or result exceeds 64 KB

Fix: ensure your payloads are plain JSON objects within the size limit.
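
You can catch both failure modes client-side before calling protect(). The 64 KB limit and the serializability rules come from this page; the validatePayload helper itself is hypothetical, not part of the SDK.

```typescript
const MAX_PAYLOAD_BYTES = 64 * 1024; // limit stated in the docs

function validatePayload(value: unknown): void {
  let json: string | undefined;
  try {
    // JSON.stringify throws on circular references and BigInt values.
    json = JSON.stringify(value);
  } catch (err) {
    throw new Error(`Payload is not JSON-serializable: ${err}`);
  }
  // Functions and bare undefined serialize to nothing rather than throwing.
  if (json === undefined) {
    throw new Error('Payload serializes to nothing (function or undefined)');
  }
  const bytes = new TextEncoder().encode(json).length;
  if (bytes > MAX_PAYLOAD_BYTES) {
    throw new Error(`Payload is ${bytes} bytes; limit is ${MAX_PAYLOAD_BYTES}`);
  }
}
```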

Comprehensive Error Handling

Here's a production-ready error handling pattern:

import {
  VerityError,
  CommitUncertainError,
  EffectPreviouslyFailedError,
  LeaseConflictError,
  VerityApiError,
} from '@verityinc/sdk';

async function safeProtect() {
  try {
    return await verity.protect('charge:order_123', {
      observe: checkExistingCharge,
      act: executeCharge,
    });
  } catch (error) {
    // Priority 1: Commit uncertainty — the action happened
    if (error instanceof CommitUncertainError) {
      await alertOpsTeam('commit_uncertain', error);
      throw error; // Don't swallow — caller must know
    }

    // Priority 2: Previously failed — needs admin reset
    if (error instanceof EffectPreviouslyFailedError) {
      logger.warn(`Effect ${error.effectKey} needs reset: ${error.cachedError}`);
      throw error;
    }

    // Priority 3: Lease conflict — another agent is on it
    if (error instanceof LeaseConflictError) {
      logger.info(`Effect ${error.effectKey} is being handled elsewhere`);
      return null; // or throw, depending on your use case
    }

    // Priority 4: API errors
    if (error instanceof VerityApiError) {
      logger.error(`Verity API error ${error.statusCode}:`, error.body);
      throw error;
    }

    // Priority 5: Any other Verity error
    if (error instanceof VerityError) {
      logger.error('Verity error:', error.message);
      throw error;
    }

    // Priority 6: Non-Verity error (your act() threw)
    throw error;
  }
}

Internal Retry Behavior

The SDK automatically retries certain operations on transient failures (5xx, network errors):

Operation                 Retries      Details
Lease acquisition (409)   Up to 12     Exponential backoff, ±30% jitter, up to 15s delay
Commit                    Up to 3      Only 5xx and network errors. 300/600/1200ms backoff.
Fail recording            Up to 3      Same as commit. If all fail, error is logged but original error is thrown.
Lease renewal             Continuous   Self-scheduling. Stops on 409/404/410/403 (lease lost).
Observe reporting         0            Fire-and-forget. Never blocks execution.

What's Next?

  • Core Concepts — understand the mechanics that produce these errors
  • Explorer UI — use the dashboard to investigate failed effects
  • Workflows — handling errors in multi-effect workflows