
Pipeline Execution

How to run pipelines, view results, and handle errors.


Running a Pipeline

1. Prepare Your Pipeline

Make sure:

  • Input nodes have data
  • Step nodes are configured
  • Nodes are connected (green lines show connections)

2. Click Run

Click the Run control in the top toolbar.

Current button behavior:

  • Idle: a play icon
  • Running: a spinning sync icon on a danger-style button

What happens:

  1. Pipeline engine builds a dependency graph from connections
  2. Steps are sorted into execution order (topological sort)
  3. Each step executes in sequence
  4. Results propagate through connections
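The run sequence above can be sketched in TypeScript. This is an illustrative topological sort over a connection graph, not the tool's actual engine code; node ids and the edge map shape are assumptions.

```typescript
// Sketch of steps 1-2 above: build in-degrees from connections, then sort
// nodes into execution order. Names are illustrative, not this tool's API.
type NodeId = string;

function topoSort(edges: Map<NodeId, NodeId[]>): NodeId[] {
  const indegree = new Map<NodeId, number>();
  for (const [from, tos] of edges) {
    indegree.set(from, indegree.get(from) ?? 0);
    for (const to of tos) indegree.set(to, (indegree.get(to) ?? 0) + 1);
  }
  // Start from nodes with no incoming connections (the Input nodes).
  const queue = [...indegree].filter(([, d]) => d === 0).map(([id]) => id);
  const order: NodeId[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const to of edges.get(id) ?? []) {
      const d = indegree.get(to)! - 1;
      indegree.set(to, d);
      if (d === 0) queue.push(to);
    }
  }
  if (order.length !== indegree.size) {
    throw new Error("Cycle detected: pipeline cannot run");
  }
  return order;
}

// Example: input feeds step, step feeds output.
const order = topoSort(new Map([
  ["input", ["step"]],
  ["step", ["output"]],
]));
// order is ["input", "step", "output"]
```

Because execution order comes from connections alone, a disconnected node simply never runs, and a cycle makes the pipeline unrunnable.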

3. Watch Execution

Visual feedback:

  • Nodes update their execution state as the run progresses
  • Step and output views show status symbols and duration labels when available

4. View Results

Click any step node to see its output:

  • Input node — shows source data
  • Step node — shows transformation result
  • Output node — shows final pipeline result

Execution States

State     Meaning
Idle      Pipeline or node has not run yet
Running   Step is currently executing
Success   Step completed successfully
Error     Step failed with an error
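The four states above could be modeled as a discriminated union. This is a sketch of one way to represent node status; the type and field names are illustrative, not taken from the tool's codebase.

```typescript
// Hypothetical model of the execution states listed above.
type ExecutionState =
  | { kind: "idle" }
  | { kind: "running" }
  | { kind: "success"; durationMs: number }
  | { kind: "error"; message: string };

function describe(s: ExecutionState): string {
  switch (s.kind) {
    case "idle": return "Pipeline or node has not run yet";
    case "running": return "Step is currently executing";
    case "success": return `Completed in ${s.durationMs}ms`;
    case "error": return `Failed: ${s.message}`;
  }
}

describe({ kind: "success", durationMs: 12 }); // "Completed in 12ms"
```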

Viewing Results

Result Display Modes

Tree View — Hierarchical, expandable JSON

  • Click ▶/▼ to expand/collapse objects and arrays
  • Hover over fields to see their JSON path
  • Search highlights matches in the tree
  • Great for exploring structure and finding specific values

Text View — Raw JSON with syntax highlighting

  • Full JSON text with color coding
  • Line numbers for reference
  • Find/replace supported
  • Great for copying, searching, and manual editing

What result viewing gives you

When you inspect a step or output node, you can:

  • Review the current output in Tree View
  • Switch to Text View for raw JSON/text
  • See error messages for failed execution
  • Confirm whether a step produced output at all

Error Handling

Common Errors

Step shows an error state? Click it to see the error.

“Invalid JSON”

  • Check your Input node data
  • Ensure JSON is properly formatted
  • Use a JSON validator to find syntax errors
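One quick check, assuming your data is plain JSON text: `JSON.parse` throws on the first syntax error, and most browsers include a position in the message. The helper below is illustrative, not the tool's own validator.

```typescript
// Minimal JSON validity check using the browser's own parser.
function validateJson(text: string): { ok: true } | { ok: false; message: string } {
  try {
    JSON.parse(text);
    return { ok: true };
  } catch (e) {
    return { ok: false, message: (e as Error).message };
  }
}

// Trailing commas are a common cause of "Invalid JSON":
validateJson('{"name": "a",}'); // ok: false, message points at the syntax error
validateJson('{"name": "a"}');  // ok: true
```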

“Missing required field”

  • Open the step’s configuration panel
  • Fill in all required fields (marked with *)
  • Check field formats (paths, expressions, etc.)

“Path not found”

  • Verify the JSON path exists in your data
  • Use the Tree View to explore your data structure
  • Check for typos in field names
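The lookup behind a "Path not found" error works roughly like the sketch below, assuming a dot-separated path syntax (the tool's actual path syntax may differ).

```typescript
// Resolve a dot-separated path against a JSON value; throw on a missing key,
// the situation the "Path not found" error reports.
function getPath(data: unknown, path: string): unknown {
  let current: any = data;
  for (const key of path.split(".")) {
    if (current == null || !(key in Object(current))) {
      throw new Error(`Path not found: "${key}" in "${path}"`);
    }
    current = current[key];
  }
  return current;
}

const data = { user: { name: "Ada", tags: ["x"] } };
getPath(data, "user.name");    // "Ada"
// getPath(data, "user.email") // throws: Path not found: "email" in "user.email"
```

Note that each segment must match the data exactly, which is why a single typo in a field name is enough to fail the whole path.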

“Utility failed”

  • Read the error message carefully
  • Check if input data matches what the utility expects
  • Try simplifying your pipeline to isolate the issue

Debugging Strategies

1. Check Each Step

  • Click each step to see its output
  • Find the first step that produces unexpected results
  • Fix that step before continuing

2. Simplify

  • Remove complex steps
  • Test with smaller datasets
  • Add steps back one at a time

3. Use Console

  • Open browser DevTools (F12)
  • Check the Console tab for error messages
  • Look for stack traces that pinpoint issues

4. Re-run

  • Sometimes transient issues occur
  • Click Run again to see if it clears
  • If it fails consistently, it’s a real error

Performance Considerations

Execution Speed

Fast pipelines:

  • Small datasets (< 1MB)
  • Simple transformations (filter, pick fields)
  • Few steps (< 10)

Slower pipelines:

  • Large datasets (> 10MB)
  • Complex transformations (aggregate on large arrays)
  • Many steps (> 20)

Optimization Tips

💡 Pipeline execution runs in a Web Worker — a separate thread from the main browser interface. This keeps the UI responsive even during complex transformations.

  1. Filter early — Remove unnecessary data as early as possible
  2. Pick fields first — Reduce dataset size before complex operations
  3. Avoid redundant steps — Don’t process the same data multiple times
  4. Use appropriate utilities — Some utilities are optimized for specific tasks
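Tips 1 and 2 in miniature: filtering and picking fields before anything else means later steps never see the data you don't need. The utility names here are illustrative, not the tool's built-ins.

```typescript
// "Filter early, pick fields first": drop unwanted rows, then keep only the
// fields needed downstream, before any heavier transformation runs.
type Row = { id: number; active: boolean; payload: string };

function filterThenPick(rows: Row[]): { id: number }[] {
  return rows
    .filter((r) => r.active)          // filter early: fewer rows survive
    .map((r) => ({ id: r.id }));      // pick fields: large payloads are dropped
}

const rows: Row[] = [
  { id: 1, active: true, payload: "x".repeat(1000) },
  { id: 2, active: false, payload: "y".repeat(1000) },
];
filterThenPick(rows); // [{ id: 1 }] — the 1000-character payloads never reach later steps
```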

Large Datasets

For datasets > 50MB:

  • Consider preprocessing data
  • Break into multiple smaller pipelines
  • Use utilities that stream data when possible

For datasets > 100MB:

  • Pipeline may take significant time
  • Browser may show “slow script” warning
  • Consider server-side processing instead

Re-running Pipelines

Re-run after changes

After changing step configuration, input data, or connections, run the pipeline again to refresh the outputs.

Manual re-run

Click Run to execute the pipeline again.

Each run starts fresh — Pipeline execution is stateless. Previous run results don’t affect the next run.

What you can inspect after a run

The pipeline UI can show:

  • Current success or error state per node
  • Step-level execution durations where available
  • Output data for successful steps

Background Execution

Web Worker Architecture:

Pipeline execution happens in a Web Worker — a separate thread from the main browser interface. This means:

✅ UI stays responsive — Interface doesn’t freeze during execution
✅ Large datasets work — Can process more data without browser lag
✅ Multiple pipelines — Can run multiple pipelines simultaneously
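A minimal sketch of this main-thread/worker split, assuming a simple message protocol (the message shape and file name below are illustrative, not this tool's actual protocol). The step runner itself is a pure function; the Worker wiring around it is shown in comments.

```typescript
// Worker-side core: run each step in order, feeding each output to the next,
// mirroring how results propagate through connections.
type StepFn = (input: unknown) => unknown;

function runSteps(input: unknown, steps: StepFn[]): unknown {
  return steps.reduce((data, step) => step(data), input);
}

// Inside a worker script this would be wired up roughly as:
//   self.onmessage = (e) => {
//     const result = runSteps(e.data.input, compiledSteps);
//     self.postMessage({ ok: true, result });
//   };
// and on the main thread:
//   const worker = new Worker("pipeline-worker.js");
//   worker.postMessage({ input });
//   worker.onmessage = (e) => showResult(e.data.result);

runSteps([1, 2, 3], [
  (xs) => (xs as number[]).filter((x) => x > 1),
  (xs) => (xs as number[]).map((x) => x * 10),
]); // [20, 30]
```

Because `postMessage` hands data across threads, the main thread is free to keep rendering while the worker grinds through the steps.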

Step Output Storage:

Results are stored using a 3-tier system:

  1. OPFS (Origin Private File System) — Fastest, desktop Chrome/Edge
  2. IndexedDB — Fallback for browsers without OPFS
  3. Memory — Final fallback

This ensures outputs persist and can be loaded on-demand.
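The tier selection above comes down to feature detection: try OPFS, fall back to IndexedDB, fall back to memory. A sketch under that assumption, with the actual read/write code omitted:

```typescript
// Pick a storage tier in the order described above. The function name is
// illustrative; the feature checks are standard browser APIs.
type StorageTier = "opfs" | "indexeddb" | "memory";

function pickStorageTier(g: { navigator?: any; indexedDB?: unknown }): StorageTier {
  if (g.navigator?.storage?.getDirectory) return "opfs"; // Origin Private File System
  if (g.indexedDB) return "indexeddb";                   // broad browser fallback
  return "memory";                                       // last resort, non-persistent
}

// In a browser this would be called as pickStorageTier(globalThis).
pickStorageTier({}); // "memory" when neither API is present
```

Only the memory tier loses data when the page closes, which is why it is the final fallback rather than the default.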

