One governed model for every pipeline
A transparent pipeline model you can see, audit, and deploy safely
LyftData uses a deterministic Server → Workers → Jobs model to collect, shape, and route telemetry. Every stage is versioned, signed, and observable.
Traditional agents hide logic inside vendor UIs and per-host configs. With LyftData you can view the entire pipeline lifecycle — Job definition, signatures, Run & Trace output, and Worker execution — in one place.
Sources → Actions → Channels → Destinations
The living pipeline graph, defined entirely within a single Job.
Sources
Files, EDR exports, Windows Events, APIs, buckets — anything producing telemetry.
Actions
Filters, parsers, maskers, enrichers, and scripts applied per event inside the Job.
Channels
Fan-out lanes defined in the Job so the same data can feed multiple tools intentionally.
Destinations
SIEMs, observability platforms, analytics systems, archives — each gets exactly what it needs.
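Putting the four stages together, a Job's graph could be sketched in a declarative form like the following. This is a hypothetical syntax for illustration only — the field names and layout are assumptions, not the actual LyftData Job schema:

```yaml
# Hypothetical Job definition sketch — illustrative, not the real LyftData schema.
job:
  name: edr-to-siem-and-s3
  version: 3
  sources:
    - id: edr_export
      type: s3_bucket            # anything producing telemetry
      bucket: edr-exports
  actions:
    - id: drop_duplicates
      type: filter
      expr: event.is_duplicate == false
    - id: mask_ids
      type: mask
      fields: [employee_id, email]
  channels:
    - id: siem_lane              # curated events only
      after: [drop_duplicates, mask_ids]
    - id: archive_lane           # full-fidelity copy
      after: []
  destinations:
    - id: siem
      channel: siem_lane
    - id: s3_archive
      channel: archive_lane
```

Because the whole graph lives in one definition, it can be diffed, signed, and reviewed like any other piece of version-controlled configuration.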
Core components
Server, Workers, Jobs, and Run & Trace
One control plane, declarative Jobs, stateless Workers, and a truth window to validate every run.
Server (Control Plane)
The source of truth for every pipeline.
- Stores signed Job definitions, versions, approvals, and lineage.
- Schedules deployments with signatures and keeps governance evidence.
Workers (Execution Engine)
Lightweight, stateless executors.
- Fetch signed Jobs, validate signatures, and run pipelines near your data.
- Scale horizontally without changing pipeline code or policy.
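The "validate signatures" step can be illustrated with a generic HMAC check. This is a minimal sketch of the idea, not LyftData's actual signing scheme — the function name, key handling, and bundle format here are all assumptions:

```python
import hashlib
import hmac

def verify_job_bundle(bundle: bytes, signature: str, key: bytes) -> bool:
    """Reject any Job whose signature doesn't match before executing it.

    Illustrative only: stands in for whatever signing scheme the control
    plane actually uses (e.g. asymmetric signatures in production).
    """
    expected = hmac.new(key, bundle, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

key = b"shared-signing-key"  # placeholder key material
bundle = b'{"job": "edr-to-siem-and-s3", "version": 3}'
sig = hmac.new(key, bundle, hashlib.sha256).hexdigest()

assert verify_job_bundle(bundle, sig, key)              # signed bundle runs
assert not verify_job_bundle(bundle + b"x", sig, key)   # tampered bundle is rejected
```

The point of the check is that a Worker never executes a pipeline the control plane did not sign, even if the bundle was altered in transit.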
Jobs (Your Pipelines)
Signed, version-controlled definitions.
- Inputs, Actions, Channels, and Outputs live in one reviewable graph.
- Diff, sign, and promote Jobs between environments with confidence.
Run & Trace
Your truth window before production.
- See exactly what was dropped, masked, enriched, or routed at each Action.
- Use traces for validation, governance reviews, and incident response.
Pipeline walkthrough
Input → Filter → Mask → Enrich → Channel Split → Destinations
A single Job governs the entire flow before anything hits your tools.
Input
Read telemetry from EDR exports, APIs, buckets, or streams.
Filter
Drop duplicates and noisy carbon-copy events up front.
Mask
Mask employee IDs and sensitive fields before routing.
Enrich
Enrich IPs with threat intel or contextual metadata.
Channel Split
Fan-out traffic through Channels to feed multiple tools intentionally.
Destinations
Curated events flow to SIEMs while full-fidelity copies archive to S3.
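The Filter → Mask → Enrich → Split stages above amount to per-event transforms followed by a fan-out. As a plain-Python illustration of that flow — these function names are hypothetical, not the LyftData Action API:

```python
def filter_event(event: dict) -> bool:
    """Drop duplicates and carbon-copy noise up front."""
    return not event.get("is_duplicate", False)

def mask_event(event: dict) -> dict:
    """Mask sensitive fields before any routing happens."""
    masked = dict(event)
    for field in ("employee_id", "email"):
        if field in masked:
            masked[field] = "***"
    return masked

def enrich_event(event: dict, threat_intel: dict) -> dict:
    """Attach contextual metadata, e.g. a threat-intel verdict per source IP."""
    enriched = dict(event)
    enriched["ip_reputation"] = threat_intel.get(event.get("src_ip"), "unknown")
    return enriched

def run_pipeline(events, threat_intel):
    """Input → Filter → Mask → Enrich, then fan out per channel."""
    curated, archive = [], []
    for event in events:
        archive.append(event)  # full-fidelity copy to the S3 lane
        if filter_event(event):
            curated.append(enrich_event(mask_event(event), threat_intel))
    return curated, archive

events = [
    {"employee_id": "E123", "src_ip": "10.0.0.5"},
    {"employee_id": "E123", "src_ip": "10.0.0.5", "is_duplicate": True},
]
curated, archive = run_pipeline(events, {"10.0.0.5": "benign"})
# curated holds one masked, enriched event; archive keeps both originals.
```

Note the intentional asymmetry: the SIEM lane sees only filtered, masked, enriched events, while the archive lane keeps everything at full fidelity.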
Real pipeline scenario
Before LyftData vs After LyftData
The same EDR-to-SIEM-and-S3 pipeline, with predictable cost, governance, and auditability.
Before LyftData
Ingest from everywhere without visibility
Multiple regions and tools without a consistent pipeline.
Sensitive fields leak across tools
Masking is inconsistent and depends on per-host configs.
Cost balloons with noise
Duplicates and carbon copies hit expensive destinations.
No single trace of what happened
Routing and transformations are opaque when incidents occur.
After LyftData
One Job governs the flow
Inputs, Actions, Channels, and Outputs live in a signed, reviewable graph.
Mask and enrich upstream
IDs and emails are masked, IPs enriched before tools ever see them.
Intentional splits per tool
Curated events to the SIEM, full-fidelity copies archived predictably.
Trace every decision
Run & Trace validates the pipeline before production and during incidents.
Deployment model
Run LyftData where you already run your systems
Security, SRE, and Analytics teams all read from the same Job definition. Channels tailor outputs per tool while keeping policy centralized.
Deploy anywhere
Run Server once per environment and place Workers wherever data lives — on-prem, cloud, or VPCs.
Self-managed control plane
Signed bundles, approvals, and lineage stay in your infrastructure under your governance.
Workers near the data
Keep telemetry local, scale horizontally, and avoid per-tool agents or rewrites.
Build your first pipeline → Start Free Pilot
Spin up Server, add Workers, author your first Job, and Run & Trace it before production.
Walk through Server → Workers → Jobs in detail.
Want to see what you can actually build? Explore the capabilities unlocked by this model.
Check compatibility with your stack. Browse supported sources and destinations.