# What Is OpenTelemetry Profiles — and How It Will Change Production Profiling
OpenTelemetry Profiles is a new, emerging OpenTelemetry “signal” and OTLP data model for profiling data—meant to standardize how time-based stack-sample profiles are represented and transported alongside logs, metrics, and traces. In practice, it turns profiling into a first-class citizen of the OpenTelemetry pipeline: you can capture CPU/allocation-style hotspots as profiles, ship them through the OpenTelemetry Collector using OTLP, and correlate them with the rest of your observability telemetry with far less glue code and fewer vendor-specific formats.
## OpenTelemetry Profiles, in plain terms
For years, production profiling has been powerful but awkward to operationalize across heterogeneous stacks: agents output one format, backends expect another, and correlation to traces/metrics is often an afterthought. OpenTelemetry Profiles (often “OTel Profiles”) aims to make profiling as portable and composable as the other three classic observability pillars. The OpenTelemetry project explicitly frames profiles as the “fourth signal,” complementing logs, metrics, and traces.
The key idea isn’t “a new profiler.” It’s a shared data model and transport: a standard way to describe profiles as collections of stack samples over time (with the metadata required to interpret them), and a standard way to send them—OTLP Profiles—through the same ingestion plumbing teams already use for their other OpenTelemetry signals.
## How the data model and transport work (a technical primer)
OpenTelemetry Profiles has clear lineage: it was originally inspired by the pprof format and developed in collaboration with pprof maintainers, but it has been extended into an independent standard to meet the broader requirements of the OpenTelemetry ecosystem. Concretely, the schema lives in a dedicated proto definition—profiles.proto—in the opentelemetry-proto-profile repository, rather than being “just pprof over the wire.”
### A profile as structured time-based stack sampling
At a high level, a profiling payload is a time-series-like set of samples that includes:
- Repeated timestamp arrays (when samples happened)
- Stacktrace indices that refer into shared tables (to avoid duplicating data and preserve relationships efficiently)
- Symbol mappings (so stack frames can be resolved to functions/files/lines)
- Explicit metadata about symbol fidelity, including an enum with values such as `SYMBOL_FIDELITY_UNSPECIFIED` and `SYMBOL_FIDELITY_FULL`, to indicate how complete the symbol information is
A subtle but important difference from “dumping a profile file somewhere” is that the OpenTelemetry profile schema bakes in integrity constraints. For example, it constrains relationships like timestamp arrays and stacktrace index arrays so they must match in length and remain consistent. That’s a big deal for interoperability: backends and converters can rely on structural invariants rather than guessing whether the incoming payload is malformed.
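To make the shape of the model concrete, here is a deliberately simplified Python sketch. The field names (`timestamps_ns`, `stacktrace_indices`, `stacktraces`) and the `validate` method are illustrative inventions that mirror the ideas above — parallel arrays, shared tables, a symbol-fidelity enum, and enforced structural invariants — not the actual `profiles.proto` schema:

```python
from dataclasses import dataclass, field
from enum import IntEnum

# Enum values taken from the article's description of symbol fidelity.
class SymbolFidelity(IntEnum):
    SYMBOL_FIDELITY_UNSPECIFIED = 0
    SYMBOL_FIDELITY_FULL = 1

@dataclass
class Profile:
    # Parallel arrays: one timestamp per captured sample.
    timestamps_ns: list[int] = field(default_factory=list)
    # Each entry indexes into the shared stacktrace table below,
    # so identical stacks are stored once instead of per sample.
    stacktrace_indices: list[int] = field(default_factory=list)
    # Shared table of resolved stacks (leaf-last function names here).
    stacktraces: list[list[str]] = field(default_factory=list)
    symbol_fidelity: SymbolFidelity = SymbolFidelity.SYMBOL_FIDELITY_UNSPECIFIED

    def validate(self) -> None:
        # The structural invariants described above: parallel arrays
        # must agree in length, and every index must resolve.
        if len(self.timestamps_ns) != len(self.stacktrace_indices):
            raise ValueError("timestamp/stacktrace arrays differ in length")
        for i in self.stacktrace_indices:
            if not 0 <= i < len(self.stacktraces):
                raise ValueError(f"stacktrace index {i} out of range")

p = Profile(
    timestamps_ns=[1_000, 2_000],
    stacktrace_indices=[0, 0],
    stacktraces=[["main", "handle_request", "parse_json"]],
    symbol_fidelity=SymbolFidelity.SYMBOL_FIDELITY_FULL,
)
p.validate()  # consistent payload: same lengths, indices in range
```

Because the invariants live in the schema rather than in backend-specific validation code, a converter receiving a payload like this can reject malformed data up front instead of guessing.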
### OTLP is the distribution channel
Once you have a standard profile model, you need a standard transport. OTLP Profiles uses OTLP—the same protocol OpenTelemetry uses for metrics, logs, and traces. That means profiling payloads can flow through the OpenTelemetry Collector and onward to vendor backends, where they can be converted into whatever storage or analysis format a backend prefers (for example, pprof on disk, or an internal profile store).
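In Collector terms, profiles become a fourth pipeline alongside traces, metrics, and logs. The following config is a hedged sketch: profiles support in the Collector is experimental at the time of writing, and the exact component names, endpoints, and any required feature gates may differ by Collector version and distribution:

```yaml
# Sketch: an OpenTelemetry Collector config with a profiles pipeline.
# Profiles support is experimental; verify component availability and
# feature gates for your Collector version before relying on this.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend address

service:
  pipelines:
    profiles:              # the "fourth signal", next to traces/metrics/logs
      receivers: [otlp]
      exporters: [otlp]
```

The point of the sketch is the shape: the same OTLP receiver and exporter components that carry the other signals can carry profiles, so no parallel ingestion stack is needed.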
This “network-level representation vs. storage representation” split is deliberate. The OTLP profile proto is the interchange format; backends can still persist profiles in their preferred internal model. Grafana Pyroscope, for instance, has OTLP ingestion paths and conversion logic that adapt incoming data into the system’s storage and query internals.
## Why this is different from existing profiling tools
OpenTelemetry Profiles is best understood as an attempt to fix the most persistent operational issues around production profiling—not to replace every profiler.
### 1) Vendor-neutral profiling data, fewer bespoke exporters
Without a shared wire format, teams tend to accumulate format converters and one-off exporters: “this agent outputs X; that backend wants Y.” Standardizing profiles as an OTLP signal reduces the need for bespoke plumbing, because tooling can agree on one representation for ingestion and interchange even if storage differs downstream.
### 2) First-class correlation with logs/metrics/traces
Profiles get dramatically more useful when you can connect them to context: what deployment, which request path, what time window, what trace spans were slow, what metric spiked. OpenTelemetry Profiles is designed to live in the same ecosystem as existing OpenTelemetry signals, making that kind of correlation a core expectation rather than a custom integration.
If you’ve already invested in OpenTelemetry for logs/metrics/traces, OTLP Profiles is essentially a way to extend that investment into performance forensics—without building a parallel data plane.
### 3) A path to low-overhead continuous profiling
The ecosystem work explicitly includes low-overhead continuous production profiling approaches. The OpenTelemetry eBPF profiler agent (in the opentelemetry-ebpf-profiler repository) is one example of a work-in-progress implementation: it loads eBPF programs, unwinds stacks, and reports captured profiles as the OTel profiling signal. The promise is that whole-system sampling can be collected with relatively small performance impact—making “always-on” profiling more viable.
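To see why sampling is cheap relative to instrumenting every call, here is a toy in-process stack sampler in Python. This is purely illustrative of the time-based sampling idea — the real eBPF agent samples whole systems from the kernel with far lower overhead, and the function and variable names here are invented for the sketch:

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(interval_s: float, duration_s: float) -> Counter:
    """Periodically snapshot the main thread's stack and count occurrences.

    Cost scales with the sampling rate, not with how much code runs --
    the property that makes "always-on" profiling plausible.
    """
    main_id = threading.main_thread().ident
    counts: Counter = Counter()
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(main_id)
        # Walk the frame chain to build a root-first stack signature.
        stack = []
        while frame is not None:
            stack.append(frame.f_code.co_name)
            frame = frame.f_back
        counts[tuple(reversed(stack))] += 1
        time.sleep(interval_s)  # sampling, not tracing: bounded overhead
    return counts
```

Aggregating these stack counts over time windows is essentially what a continuous profiler ships upstream; in the OTel model, that aggregate becomes a profiles payload on the same OTLP pipeline.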
## Ecosystem traction and what shipped in the public alpha
OpenTelemetry Profiles entered public alpha on 2026-03-26. That matters because it marks the point where the community expects real users to test the signal, run it against real workloads, and help refine the shape of the spec and tooling.
According to the project materials, the alpha includes key building blocks:
- The OTLP profiles data format
- A native lossless pprof translator
- Semantic conventions
- A conformance checker
On the adoption side, the effort is described as collaborative, with involvement from Google, Datadog, Elastic, and open-source projects like Grafana Pyroscope adapting ingestion and storage paths to accept OTLP Profiles. This is the “boring infrastructure” that determines whether a standard sticks: ingestion, translation, conventions, and validation—not just a proto file.
For readers tracking adjacent observability plumbing, it’s worth situating Profiles alongside other “standards that unlock ecosystems” efforts. (If you want a parallel example in a different domain, see our explainer on What Is Chandra OCR — and How Layout‑Preserving OCR Actually Works: the theme is the same—shared representations reduce friction and expand interoperability.)
## Why It Matters Now
The immediate catalyst is the public alpha milestone (Mar 26, 2026), which tells teams two things at once: the model is coherent enough to trial, and it’s still malleable enough that feedback can shape the end-state.
This comes at a time when operational teams increasingly need to explain performance regressions that don’t show up cleanly in metrics or intermittent slowdowns that traces alone can’t fully account for. Profiles are often the missing piece: they reveal “where the CPU went” or “which code paths got hot,” rather than just that something got slower.
What changes with OpenTelemetry Profiles is the cost of integrating profiling into the production observability loop. A donated agent direction (eBPF profiling), Collector integrations, and OTLP transport reduce the effort required to try continuous profiling without building custom pipelines. If you already use the OpenTelemetry Collector, adding a fourth signal is conceptually simpler than standing up a separate profiler ingestion stack.
(For more on the day’s broader theme of pragmatic engineering workarounds and platform shifts, see Today’s TechScan: EU Privacy Push, On‑Device ML Wins, and Clever Devtool Workarounds.)
## Practical implications for engineering teams
- Faster root-cause analysis: Profiles can be examined alongside trace windows and metric spikes to identify hot functions and triggering conditions.
- Vendor portability: A single OTLP export path reduces lock-in pressure and makes multi-backend strategies more plausible.
- An alpha caveat: The signal is usable for experiments, but schemas and conventions may evolve. Plan pilots, validate with the conformance checker, and pay attention to symbol fidelity—because incomplete symbols can limit the usefulness of profiles even when sampling is “working.”
## What to Watch
- Spec churn after alpha feedback: Follow OTEPs under `oteps/profiles` and updates in `opentelemetry-proto-profile` for schema refinements.
- Collector and backend adoption: Watch for production-grade OTLP Profiles ingestion in open-source backends (including Pyroscope) and commercial vendors.
- Maturity of the eBPF profiler and runtime support: Improvements in stack unwinding and symbol fidelity will likely determine how broadly “low-overhead continuous profiling” works in real environments.
- Semantic conventions and conformance tooling: Widespread interoperability will depend on stable conventions for linking profiles to the rest of OpenTelemetry signals and a checker teams can trust in CI and production rollouts.
## About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.