In today's world of complex microservices and distributed architectures, understanding what's happening inside your application is no longer a luxury—it's a necessity. When things go wrong, a slow response time or a cryptic error can trigger a frantic scramble across siloed dashboards, logs, and metrics. You have metrics in Prometheus, but they only tell you that something is slow, not why. You have logs, but they lack the context of the entire request.
This is where the three pillars of observability—metrics, logs, and traces—come into play. While metrics and logs are crucial, distributed tracing is the connective tissue that ties everything together. It gives you the narrative of a request, from start to finish, across every service it touches.
At trace.do, we believe in Observability as Code. We make generating powerful, detailed traces as simple as wrapping a function. But the real power comes from integrating that data with the best-in-class tools you already know and love.
This post will show you how to combine the effortless, code-driven tracing of trace.do with the powerful metrics of Prometheus and the rich visualization of Jaeger to create a unified, open-standards-based observability stack.
Why combine these three tools? Because they solve different parts of the same problem, and together, they provide complete clarity.
trace.do (The Source of Truth): trace.do makes instrumentation a breeze. With a simple API, you generate rich, detailed trace data directly from your code. It answers the question: "What is the exact journey of this request and what happened along the way?" Because it's built on OpenTelemetry (OTel), this data is vendor-neutral and ready to be sent anywhere.
Prometheus (The Watchtower): Prometheus excels at aggregating time-series data. It scrapes metrics from your services and tells you about high-level trends. It answers the question: "Is my service healthy? What are my request rates, error percentages, and P99 latencies?" It's your early-warning system.
Jaeger (The Microscope): When Prometheus tells you there's a problem, Jaeger helps you find out why. It ingests trace data and provides a powerful UI to visualize the entire lifecycle of a request as a flame graph. It answers the question: "A specific request was slow. Where exactly did the time go?"
By combining them, you create a seamless workflow: Prometheus alerts you to a problem, and Jaeger, using data generated by trace.do, shows you the root cause.
Everything starts with good data. trace.do simplifies instrumentation so you can focus on your business logic, not on configuring telemetry.
Imagine you have a function to process a customer order. Instrumenting it with trace.do is clean and declarative:
import { trace } from '@do/trace';

async function processOrder(orderId: string) {
  // Automatically trace the entire function execution
  return trace.span('processOrder', async (span) => {
    span.setAttribute('order.id', orderId);

    // The trace context is automatically propagated
    const payment = await completePayment(orderId);
    span.addEvent('Payment processed', { paymentId: payment.id });

    await dispatchShipment(orderId);
    span.addEvent('Shipment dispatched');

    return { success: true };
  });
}
This simple trace.span() wrapper automatically captures the span's start and end times, the order.id attribute, the payment and shipment events, any errors thrown along the way, and the trace context that flows into downstream calls like completePayment and dispatchShipment.
Because trace.do is built on OpenTelemetry, we can now configure our application to export this data to both Prometheus and Jaeger.
With your application already instrumented using trace.do, adding Prometheus metrics is a configuration step, not a code rewrite. Because the data is standard OpenTelemetry, key metrics (like request counts and latency) can be derived from your trace spans, and the SDK can expose a Prometheus-compatible scrape endpoint with a single metric reader.
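As a minimal sketch using the standard OpenTelemetry JS packages (the exact integration point with trace.do's SDK is an assumption, so check the trace.do docs for the supported configuration hook):

import { NodeSDK } from '@opentelemetry/sdk-node';
import { PrometheusExporter } from '@opentelemetry/exporter-prometheus';

// Expose a Prometheus-format /metrics endpoint on port 9464 for scraping.
const sdk = new NodeSDK({
  metricReader: new PrometheusExporter({ port: 9464 }),
});

sdk.start();

Point a Prometheus scrape job at port 9464 and metrics start flowing without touching the processOrder code.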
Now, without any extra instrumentation, you can build Grafana dashboards that visualize metrics like request rate, error percentage, and P99 latency for each traced operation.
You now have a high-level view of your system's health, all powered by the same trace.do instrumentation.
While Prometheus tells you that latency has spiked, Jaeger will show you where. Let's send our detailed span data to a Jaeger instance.
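Here is a sketch of the trace side of the same SDK setup, assuming a Jaeger instance with OTLP ingestion enabled and the standard OpenTelemetry JS exporter packages:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Ship spans to Jaeger's OTLP HTTP endpoint.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces',
  }),
});

sdk.start();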
That's it! Now, the traces generated by your trace.span('processOrder', ...) call are sent directly to Jaeger.
When you open the Jaeger UI, you can search for traces with the order.id attribute you set and see a detailed flame graph showing the exact timing and relationship between processOrder, completePayment, and dispatchShipment. If dispatchShipment is the source of a delay, it will be immediately obvious.
Imagine this real-world scenario: a Prometheus alert fires because P99 latency on processOrder has spiked. Your Grafana dashboard confirms the slowdown, so you open Jaeger, search for recent processOrder traces by their order.id attribute, and immediately see that the dispatchShipment span is consuming most of the request time.
In minutes, you've gone from a vague alert to a precise root cause, all because your tools were working together on top of a foundation of clean, consistent data from trace.do.
Stop juggling disconnected tools and wrestling with complex instrumentation. By using trace.do as your "Observability as Code" foundation, you can feed a rich, standardized data stream into powerful open-source tools like Prometheus and Jaeger. This creates a robust, unified stack that helps you monitor, debug, and optimize your services faster than ever before.
Ready to gain complete clarity into your systems? Visit trace.do today and see how simple automated distributed tracing can be.