busstop.dev

Thoughts on Stdout-Based Observability

/ 4 min read

I’ve been thinking lately about something that’s been bugging me: how we handle telemetry data in our applications. Right now, we spend a lot of time configuring where logs should go, how metrics should be exported, and where traces should be sent. What if there was a simpler way?

The Current Telemetry Configuration Problem

You’ve probably seen this pattern: each application needs its own configuration for observability. We set up logging frameworks, configure metrics exporters, wire up tracing SDKs, and manage credentials for various monitoring services. When you deploy the same application in different environments, you often need different configurations for each one.

This creates friction. Developers have to think about infrastructure concerns when they just want to instrument their code. Moving between development, staging, and production environments often means juggling different telemetry configurations. Onboarding new services means recreating the same observability setup over and over.

What If We Used Stdout for Structured Telemetry?

Here’s something I’ve been mulling over: what if applications could emit structured telemetry data directly to stdout, and let the hosting environment decide what to do with it? Instead of configuring log shippers, metrics exporters, and trace collectors inside our applications, we could just emit the data and let the infrastructure handle the routing.
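To make that concrete, here's a minimal sketch of what emission might look like. The `emit` helper and the `kind`/`ts` field names are my own ad-hoc invention, not any standard; the only real idea is "one JSON object per line on stdout":

```python
import json
import time

def emit(kind, **fields):
    # Serialize one telemetry event as a JSON line and write it to stdout.
    # "kind" (log / metric / span) is what a hosting environment could use
    # to decide where the event gets routed.
    line = json.dumps({"kind": kind, "ts": time.time(), **fields})
    print(line, flush=True)
    return line

emit("log", level="info", message="request handled")
emit("metric", name="requests_total", value=1)
```

The application never learns where (or whether) these lines end up; that decision lives entirely outside the process.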

This feels similar to what log processors (e.g., Fluentd) in Kubernetes have been doing for a while now. I don’t see why we can’t do this for all telemetry data. All we need is a standard format…

But didn’t OpenTelemetry already define one? I wonder if I could parse stdout data and forward it to an OTLP-compatible collector… I’ll probably try this on the project I’m working on now and write another post if I can get it to work.
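As a starting point, a log line shaped roughly like OTLP/JSON might look like the sketch below. The nesting (`resourceLogs` → `scopeLogs` → `logRecords`) is from my reading of the OTLP spec and would need to be checked against it before wiring anything up; `demo-service` is a placeholder name:

```python
import json
import time

def otlp_log_line(message, severity="INFO"):
    # Build one stdout line shaped like an OTLP/JSON log record (my rough
    # reading of the spec -- verify field names before relying on this).
    # Note: uint64 fields like timeUnixNano are strings in proto3 JSON.
    return json.dumps({
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "demo-service"}}
            ]},
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": severity,
                    "body": {"stringValue": message},
                }]
            }]
        }]
    })

print(otlp_log_line("request handled"))
```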

What appeals to me about this model is its graceful degradation: if the hosting environment doesn’t have any destination for a particular telemetry type, that data just gets dropped. No crashes, no configuration headaches, no deployment blockers.
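The host-side half of that bargain could be as simple as the sketch below. Everything here is hypothetical (the `route` function, the `sinks` mapping); the point is just that unknown or unrouted event kinds fall through silently instead of failing:

```python
import json

def route(stream, sinks):
    # Hypothetical host-side router: known kinds go to their sink,
    # everything else is silently dropped -- the graceful degradation
    # described above. Non-JSON lines are treated as plain output.
    for raw in stream:
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not telemetry; ignore it
        sink = sinks.get(event.get("kind"))
        if sink is not None:
            sink(event)  # e.g. forward to a log store or metrics backend
        # no sink configured for this kind: drop it, no error raised

logs = []
route(
    ['{"kind": "log", "message": "hello"}', '{"kind": "span", "id": 1}', 'plain text'],
    {"log": logs.append},  # only logs have a destination in this environment
)
```

A development machine might pass an empty `sinks` dict and drop everything; a production cluster might route every kind somewhere. The application code is identical in both cases.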

Potential Developer Experience Benefits

The more I think about this, the more it just makes sense to me. The separation of concerns feels natural. My hosting environment handles the complexity of routing telemetry data to appropriate destinations, applying policies, and enforcing rules. And I could focus on the core question: “What observability data does my application need to emit?”

A service instrumented this way will run without any config changes in a development environment where telemetry gets dropped, a Docker container with basic log collection, or a sophisticated Kubernetes cluster with full observability stacks. The hosting environment simply takes what it can and drops what it can’t.

One downside, I think, is that it creates a lot of noise in stdout when running locally. That leads to another thing I’ve been mulling over: what if there were separate channels for machine-readable and human-readable output? My app could be emitting telemetry in the background while I still see the status messages that are meant for me, the human. But that’s probably a post for another day.
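Even without inventing a new stream, the split can be approximated today with the two streams every process already has. A sketch (the helper names and the port are made up for illustration):

```python
import json
import sys

def telemetry(event):
    # Machine-readable channel: structured telemetry as JSON lines on stdout.
    line = json.dumps(event)
    print(line, file=sys.stdout, flush=True)
    return line

def status(message):
    # Human-readable channel: plain status messages on stderr, so a local
    # run stays readable even while telemetry streams in the background.
    print(message, file=sys.stderr, flush=True)

telemetry({"kind": "metric", "name": "startup_ms", "value": 12})
status("server listening on :8080")  # hypothetical message
```

Locally you'd read stderr and ignore stdout; in production the host would collect stdout and could discard stderr.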

Closing Thoughts

This approach might lead to applications that are easier to develop, deploy, and operate, with observability that works regardless of where the code runs. Instead of configuring telemetry destinations in every application, we’d configure them once at the infrastructure level.

I’m curious what others think about this approach. Have you run into similar frustrations with telemetry configuration? Does the idea of stdout-based telemetry emission seem promising or problematic? Are there technical challenges I’m not considering?


These are just some thoughts on how application observability might evolve. The idea is about simplifying telemetry emission by letting infrastructure handle the routing, so developers can focus on building great software without worrying about where their observability data ends up.