
feat: produce OpenTelemetry traces with hs-opentelemetry #3140

Open · develop7 wants to merge 4 commits into main from feat_opentelemetry-traces

Conversation

@develop7 (Collaborator) commented Jan 4, 2024

This PR adds the ability to produce OpenTelemetry traces that include, among other things, the same metrics previously exposed via the Server-Timing header.

TODO:

Running:

I sort of gave up deploying and configuring all the moving bits locally, so you'd need to create a honeycomb.io account for this one (or ask me for an invite). After that, it's quite straightforward:

  1. Build the PostgREST executable with stack build, and get its path with stack exec -- which postgrest
  2. Get a PostgreSQL server running (e.g. run nix-shell, then postgrest-with-postgresql-15 -- cat)
  3. Do the JWT dance (generate the JWT secret and encode the role into a token, e.g. postgrest_test_anonymous for the example DB from the step above)
  4. Run the PostgREST server with:
    OTEL_EXPORTER_OTLP_ENDPOINT='https://api.honeycomb.io/' \
    OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<honeycomb_api_key>"  \
    OTEL_SERVICE_NAME='PostgREST' OTEL_LOG_LEVEL='debug' OTEL_TRACES_SAMPLER='always_on' \
    PGRST_DB_URI='<postgresql_server_url>'  PGRST_JWT_SECRET='<jwt_secret>'  \
    /path/to/compiled/bin/postgrest
    
  5. Request some data and check the Honeycomb dashboard for the traces:

[screenshot: Honeycomb dashboard showing the resulting traces]

@steve-chavez (Member)

Awesome work! 🔥 🔥

> I sort of gave up deploying and configuring all the moving bits locally, so you'd need to create a honeycomb.io account for this one

Found this Nix flake that contains an OTel GUI: https://flakestry.dev/flake/github/FriendsOfOpenTelemetry/opentelemetry-nix/1.0.1

I'll try to integrate that once the PR is ready for review.

@develop7 force-pushed the feat_opentelemetry-traces branch from 8c0e16a to 64a0ee9 on January 29, 2024 17:01
@develop7 (Collaborator, Author)

The recent problem I'm seemingly stuck with is that hs-opentelemetry uses UnliftIO, which doesn't compose well with our (implicit, correct?) monad stack. So the deeper in the call stack the instrumented code sits (the code I'm trying to wrap with inSpan), the more ridiculously complex the changes needed to instrument it become, e.g. https://github.com/PostgREST/postgrest/pull/3140/files#diff-5de3ff2b2d013b33dccece6ead9aeb61feffeb0fbd6e38779750511394cf9701R156-R157, up to the point that I have no idea how to proceed further (e.g. wrapping the App.handleRequest cases with their own spans, which would be semantically correct).
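For illustration, this is roughly the shape of the wrapping in question, as a minimal sketch assuming hs-opentelemetry-api's OpenTelemetry.Trace interface; withSpan and its argument are made-up names, not PostgREST code:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.IO.Unlift (MonadUnliftIO)
import OpenTelemetry.Trace (Tracer, defaultSpanArguments, inSpan)

-- inSpan requires MonadUnliftIO (plus HasCallStack), roughly:
--   inSpan :: (MonadUnliftIO m, HasCallStack) => Tracer -> Text -> SpanArguments -> m a -> m a
-- ExceptT-style transformers (like the Handler stack) have no MonadUnliftIO
-- instance, which is why wrapping code deep inside the stack gets awkward.
withSpan :: MonadUnliftIO m => Tracer -> m a -> m a
withSpan tracer action = inSpan tracer "handleRequest" defaultSpanArguments action
```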

There's a more straightforward, MonadIO-based opentelemetry library, with less activity and a quite different approach to telemetry export (GHC eventlog → file/pipe, handled by the GHC runtime). It looks like a less invasive approach refactoring-wise, but requires jumping through more hoops to actually deliver traces to Honeycomb/Lightstep/whatnot (pull the eventlog → convert it to zipkin/jaeger/b3 → upload it somewhere for analysis).

It also seems to boil down to a conceptual choice between online and offline trace delivery, i.e. a push vs. pull model.

@steve-chavez @wolfgangwalther @laurenceisla what do you think guys?

@steve-chavez (Member)

@develop7 Would vault help? It was introduced in #1988; I recall it helped with IORef handling.

It's still used in:

jwtDurKey :: Vault.Key Double
jwtDurKey = unsafePerformIO Vault.newKey
{-# NOINLINE jwtDurKey #-}
getJwtDur :: Wai.Request -> Maybe Double
getJwtDur = Vault.lookup jwtDurKey . Wai.vault

I'm still not that familiar with OTel, but the basic idea I had was to store these traces in AppState and export them asynchronously.
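For reference, the same Vault pattern could in principle carry an OTel context per request; a hypothetical sketch (otelCtxKey and getOtelCtx don't exist in PostgREST):

```haskell
import qualified Data.Vault.Lazy as Vault
import qualified Network.Wai as Wai
import OpenTelemetry.Context (Context)
import System.IO.Unsafe (unsafePerformIO)

-- Mirrors jwtDurKey above, but stores the current OTel context instead.
otelCtxKey :: Vault.Key Context
otelCtxKey = unsafePerformIO Vault.newKey
{-# NOINLINE otelCtxKey #-}

getOtelCtx :: Wai.Request -> Maybe Context
getOtelCtx = Vault.lookup otelCtxKey . Wai.vault
```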

@develop7 force-pushed the feat_opentelemetry-traces branch from 6b891c2 to 586e7a1 on February 12, 2024 14:26
@steve-chavez (Member)

@develop7 Recently merged #3213, which logs schema cache stats to stderr. Perhaps that could be used for an introductory OTel integration instead? It might be easier, since the scache stats are already in IO space.

@develop7 (Collaborator, Author)

> Would vault help?

hs-opentelemetry is using it already.

> the basic idea I had was to store these traces on AppState and export them async

Not only that; you'd want traces in tests too, for one.

The good news is that hs-opentelemetry-utils-exceptions seems to be just what we need; let me try it.

> Perhaps that can be used for introductory OTel integration instead?

Good call @steve-chavez, thank you for the suggestion. Will try that too.

@develop7 (Collaborator, Author)

[screenshot: traces showing up in the Honeycomb dashboard]

it works!

@steve-chavez (Member) commented Feb 21, 2024

Since we now have an observer function and an Observation module:

handleRequest :: AuthResult -> AppConfig -> AppState.AppState -> Bool -> Bool -> PgVersion -> ApiRequest -> SchemaCache ->
  Maybe Double -> Maybe Double -> (Observation -> IO ()) -> Handler IO Wai.Response
handleRequest AuthResult{..} conf appState authenticated prepared pgVer apiReq@ApiRequest{..} sCache jwtTime parseTime observer =

data Observation
  = AdminStartObs (Maybe Int)
  | AppStartObs ByteString
  | AppServerPortObs NS.PortNumber

Perhaps we can add some observations for the timings?

Also the Logger is now used like:

logObservation :: LoggerState -> Observation -> IO ()
logObservation loggerState obs = logWithZTime loggerState $ observationMessage obs

CmdRun -> App.run appState (Logger.logObservation loggerState))

For OTel, maybe the following would make sense:

otelState <- Otel.init

App.run appState (Logger.logObservation loggerState >> OTel.tracer otelState)) 
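Spelled out with stand-in types (every name below is illustrative, not existing PostgREST code), the composition could look roughly like this, so that each Observation is both logged and turned into telemetry:

```haskell
import Control.Applicative (liftA2)

-- Stand-ins for the real PostgREST types and functions; the timing
-- constructors are hypothetical, sketching the "observations for the
-- timings" idea above.
data Observation = JwtDurationObs Double | QueryDurationObs Double
data LoggerState = LoggerState
data OtelState   = OtelState

logObservation :: LoggerState -> Observation -> IO ()
logObservation _ _ = pure ()

observationTracer :: OtelState -> Observation -> IO ()
observationTracer _ _ = pure ()

-- liftA2 (>>) composes the two (Observation -> IO ()) observers in the
-- function Applicative, so every Observation reaches both sinks.
mkObserver :: LoggerState -> OtelState -> (Observation -> IO ())
mkObserver loggerState otelState =
  liftA2 (>>) (logObservation loggerState) (observationTracer otelState)
```

This is the same shape the Prometheus metrics integration ended up using later in this thread.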

@develop7 force-pushed the feat_opentelemetry-traces branch from dc882f1 to 7794848 on February 23, 2024 15:44
@develop7 (Collaborator, Author)

> Perhaps we can add some observations for the timings?

Agreed, server timings definitely belong there.

@develop7 force-pushed the feat_opentelemetry-traces branch from 7794848 to 398206b on February 23, 2024 16:04
@develop7 force-pushed the feat_opentelemetry-traces branch from 398206b to 4cd99c6 on March 7, 2024 14:58
@develop7 requested a review from steve-chavez on March 11, 2024 15:37
@develop7 marked this pull request as ready for review on March 11, 2024 15:38
@develop7 (Collaborator, Author)

Okay, the PR has been cooking for long enough; let's pull the plug and start small. Let's have it reviewed while I fix the remaining CI failures.

@develop7 force-pushed the feat_opentelemetry-traces branch from 4cd99c6 to 94d2b9b on March 11, 2024 15:49
Review thread on cabal.project (outdated, resolved)
@wolfgangwalther (Member)

hs-opentelemetry is, according to the repo, in alpha state. According to the TODO list above, the issue tracker and the repo description, it does not support:

  • GHC 9.8.x
  • Windows
  • Metrics or Logging

I don't think we should depend on this in its current state. And we should certainly not depend on an even-less-maintained fork of it.

So to go forward here, there needs to be some effort put into the upstream package first, to make it usable for us.

@develop7 force-pushed the feat_opentelemetry-traces branch 2 times, most recently from 590d142 to e809a65 on March 12, 2024 16:31
@develop7 marked this pull request as draft on March 29, 2024 16:43
@develop7 (Collaborator, Author)

A status update:

  • GHC 9.8: hs-opentelemetry-sdk doesn't build against 9.8 because of the hs-opentelemetry-exporter-otlp → proto-lens dependency chain. Given that the upstream of the latter has been a bit unresponsive to suggestions to bump the upper bounds, I've managed to make it build for 9.8 in develop7/proto-lens@985290f, but I haven't figured out how to pull that into the project, since it requires Google's protobuf compiler to be installed and the protobuf source to be checked out. Another approach is to not use hs-opentelemetry-sdk and hs-opentelemetry-exporter-otlp at all, which I probably should've tried way before.

@wolfgangwalther (Member)

> GHC 9.8: hs-opentelemetry-sdk doesn't build against 9.8 because of the hs-opentelemetry-exporter-otlp → proto-lens dependency chain. Given that the upstream of the latter has been a bit unresponsive to suggestions to bump the upper bounds, I've managed to make it build for 9.8 in develop7/proto-lens@985290f,

Hm. I looked at your fork. It depends on support for GHC 9.8 in ghc-source-gen. That repo has a PR for it, which was just updated 3 days ago. I wouldn't call that "unresponsive" yet. Once ghc-source-gen is GHC 9.8 compatible, you could open a PR to update the bounds in proto-lens itself. And since the last release, for GHC 9.6 support, was in December… I would not expect this to take too long to get a response. It certainly doesn't look like it's unmaintained.

I guess for GHC 9.8 support it's just a matter of time.

What about the other issues mentioned above? Were you able to make progress on those?

@mkleczek (Contributor) commented Apr 4, 2024

> The recent problem I'm seemingly stuck with is that hs-opentelemetry uses UnliftIO, which doesn't compose well with our (implicit, correct?) monad stack. So the deeper in the call stack the instrumented code sits (the code I'm trying to wrap with inSpan), the more ridiculously complex the changes needed to instrument it become, e.g. https://github.com/PostgREST/postgrest/pull/3140/files#diff-5de3ff2b2d013b33dccece6ead9aeb61feffeb0fbd6e38779750511394cf9701R156-R157, up to the point that I have no idea how to proceed further (e.g. wrapping the App.handleRequest cases with their own spans, which would be semantically correct).
>
> There's a more straightforward, MonadIO-based opentelemetry library, with less activity and a quite different approach to telemetry export (GHC eventlog → file/pipe, handled by the GHC runtime). It looks like a less invasive approach refactoring-wise, but requires jumping through more hoops to actually deliver traces to Honeycomb/Lightstep/whatnot (pull the eventlog → convert it to zipkin/jaeger/b3 → upload it somewhere for analysis).
>
> It also seems to boil down to a conceptual choice between online and offline trace delivery, i.e. a push vs. pull model.
>
> @steve-chavez @wolfgangwalther @laurenceisla what do you think, guys?

In my prototype I actually played with replacing the hasql Session with a monad based on https://github.com/haskell-effectful/effectful to make it extensible:

https://github.com/mkleczek/hasql-api/blob/master/src/Hasql/Api/Eff/Session.hs#L37

Using it in PostgREST required some use of mixins in Cabal:

29b946e#diff-eb6a76805a0bd3204e7abf68dcceb024912d0200dee7e4e9b9bce3040153f1e1R140

Some work was required in the PostgREST startup/configuration code to set up appropriate effect handlers and middlewares, but the changes were quite well isolated.

At the end of the day, I think basing your monad stack on an effect library (effectful, cleff, etc.) is the way forward, as it makes the solution highly extensible and configurable.
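For context, a minimal sketch of that effectful style with a made-up Sql effect (not hasql-api's actual types); the point is that interpreters can be swapped or stacked, e.g. one that additionally opens a tracing span around each statement, without touching the call sites:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as BS8
import Effectful
import Effectful.Dispatch.Dynamic

-- A toy "run a SQL statement" effect.
data Sql :: Effect where
  RunSql :: ByteString -> Sql m ()

type instance DispatchOf Sql = Dynamic

runSql :: Sql :> es => ByteString -> Eff es ()
runSql = send . RunSql

-- This interpreter just prints; a real one would call hasql, and another
-- could wrap each statement in an OTel span before delegating.
runSqlStdout :: IOE :> es => Eff (Sql : es) a -> Eff es a
runSqlStdout = interpret $ \_ -> \case
  RunSql stmt -> liftIO $ BS8.putStrLn ("would run: " <> stmt)

main :: IO ()
main = runEff . runSqlStdout $ runSql "select 1"
```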

@develop7 force-pushed the feat_opentelemetry-traces branch from e809a65 to 4697009 on October 23, 2024 17:01
@develop7 force-pushed the feat_opentelemetry-traces branch from 650d008 to ac33872 on October 31, 2024 13:40
@develop7 (Collaborator, Author) commented Nov 4, 2024

Update: rebased the PR against the latest master, updated hs-opentelemetry (with the Windows support merged!), and asked the hs-opentelemetry maintainers to cut a new release in iand675/hs-opentelemetry#154 so we don't have to depend on forks again.

@develop7 (Collaborator, Author) commented Nov 4, 2024

Building hs-opentelemetry from source introduces a build-time dependency on protoc, the protobuf compiler, which is why the Cabal & Stack builds fail.

@steve-chavez (Member)

I still wonder if we could do this in a less invasive way 🤔. We added Prometheus metrics in a simple way by using our Observation module:

let observer = liftA2 (>>) (Logger.observationLogger loggerState configLogLevel) (Metrics.observationMetrics metricsState)

Ideally we would add an Otel module, add an Otel.traces function there and that would be it.

I'm not sure if this can be done entirely with hs-opentelemetry, but in theory we would need a gRPC dependency (the transport used by OTel) plus an HTTP client that does the work in a separate thread.

It seems better to maintain some of this functionality in-tree, especially since the dependency is not fully featured yet. That would also give us better control.
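As a rough illustration of that shape (every name here is hypothetical; PostgREST has no such module, and Observation is a stand-in for the real type), the OTel side would be just another Observation consumer:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import OpenTelemetry.Trace (Tracer, defaultSpanArguments, inSpan)

-- Stand-in for PostgREST's real Observation type.
data Observation = ExampleObs

newtype OtelState = OtelState { otelTracer :: Tracer }

-- Map each Observation to a span; composing this with the logger observer,
-- as with the Prometheus metrics above, would be the only wiring needed.
observationTraces :: OtelState -> Observation -> IO ()
observationTraces st obs =
  inSpan (otelTracer st) (spanName obs) defaultSpanArguments (pure ())
  where
    spanName ExampleObs = "example"
```

In practice the spans would need real start and end timestamps (e.g. taken from timing fields carried by the Observation), but the wiring shape stays the same.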

@develop7 (Collaborator, Author) commented Nov 8, 2024

@steve-chavez This could be even more preferable, since every observer invocation is accompanied by a tag, so I don't have to invent span labels. As long as observer is called in all/most interesting places, right?

> I'm not sure if this can be done entirely with hs-opentelemetry

It seems it can, though I haven't sent any of these traces to Honeycomb myself yet. I might only need to fiddle with the call stack so it doesn't have observer on top, but even that will do.
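For the record, "fiddling with the call stack" here would mean something like freezing it so the observer doesn't add its own frame on top. A hypothetical sketch using GHC.Stack:

```haskell
import GHC.Stack (HasCallStack, withFrozenCallStack)

-- Stand-in for a call that captures a code location via HasCallStack,
-- the way hs-opentelemetry's inSpan does.
recordSpanHere :: HasCallStack => String -> IO ()
recordSpanHere _label = pure ()

-- Freezing the call stack makes this wrapper transparent: recordSpanHere
-- then sees the stack as it was at the observer's call site, instead of
-- gaining an extra frame that points inside the observer itself.
observer :: HasCallStack => String -> IO ()
observer label = withFrozenCallStack (recordSpanHere label)
```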

@develop7 (Collaborator, Author)

The hs-opentelemetry maintainers released 0.1.0.0 yesterday; with that, the protoc requirement is gone again and my Windows-enabling contribution is in. That should also fix the Nix builds, since we don't need VCS dependencies anymore (well, for now).

@develop7 (Collaborator, Author)

The Nix builds still fail because the nixpkgs we're using doesn't have the updated hs-opentelemetry-* packages yet. I'm on it: updating haskell-packages.nix accordingly.

@develop7 force-pushed the feat_opentelemetry-traces branch 2 times, most recently from 2c58608 to d821d86 on November 27, 2024 15:14
@develop7 (Collaborator, Author) commented Dec 4, 2024

[screenshot: a trace with a few more spans]

A bit more spans; is it enough (of course it isn't)?

@develop7 (Collaborator, Author) commented Dec 4, 2024

Oh and all the checks are officially green

@develop7 marked this pull request as ready for review on December 4, 2024 14:59
@develop7 (Collaborator, Author) commented Dec 4, 2024

Enough is enough; let's have some feedback.

@wolfgangwalther (Member) left a review

There is a certain slowdown noticeable in the loadtest reports. I guess that's expected, but can we do any better?

I didn't look at the code at all, I'll leave that to @steve-chavez.

Review threads: nix/overlays/haskell-packages.nix (outdated, resolved), cabal.project.freeze (outdated, resolved), postgrest.cabal (resolved), stack-21.7.yaml (outdated, resolved)
@develop7 force-pushed the feat_opentelemetry-traces branch 2 times, most recently from dc4e9a7 to 6afc7cd on December 5, 2024 12:37
@develop7 force-pushed the feat_opentelemetry-traces branch from 6afc7cd to c065a27 on December 5, 2024 12:42
@develop7 (Collaborator, Author) commented Dec 5, 2024

> There is a certain slowdown noticeable in the loadtest reports. I guess that's expected, but can we do any better?

There is, and it's noticeable indeed, and I for one didn't expect it: I'd assume unconfigured OTel to be a no-op, or close to it. Let me check the defaults.

Update reference memory usage accordingly
@develop7 (Collaborator, Author)

> Ideally we would add an Otel module, add an Otel.traces function there and that would be it.

@steve-chavez Without fiddling with the CallStack, this would report incorrect code locations, just the way it does at the moment.

> Let me check the defaults.

@wolfgangwalther Tracing was enabled by default, so I've disabled it for the memory tests, shaving off roughly a megabyte per test.

@steve-chavez (Member) commented Dec 17, 2024

> Ideally we would add an Otel module, add an Otel.traces function there and that would be it.

Hm, besides code maintenance, the way I suggested would also keep the tracing out of the fast path. As it is, we lose a lot of perf (ref), which doesn't seem worth it.

> @steve-chavez Without fiddling with the CallStack, this would report incorrect code locations, just the way it does at the moment.

The Observations could contain the original lines as an extra field, and I guess we could send these to the OTel server?

@develop7 (Collaborator, Author) commented Dec 18, 2024

> As it is, we lose a lot of perf (ref), which doesn't seem worth it.

To be fair, OTel adds only ~5% on top of another 10% rate decrease introduced in the main branch (according to the latest loadtest report).

> The Observations could contain the original lines as an extra field, and I guess we could send these to the OTel server?

That's one way to address it, indeed; it's just that third-party tools would most likely be confused, since they expect a well-known attribute name.

@wolfgangwalther (Member)

> another 10% rate decrease introduced in the main branch (according to the latest loadtest report)

IIRC, those were specific to the root endpoint requests; but since we don't split the loadtest data by endpoint, you can't easily tell that. So not critical. But the OTel stuff is adding overhead to every request, right?

@develop7 (Collaborator, Author)

> But the OTel stuff is adding overhead to every request, right?

@wolfgangwalther Yes, it did by default. Disabling it completely in the loadtest (c8c2d29) decreases the performance hit to 3.1%.
