Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.
The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:
- LTTB downsampling runs as a compute shader (rough sketch after this list)
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)
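For the curious, here's roughly what the compute side looks like. This is a simplified sketch, not ChartGPU's actual source: it swaps full LTTB for a per-bucket max-magnitude pick, and the buffer names and workgroup size are illustrative.

```ts
// Sketch of GPU downsampling as a WebGPU compute pass. A per-bucket
// max-magnitude reduction stands in for real LTTB; the dispatch and
// bind-group plumbing is the part that matters.
const WGSL = `
  struct Params { srcLen: u32, bucketSize: u32 }
  @group(0) @binding(0) var<storage, read> src: array<f32>;
  @group(0) @binding(1) var<storage, read_write> dst: array<f32>;
  @group(0) @binding(2) var<uniform> params: Params;

  @compute @workgroup_size(256)
  fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let start = gid.x * params.bucketSize;
    if (start >= params.srcLen) { return; }
    var best = src[start];
    let end = min(start + params.bucketSize, params.srcLen);
    for (var i = start + 1u; i < end; i = i + 1u) {
      if (abs(src[i]) > abs(best)) { best = src[i]; } // keep the bucket's extreme
    }
    dst[gid.x] = best;
  }
`;

function downsample(device: GPUDevice, src: Float32Array, bucketSize: number): GPUBuffer {
  const dstLen = Math.ceil(src.length / bucketSize);
  const srcBuf = device.createBuffer({ size: src.byteLength, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST });
  const dstBuf = device.createBuffer({ size: dstLen * 4, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX });
  const paramBuf = device.createBuffer({ size: 8, usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST });
  device.queue.writeBuffer(srcBuf, 0, src);
  device.queue.writeBuffer(paramBuf, 0, new Uint32Array([src.length, bucketSize]));

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: { module: device.createShaderModule({ code: WGSL }), entryPoint: 'main' },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: srcBuf } },
      { binding: 1, resource: { buffer: dstBuf } },
      { binding: 2, resource: { buffer: paramBuf } },
    ],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(dstLen / 256)); // one thread per output bucket
  pass.end();
  device.queue.submit([encoder.finish()]);
  return dstBuf; // bound as a vertex buffer by the instanced render pass
}
```

The real shader does the LTTB triangle-area math, but the shape is the same: raw points in, downsampled points out, never leaving the GPU.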
The result: 1M points at 60fps with smooth zoom/pan.
Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/
Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`
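A minimal line chart looks roughly like this (sketch only: the constructor and option names are simplified here, though the `[x, y]` pair data format is the real one):

```ts
// Rough usage sketch; 'ChartGPU' and the option names are simplified
// for illustration rather than copied from the docs.
import { ChartGPU } from 'chartgpu';

const canvas = document.querySelector('canvas')!;
const chart = new ChartGPU(canvas, {
  series: [
    { type: 'line', data: [[0, 1], [1, 3], [2, 2]] }, // [x, y] pairs
  ],
});
```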
Happy to answer questions about WebGPU internals or architecture decisions.
some notes from a very brief look at the 1M demo:
- downsampling risks eliminating important peaks. uPlot does not downsample, so for an apples-to-apples perf comparison you have to turn it off. see https://github.com/leeoniya/uPlot/pull/1025 for more details on the drawbacks of LTTB
- when idle, the demo still burns significant cpu, while canvas-based solutions use zero cpu when the chart is not actively being updated (with new data or scale limits). i think this can be resolved in the WebGPU case with some additional code that pauses rendering when nothing has changed.
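something like this dirty-flag pattern, for example (sketch; `render()` stands in for whatever encodes and submits the chart's GPU commands):

```ts
// render-on-demand: schedule a frame only when state changed, so an
// idle chart does zero per-frame work.
declare function render(): void;          // assumed: encodes + submits GPU commands
declare const canvas: HTMLCanvasElement;

let framePending = false;

function invalidate() {
  if (framePending) return; // coalesce many changes into one frame
  framePending = true;
  requestAnimationFrame(() => {
    framePending = false;
    render();
  });
}

// call invalidate() only from events that actually change pixels:
canvas.addEventListener('pointermove', invalidate); // hover/tooltip
canvas.addEventListener('wheel', invalidate);       // zoom
// ...and from setData()/setScale()-style mutations
```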
- creating multiple charts on the same page with GL (e.g. a dashboard) has historically been limited by Chrome's cap of 16 active GL contexts acquired simultaneously. Plotly finally worked around this by using https://github.com/greggman/virtual-webgl
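fwiw, WebGPU shouldn't hit the same wall, since many canvases can be configured against one shared GPUDevice instead of each owning a context (sketch; assumes the browser doesn't impose its own cap on configured canvases):

```ts
// one adapter/device shared by every chart on the page; each chart only
// configures a lightweight canvas context against that device.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error('WebGPU not available');
const device = await adapter.requestDevice();

function attachChart(canvas: HTMLCanvasElement): GPUCanvasContext {
  const ctx = canvas.getContext('webgpu') as GPUCanvasContext;
  ctx.configure({
    device, // same device for all charts, unlike one GL context per canvas
    format: navigator.gpu.getPreferredCanvasFormat(),
  });
  return ctx;
}

// dashboard: many charts, one device
document.querySelectorAll<HTMLCanvasElement>('canvas.chart').forEach(attachChart);
```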
> data: [[0, 1], [1, 3], [2, 2]]
this data format, unfortunately, necessitates allocating millions of tiny arrays. i would suggest switching to a columnar data layout (flat typed arrays).
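e.g. (the helper below is illustrative, not either library's API):

```ts
// columnar layout: one flat Float32Array per column instead of one tiny
// JS array per point. avoids millions of small allocations, and the
// columns can be written into GPU buffers directly.
function toColumnar(rows: [number, number][]): { xs: Float32Array; ys: Float32Array } {
  const n = rows.length;
  const xs = new Float32Array(n);
  const ys = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    xs[i] = rows[i][0];
    ys[i] = rows[i][1];
  }
  return { xs, ys };
}

// row format:      [[0, 1], [1, 3], [2, 2]]         -> one array per point
// columnar format: { xs: [0, 1, 2], ys: [1, 3, 2] } -> two allocations total
const { xs, ys } = toColumnar([[0, 1], [1, 3], [2, 2]]);
```

this is also the layout uPlot uses (data = [xs, ys, ...]).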
uPlot has a 10M datapoint demo here, if interested: https://leeoniya.github.io/uPlot/bench/uPlot-10M.html