The Quix blog

How to use the Quix APIs to connect thousands of devices
Bring any type of data into the Quix platform. Here’s how to use the APIs when the SDK doesn’t fit.

Three ways to get your data into Quix
A look at the three ways you can bring your data into the Quix platform.

The Stream — July edition
The July edition of The Stream: covering this month in stream processing on the internet.

Your guide to contributing to the Quix library
When, why, and how to contribute code samples to the Quix library.

What you can do with the Quix SDK and why we developed it from scratch
Learn what you can do with Quix Streams, the Quix SDK, and why we spent more than two years building it.

How Ademen will use streaming data to revolutionize patient monitoring
Ademen aims to empower community clinical teams to screen for and monitor health problems using the smart stethoscope it built on streaming data with Quix.

The Stream — June edition
The June edition of The Stream: covering this month in stream processing on the internet.

To stream, or not to stream? A conversation about when to use (and how we built) stream processing systems
Behind the scenes building McLaren’s F1 stream processing system, and why the world needed a new stream processing client library.

Edge, fog and cloud computing: Where you process data matters
Computing in the cloud, in the fog, or at the farthest edge can make a significant difference in technical applications that process large volumes of data at high speed.

How to build a no-code pipeline for sentiment analysis with our Snowflake connector
Three Quix connectors let you move data from Twitter to a Snowflake database while transforming it along the way. Learn how to set up the pipeline without writing any code.