January 25, 2023 | Industry insights

The Stream — January 2023 edition

How can you send time-series data to Apache Kafka using Python and Pandas? Plus Apache Flink news, memes, and meetups

The Stream January 2023 banner.

Quix is a performant, general-purpose processing framework for streaming data. Build real-time AI applications and analytics systems in fewer lines of code using DataFrames with stateful operators, and run them anywhere Python is installed.
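
To make that concrete, here is a rough sketch of what a streaming-DataFrame pipeline can look like with the open source quixstreams package. Everything in it (the broker address, topic names, and the temperature conversion) is illustrative, and the API details vary between library versions, so treat it as a sketch rather than copy-paste code.

```python
# Illustrative sketch of a Quix streaming-DataFrame pipeline.
# Broker address, topic names, and the transformation are made up;
# check the quixstreams docs for the API of your installed version.
from quixstreams import Application

app = Application(broker_address="localhost:9092")
readings = app.topic("sensor-readings")
converted = app.topic("sensor-readings-f")

sdf = app.dataframe(readings)
# Stateless, Pandas-like column math applied to each incoming record.
sdf["temperature_f"] = sdf["temperature_c"] * 9 / 5 + 32
sdf = sdf.to_topic(converted)

if __name__ == "__main__":
    app.run()  # older versions take the dataframe: app.run(sdf)
```

The appeal of this style is that each record flows through ordinary Python, so the same column expressions you would write against a Pandas DataFrame run continuously against a Kafka topic instead.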

How to send tabular time series data to Apache Kafka with Python and Pandas

We put together a tutorial for software engineers on sending data to Kafka with Python. It uses only open source components and should take about 30 minutes to complete. By the end of the tutorial, you'll understand:

  • Why startups and online businesses use Apache Kafka
  • The unique qualities of time series data and how it works with Kafka
  • How to install and run Kafka on your local machine
  • How to send time series data to Kafka in batches using Python and the Pandas library (a condensed sketch of this step follows the list)
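
To give a feel for that last step, here is a minimal sketch of batching rows from a Pandas DataFrame into Kafka. It is illustrative rather than the tutorial's exact code: it assumes the kafka-python client and a local broker, and the topic name, schema, and batch size are made up.

```python
# Minimal sketch: send time series rows to Kafka in batches.
# Assumes `pip install kafka-python pandas` and a broker on localhost:9092.
# The topic name, schema, and batch size here are hypothetical.
import json

import pandas as pd
from kafka import KafkaProducer

# A made-up time series: one sensor reading per second.
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=100, freq="s"),
    "sensor_id": "sensor-1",
    "value": range(100),
})

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    # Serialize each row dict as JSON; default=str handles Timestamps.
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
)

# Send the DataFrame in batches of 25 rows, one message per row.
for start in range(0, len(df), 25):
    batch = df.iloc[start:start + 25]
    for record in batch.to_dict(orient="records"):
        producer.send("time-series-demo", value=record)
    producer.flush()  # block until this batch is acknowledged

producer.close()
```

Flushing after each batch gives you a natural checkpoint and bounds how many messages sit in the producer's buffer, while still amortizing network round trips across many messages.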

Dig into the tutorial

What Confluent's acquisition of Immerok means for the future of real-time

Confluent has acquired Immerok, a startup offering a managed Apache Flink service. Confluent's CEO Jay Kreps said, "Stream processing enables organizations to clean and enrich data streams to derive actionable insights from their data in real-time. Our planned acquisition of Immerok will accelerate our ability to bring one of the most popular and powerful stream processing engines directly into Confluent."

There are some interesting hot takes on Twitter, including this thread by Matthias J. Sax, the technical lead of Kafka Streams, as well as this blog post from Yaroslav Tkachenko. Exciting times for those building streaming applications!

Read Confluent's blog post

More news and insights

  • How Uber adopted the Kappa Architecture to improve developer productivity - Read more
  • Problems We Face for Sending Events to the Secure Kafka in Python - Read more
  • Upcoming Stream Processing meetups in Berlin (January 31) and Munich (February 7)

Meme of the Month

Man quoting with both hands meme.

