July 18, 2023 | Ecosystem

Unlocking new use cases: Quix and Confluent partnership

Explore the AI applications that you can build when connecting Quix with Confluent.


Quix is a performant, general-purpose processing framework for streaming data. Build real-time AI applications and analytics systems in fewer lines of code using DataFrames with stateful operators, and run them anywhere Python is installed.
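As a minimal illustration of what that looks like in practice (a sketch, assuming a recent version of the open source quixstreams package; the topic names and fields are made up, and the exact API may vary by version):

```python
from quixstreams import Application

# Connect to any Kafka broker, including Confluent Cloud.
app = Application(broker_address="localhost:9092", consumer_group="demo")

readings = app.topic("sensor-readings", value_deserializer="json")
alerts = app.topic("sensor-alerts", value_serializer="json")

# A streaming DataFrame: pandas-like operations applied per event.
sdf = app.dataframe(readings)
sdf = sdf[sdf["temperature"] > 90.0]  # filter events as they arrive
sdf = sdf.apply(lambda row: {"device": row["device_id"],
                             "temp_c": row["temperature"]})
sdf = sdf.to_topic(alerts)  # publish results back to Kafka

if __name__ == "__main__":
    app.run(sdf)
```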

Quix is excited to announce it has joined the Connect with Confluent partner program. This new program helps you accelerate the development of real-time applications through a native integration with Confluent Cloud. You now get a seamless experience for working with Confluent data streams within Quix, paving a faster path to powering next-generation customer experiences and business operations with real-time data.

What kind of applications can you build when you connect your machine learning teams to real-time data streams in Kafka? Let’s take a look at some production use cases, including two winners of Confluent’s inaugural Streaming Data Awards at Current 2022.

Optimise cellular networks with machine learning

Control won the Startup Award for their use of streaming data to build, test and serve a real-time machine learning pipeline on Quix.

The application monitors network performance to automatically optimise quality of service for each device connected to the cellular network.

The team was able to collect high-quality data from vehicles, then train and test machine learning models before serving them to production. The end result was a system that helps Control avoid up to 23% performance degradation.

"The lightbulb moment happened when we realised how resilient Quix is. We’ve automated a new product feature and Quix’s architecture gives us confidence it won’t fail."
Nathan Sanders, Technical Director and Founder of Control

Manage patient health with machine learning

Ademen won the Innovation Award for developing a smart stethoscope that uses high-frequency streaming data and real-time data processing to connect patients to remote doctors.

The app uses digital signal processing and data science models to analyse audio data in real time. Pattern recognition is used to extract information about the heart, lungs, bowel and, in some cases, blood flow around the body, which is then presented to clinicians in a user-friendly interface.
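Ademen's models are their own, but to make the idea concrete, here is a deliberately simplified, hypothetical sketch of the kind of real-time signal processing involved, using scipy to band-pass filter stethoscope audio and count heartbeat peaks (illustrative only, not Ademen's code):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(audio: np.ndarray, sample_rate: int) -> float:
    """Rough heart-rate estimate from a short stethoscope audio chunk.

    Illustrative only: a real clinical pipeline involves far more
    sophisticated models and validation.
    """
    # Band-pass filter around typical heart-sound frequencies (~20-150 Hz).
    nyquist = sample_rate / 2
    b, a = butter(4, [20 / nyquist, 150 / nyquist], btype="band")
    filtered = filtfilt(b, a, audio)

    # Detect peaks in the signal envelope; require ~0.4 s between beats.
    envelope = np.abs(filtered)
    peaks, _ = find_peaks(envelope,
                          distance=int(0.4 * sample_rate),
                          height=envelope.mean() + 2 * envelope.std())

    duration_s = len(audio) / sample_rate
    return len(peaks) * 60.0 / duration_s  # beats per minute
```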

The Ademen team is working with several clinical groups to reduce the time and cost of detecting conditions that would otherwise require patients to have X-ray or ultrasound scans.

"The Quix platform has been a key enabler for us to demonstrate our vision without the time, cost and risk of developing a streaming application in-house."
Dr. Alistair Foster, Director of Ademen

Optimise manufacturing with machine learning

CloudNC are using streaming data and Python to maximise production capacity by building ‘digital twins’ of the factory. They use real-time data in a number of different ways, including:

  • Continually updating factory schedules based on current machine performance
  • Performing predictive maintenance to prevent breakdowns
  • Reacting in real time to early warning signs
  • Optimising how parts are created through machine learning

The real-time pipelines ingest data from computers running Linux on the factory floor using open-source OPC-UA agents. The team cleans and processes this data with Python to count parts and label data for machine learning. They develop models offline before serving them back to the real-time ingestion pipelines.
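As an illustrative sketch of one stage in such a pipeline (hypothetical topic names and fields; assuming Quix Streams' stateful processing API, where state is kept per message key, so events here are assumed to be keyed by machine ID):

```python
from quixstreams import Application

app = Application(broker_address="localhost:9092", consumer_group="factory")

machine_events = app.topic("opcua-machine-events", value_deserializer="json")
part_counts = app.topic("part-counts", value_serializer="json")

sdf = app.dataframe(machine_events)

# Clean the raw OPC-UA events: drop anything missing required fields.
sdf = sdf[sdf.contains("machine_id") & sdf.contains("event_type")]

def count_parts(event, state):
    # Running part count kept in local state; state is scoped to the
    # message key, so one count is maintained per machine.
    count = state.get("parts", 0)
    if event["event_type"] == "part_completed":
        count += 1
        state.set("parts", count)
    return {"machine_id": event["machine_id"], "parts_completed": count}

sdf = sdf.apply(count_parts, stateful=True).to_topic(part_counts)

if __name__ == "__main__":
    app.run(sdf)
```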

"Quix has given us the environment to finally manage that [factory] information — to look at it, store it, or act on it immediately."
Chris Angell-Hicks, Chief Engineer for CloudNC

Real-time generative AI with Kafka and a GPT-4 large language model

This blog from Confluent explores a compelling case for using ChatGPT and event streaming to build a production-ready large language model (LLM) chatbot. With the ability to process and analyse data streams in real time, Kafka enables data teams to build the core elements of the application, including the following (a code sketch of the retrieval and prompting steps appears after the list):

  • Integrating customer and company data into a knowledge base using Kafka Connect
  • Stream processing the knowledge base into a vector database
  • Building and serving prompts and embeddings
  • Calling LLM APIs
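A rough sketch of those last two steps, expressed as a Quix Streams pipeline. The my_chatbot module is a hypothetical stand-in for your embedding model, vector database client and LLM API; none of its functions are a real library:

```python
from quixstreams import Application

# Hypothetical helpers: embed_text, vector_search and call_llm stand in
# for whatever embedding model, vector store and LLM API you use.
from my_chatbot import embed_text, vector_search, call_llm

app = Application(broker_address="localhost:9092", consumer_group="chatbot")
questions = app.topic("user-questions", value_deserializer="json")
answers = app.topic("bot-answers", value_serializer="json")

sdf = app.dataframe(questions)

def answer(event):
    # Retrieve the most relevant knowledge-base snippets for grounding.
    context = vector_search(embed_text(event["question"]), top_k=3)
    prompt = ("Answer using only the context below.\n\nContext:\n"
              + "\n".join(context)
              + "\n\nQuestion: " + event["question"])
    return {"question": event["question"], "answer": call_llm(prompt)}

sdf = sdf.apply(answer).to_topic(answers)

if __name__ == "__main__":
    app.run(sdf)
```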

An event-driven architecture offers several advantages over a RESTful architecture when building large language model (LLM) applications:

Real-time responsiveness: Handle real-time data and respond promptly to events. With LLM applications, where users expect quick and interactive responses, an event-driven approach enables faster processing and immediate reactions to user inputs or external events.

Scalability and flexibility: Enable systems to handle a large number of concurrent events and adapt to varying workloads. This flexibility is crucial for LLM applications that may experience fluctuations in user traffic or data input.
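Kafka's consumer groups are the standard mechanism here: every process started with the same group ID shares the topic's partitions, so scaling out is just launching more copies of the same worker. A minimal sketch with the confluent-kafka client (broker address and topic name are placeholders):

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "llm-workers",        # shared group id = shared workload
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["user-questions"])

# Kafka rebalances partitions across all live members of "llm-workers",
# so each event is handled by exactly one worker instance.
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    print(msg.value())  # replace with real event handling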

Loose coupling and modularity: Components communicate through events rather than direct requests. This loose coupling enables better modularity, making it easier to develop, test, and maintain individual components of the LLM application independently.

Extensibility and integration: Facilitate seamless integration with other systems and services. LLM applications often require integration with various data sources, APIs, or external services. An event-driven approach simplifies these integrations, allowing the LLM application to consume and react to events from different sources efficiently.

Event sourcing and auditing: Adopt event sourcing, which captures and persists every event that occurs in the system. This event history enables auditing, debugging, and replaying of events, which can be valuable for LLM applications when analysing user interactions or improving model performance.
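For example, because Kafka retains the event log (assuming the topic's retention keeps the history you need), a brand-new consumer group can replay everything from the first event, which is how auditing interactions or re-scoring them against a new model typically works:

```python
from confluent_kafka import Consumer

# A fresh consumer group has no stored offsets, so with
# auto.offset.reset=earliest it replays the topic from the first event.
replayer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "audit-replay",       # new group id => start from scratch
    "auto.offset.reset": "earliest",
})
replayer.subscribe(["user-questions"])

while True:
    msg = replayer.poll(1.0)
    if msg is None or msg.error():
        continue
    print(msg.timestamp(), msg.value())  # inspect each historical event
```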

Overall, an event-driven architecture provides the necessary agility, responsiveness, scalability, and integration capabilities to effectively build and operate large language model applications, ensuring a smoother and more interactive user experience.

Conclusion

These are just a few examples of production applications that are made possible when you connect your machine learning teams to real-time data streams in Kafka. By combining the power of Confluent's industry-leading Kafka platform with Quix's F1-derived event streaming application framework, organisations can now unlock the full potential of the AI ecosystem. Try it yourself here.


Related content

How to fix common issues when using Spark Structured Streaming with PySpark and Kafka
A look at five common issues you might face when working with Structured Streaming, PySpark, and Kafka, along with practical steps to help you overcome them.
Words by Steve Rosam

Quix Streams—a reliable Faust alternative for Python stream processing
A detailed comparison between Faust and Quix Streams covering criteria like performance, coding experience, features, integrations, and product maturity.
Words by Steve Rosam

Debugging PyFlink import issues
Solutions to a common issue that Python developers face when setting up PyFlink to handle real-time data.
Words by Steve Rosam