Quix Cloud

A Python-native platform for streaming, storing, and processing engineering test data across your entire development lifecycle—from MIL simulation through vehicle testing. Engineers build data workflows in Python while Quix handles the infrastructure, deployment, and observability.

Product features:

The Control Plane gives you a single interface to manage all your data projects, teams, and infrastructure—even when they're running on completely separate networks or cloud accounts. Your teams work independently in their own environments, but you maintain oversight of who has access to what, how resources are being used, and which applications are running in production. It's built for engineering organizations where different teams need autonomy to move fast, but leadership needs confidence that critical systems are properly governed and production environments are protected.

Key benefits:

  • Enable teams without losing control - Engineers build and deploy autonomously while you maintain governance over access, resources, and production releases
  • Manage complexity at scale - Coordinate multiple projects across distributed environments from development through production without manual tracking
  • Deploy where you need - Run applications in isolated networks, different clouds, or on-premises while managing everything centrally

Fully managed Kubernetes infrastructure where your applications actually run—in the cloud, on-premise, or hybrid. Deploy to shared serverless infrastructure for development, dedicated isolated clusters for production workloads, or your own cloud account (BYOC) for compliance requirements. Because the infrastructure is fully managed, your team focuses on building applications instead of operating Kubernetes clusters.

Key benefits:

  • Deploy production applications without Kubernetes expertise — Quix handles cluster provisioning, scaling, monitoring, and maintenance so engineers build data pipelines instead of managing infrastructure
  • Run workloads where your data needs to be — Deploy to any cloud provider, your own data centers, or on-premise infrastructure to meet compliance, sovereignty, or performance requirements
  • Isolate sensitive and high-performance workloads in dedicated clusters — Separate production from development with dedicated resources, custom network configurations, and SLA-backed infrastructure for critical systems

Serverless Private Compute lets you deploy Docker containers into dedicated Kubernetes node pools without touching Kubernetes configuration. Your team gets ephemeral, on-demand compute environments for development and production workloads—spin them up when needed, tear them down when done. The infrastructure runs in isolated private node pools, giving you the security and performance of dedicated resources with the convenience of serverless deployment. Engineers deploy containers directly from their projects without waiting for IT to provision infrastructure or learning Kubernetes operations.

Key benefits:

  • Self-serve deployment - Engineers deploy Docker containers instantly without Kubernetes expertise or IT tickets
  • Ephemeral environments - Spin up isolated compute for development, testing, or production and tear down when finished
  • Private isolation - Run workloads on dedicated node pools separated from multi-tenant infrastructure for security and performance

Fully Managed Kafka provides multi-tenant Kafka clusters that are ready to use immediately, with optional public internet exposure secured by token-based authentication. Engineers self-serve access to Kafka without waiting for infrastructure teams—Quix provisions clusters, manages broker configuration, handles scaling, and maintains uptime. Multiple teams can share a single cluster safely, or Quix can provision dedicated clusters for specific needs. The infrastructure is managed end-to-end, so teams focus on building data pipelines instead of operating message brokers.

Key benefits:

  • Instant Kafka access - Provision and share managed Kafka clusters without infrastructure setup or operational overhead
  • Secure public exposure - Optionally expose Kafka on the internet with token-based authentication for external integrations
  • Zero broker management - Quix handles cluster health, scaling, and maintenance while teams focus on data pipelines

Bring Your Own Broker lets you connect Quix to your organization's existing Kafka setup instead of using managed clusters. If you've already invested in Confluent Cloud, self-hosted Kafka, or internal message broker infrastructure, Quix integrates with it directly. You maintain full control over your Kafka environment—configuration, access policies, network isolation—while getting Quix's pipeline development tools, monitoring, and deployment capabilities on top. Engineers build pipelines in Quix that read and write to your existing topics without vendor lock-in or forced migration. A minimal connection sketch appears after the key benefits below.

Key benefits:

  • Preserve existing investments - Use your current Kafka infrastructure without migration or duplication costs
  • Maintain control - Keep your existing broker configuration, access policies, and network security unchanged
  • Flexible integration - Work with Confluent Cloud, self-hosted Kafka, or any Kafka-compatible broker
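To make this concrete, here is a minimal sketch of pointing a Quix Streams application at an existing external broker. The broker address, credentials, and topic name are placeholders, and the exact ConnectionConfig fields should be verified against the Quix Streams documentation:

    from quixstreams import Application
    from quixstreams.kafka.configuration import ConnectionConfig

    # Placeholder credentials for an existing broker (e.g. Confluent Cloud).
    connection = ConnectionConfig(
        bootstrap_servers="my-cluster.example.com:9092",
        security_protocol="sasl_ssl",
        sasl_mechanism="PLAIN",
        sasl_username="MY_API_KEY",
        sasl_password="MY_API_SECRET",
    )

    app = Application(broker_address=connection, consumer_group="byob-demo")
    telemetry = app.topic("vehicle-telemetry")  # an existing topic on your broker
    sdf = app.dataframe(telemetry)
    sdf.print()  # inspect messages without touching broker configuration

    if __name__ == "__main__":
        app.run()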

Service Level Agreements provide performance guarantees for Quix infrastructure and support response times. The Business tier provides 99.9% uptime with 9/5 support response, while the Enterprise tier provides 99.99% uptime with 24/7 support coverage. These SLAs ensure your production data pipelines run reliably with defined support commitments when issues arise. Quix monitors infrastructure health, responds to incidents within agreed timeframes, and maintains the reliability your organization needs for mission-critical test data systems.

Key benefits:

  • Production reliability - Guaranteed 99.9% or 99.99% uptime ensures business-critical pipelines stay operational
  • Committed support - Defined response times (9/5 or 24/7) for issue resolution and incident management
  • Risk mitigation - Service commitments provide confidence for production deployments and compliance requirements

Bring Your Own Cloud deploys Quix's data plane infrastructure directly into your AWS, Azure, or GCP account. You get dedicated, private infrastructure that never touches shared environments—complete isolation for security, compliance, or performance requirements. Quix manages the software layer while the infrastructure runs in your VPC on your subscription. This gives you full visibility into costs, network traffic, and resource usage while maintaining the simplicity of managed services. Perfect for organizations with strict data sovereignty requirements, stringent security policies, or a need for infrastructure-level control.

Key benefits:

  • Complete isolation - Run on dedicated infrastructure in your cloud account, separate from all other tenants
  • Enhanced security - Meet strict compliance requirements with infrastructure that stays within your network boundaries
  • Cost transparency - See exact infrastructure costs on your cloud bill with full control over resource allocation

Grafana Dashboards provide ready-made visualizations for monitoring your Quix infrastructure and data pipelines. These dashboards connect to your cluster's metrics automatically, showing CPU usage, memory consumption, Kafka throughput, consumer lag, and deployment health in real-time. Instead of building monitoring from scratch, you get production-ready dashboards that surface the metrics that matter. Customize them to add your own panels or use them as-is to track system performance, troubleshoot issues, and ensure your data platform is running optimally.

Key benefits:

  • Immediate visibility - Start monitoring cluster metrics without building dashboards or configuring data sources
  • Production-ready monitoring - Get pre-configured visualizations for CPU, memory, Kafka metrics, and deployment health
  • Customizable views - Extend default dashboards with your own panels and metrics while keeping core monitoring intact

Connectors are composable source and sink templates that let you stream live test data from rigs, fleets, and databases directly into Quix — or pipe processed results back out. They handle the heavy lifting: buffering, retries, threading, and lifecycle control. You get reliable, extensible interfaces designed for real-time industrial and R&D environments. See the custom source sketch after the key benefits.

Key benefits:

  • Faster integration - Integrate test benches, data loggers, and databases using consistent, production-ready APIs
  • Built-in features - Avoid data loss with built-in backpressure handling, buffering, and graceful restarts
  • Clone and customize - Extend easily to support any proprietary industrial protocol or internal data pipeline
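As a sketch of what a custom connector can look like, the following subclasses the Source base class from the open-source Quix Streams library; the rig name and payload are hypothetical stand-ins for a real polling loop:

    import time

    from quixstreams import Application
    from quixstreams.sources import Source

    class RigSource(Source):
        """Hypothetical source polling a test rig once per second."""

        def run(self):
            # self.running flips to False on shutdown, giving the
            # lifecycle control described above.
            while self.running:
                reading = {"rig_id": "rig-07", "temp_c": 21.5}  # stand-in poll
                event = self.serialize(key=reading["rig_id"], value=reading)
                self.produce(key=event.key, value=event.value)
                time.sleep(1.0)

    app = Application(broker_address="localhost:9092")
    sdf = app.dataframe(source=RigSource(name="rig-source"))
    sdf.print()

    if __name__ == "__main__":
        app.run()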

Quix Streams is an open-source Python library for building real-time data pipelines on Kafka using a Pandas-like DataFrame API. It provides built-in operators for industrial data challenges: tumbling windows for time-aligning heterogeneous streams, grace periods for late-arriving configuration data, stateful aggregations with automatic checkpointing, and backpressure-aware sinks that batch database writes to prevent overload. A minimal pipeline example follows the key benefits below.

Key benefits:

  • Familiar Pandas-like API for real-time processing - Transform streaming data using DataFrame syntax engineers already know, eliminating months of Kafka training
  • Automatic handling of late and out-of-order data - Windowing operators consolidate data from multiple sources with different arrival times into synchronized records
  • Built-in backpressure prevents database overload - Sinks automatically batch writes and retry failed commits, ensuring reliable ingestion without manual tuning
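A minimal pipeline sketch, assuming a local broker and JSON-encoded sensor messages (topic and field names are illustrative): it averages a temperature reading over ten-second tumbling windows, with a grace period for late records, and writes the results to an output topic:

    from quixstreams import Application

    app = Application(broker_address="localhost:9092", consumer_group="demo")

    readings = app.topic("sensor-readings", value_deserializer="json")
    averages = app.topic("sensor-averages", value_serializer="json")

    sdf = app.dataframe(readings)
    sdf = sdf.apply(lambda row: row["temperature"])  # pick the value to aggregate
    # 10 s tumbling windows; grace_ms keeps each window open briefly for
    # late-arriving records, and .final() emits once the window closes.
    sdf = sdf.tumbling_window(duration_ms=10_000, grace_ms=2_000).mean().final()
    sdf = sdf.to_topic(averages)

    if __name__ == "__main__":
        app.run()

State for the windowed aggregation is checkpointed automatically, so the deployment can restart without losing in-flight windows.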

Auto Scaling elastically adjusts compute resources for your data pipelines based on actual load. Set maximum limits for replicas, CPU, and memory, and Quix scales deployments up during high-volume periods and down when load decreases. This handles bursty workloads automatically—like when aircraft land or test rigs start generating data—without over-provisioning resources that sit idle. Your pipelines get the compute they need when they need it, while you only pay for resources actually used.

Key benefits:

  • Handle traffic spikes - Automatically scale up when data volume increases, scale down when it decreases
  • Cost optimization - Pay only for resources actually used instead of over-provisioning for peak loads
  • Zero manual intervention - Set scaling limits once and let the system adjust resources automatically based on demand

Real-Time Data Metrics give you centralized visibility into data flowing through your pipelines. Track message throughput, processing rates, and data volume across all deployments and topics from a single view. See which pipelines are active, identify bottlenecks, and spot issues before they impact downstream systems. Metrics update in real-time as data moves through your platform, so you can verify pipelines are processing correctly and troubleshoot problems quickly when metrics deviate from expected patterns.

Key benefits:

  • Centralized visibility - Monitor all pipeline metrics from one place instead of checking individual deployments
  • Early problem detection - Spot throughput issues, processing delays, or pipeline failures before they cascade
  • Optimization insights - Identify bottlenecks and performance patterns to improve pipeline efficiency

Lineage tracks how data moves through your pipelines, showing which applications consume from which topics, what transformations happen at each step, and where data ultimately ends up. This visibility helps you understand dependencies between applications, debug data quality issues by tracing problems back to their source, and document data flow for compliance requirements. When something breaks, lineage shows exactly what's upstream and downstream, so you can assess impact and fix the right component.

Key benefits:

  • Fast debugging - Trace data issues back to their source by following the complete path through your pipeline
  • Understand dependencies - See which applications depend on each data source before making changes
  • Compliance documentation - Document data transformations and flow for audit and certification requirements

Medallion Data Tiers apply the medallion architecture pattern to test data, categorizing data by readiness and quality. Bronze holds raw, unprocessed data exactly as collected from sensors and test rigs. Silver contains cleaned, validated data with transformations applied. Gold represents analytics-ready data—enriched with configurations, aggregated, and prepared for analysis. This organization makes data quality explicit, improves communication between teams about what data is ready for which purposes, and provides clear boundaries for where different types of processing happen.

Key benefits:

  • Clear data maturity - Categorize data by processing level so teams know exactly what they're working with
  • Improved communication - Establish common language for data readiness across engineering and test teams
  • Flexible reprocessing - Keep raw bronze data unchanged while iterating on silver and gold transformations

Projects organize your data platform work into self-contained units, each backed by a single Git repository. A project contains all applications, topic definitions, and configuration for a logical grouping of work. Each project can have multiple environments (dev, staging, production) mapped to Git branches, giving you standard software development workflows for data pipelines. IT teams provision resources at the project level, engineers develop features in branches before merging, and Git provides complete revision history for all changes.

Key benefits:

  • Standard dev workflows - Use Git branching, pull requests, and version control for data pipeline development
  • Organized infrastructure - Group related pipelines, topics, and configuration into logical projects
  • Complete history - Track all changes to pipelines and configuration with full Git revision history

Environments let you manage different deployment stages within a project. Create separate development, staging, and production environments, each with its own infrastructure, topics, and deployed applications. Environments map to Git branches, so your dev environment tracks your development branch while production tracks main. This isolation lets engineers experiment safely in development, validate changes in staging, and deploy confidently to production—all within the same project structure. Changes flow through environments via Git merges and synchronization.

Key benefits:

  • Safe experimentation - Develop and test in isolated environments without risking production systems
  • Staged deployment - Validate changes in staging before promoting to production with confidence
  • Environment parity - Maintain consistent pipeline structure across dev, staging, and production environments

Scratchpads are ephemeral environments for rapid iteration and testing. Spin up a temporary environment, experiment with real production data, try out new approaches, and delete it when done—without creating branches, affecting other environments, or cluttering your project. Scratchpads run completely isolated from development, staging, and production, giving you a safe sandbox for prototyping transformations, testing connectors, or debugging issues with live data. They're perfect for those 'let me try something' moments that don't need formal version control.

Key benefits:

  • Risk-free testing - Experiment with real production data in complete isolation from all other environments
  • Fast iteration - Spin up temporary environments instantly without branching or formal environment setup
  • No cleanup needed - Delete scratchpads when done without worrying about orphaned resources or configuration

Project Links enable data sharing across project boundaries. Connect topics from one project as inputs to another, creating cross-project data flows while maintaining project isolation. A graphical view shows these connections, making it easy to see which teams consume your data and what upstream sources you depend on. This enables collaboration between teams working in separate projects while keeping project boundaries clear. Teams maintain autonomy over their own projects while safely consuming data produced by other teams.

Key benefits:

  • Cross-team collaboration - Share data between projects without merging codebases or losing project boundaries
  • Dependency visualization - See which projects consume your data and what external sources you rely on
  • Safe data sharing - Connect to other projects' topics without giving them access to your project internals

Synchronization keeps your Quix environments aligned with your Git repository in one operation. Changes to YAML files, deployment configurations, or topic definitions in Git get applied to the corresponding environment with a single sync command. Your Git repo becomes the source of truth—what's in the main branch is what runs in production. This eliminates configuration drift between environments, ensures consistent deployments across staging and production, and makes rollbacks as simple as reverting a Git commit and re-syncing.

Key benefits:

  • Single source of truth - Git repository defines exactly what runs in each environment, eliminating configuration drift
  • One-command deployment - Sync all changes from Git to an environment in a single operation
  • Easy rollbacks - Revert Git commits and re-sync to roll back deployments and configuration changes

Pipeline View provides a visual representation of your data integration architecture. See how applications connect through topics, which pipelines are actively processing data, and what the topology of your data flows looks like. Applications appear as tiles, topics as connecting lines, with color coding showing active vs. inactive streams. This graphical view makes it easy to understand pipeline structure at a glance, spot which applications consume or produce specific data, and share pipeline architecture with stakeholders who need visibility into how data moves through your system.

Key benefits:

  • Understand structure - See your entire pipeline topology and data flows in a single visual interface
  • Monitor status - Identify which pipelines are active and processing data versus sitting idle
  • Enable collaboration - Share visual pipeline diagrams with engineers and stakeholders for better communication

Containerised Apps let you deploy any application that runs in Docker. Write your Dockerfile, define dependencies, and deploy to Quix infrastructure at whatever horizontal and vertical scale you need. You have full control over the container image—edit the Dockerfile to install system packages, modify the base image, or configure runtime settings. Any Docker-compatible application works, whether it's Python, Java, Go, or custom binaries. Applications scale independently with their own resource allocations and replica counts.

Key benefits:

  • Deploy anything - Run any Docker-compatible application, not just applications built with Quix frameworks
  • Full Docker control - Edit Dockerfiles to install dependencies, customize runtime, or optimize images
  • Independent scaling - Allocate CPU, memory, and replicas per application based on specific requirements

Topics Management provides a self-service interface for Kafka topic configuration. Engineers create topics, adjust partition counts, set retention policies, and modify replication factors without submitting IT tickets or touching broker configuration directly. This puts topic management in the hands of teams building pipelines, while maintaining proper defaults and guardrails. Teams iterate quickly on topic configuration as their pipelines evolve, enabling a self-serve Kafka platform where data engineers control their own infrastructure.

Key benefits:

  • Self-serve topics - Create and configure Kafka topics without IT tickets or broker access
  • Full configuration control - Adjust partitions, retention, replication, and policies as pipeline needs evolve
  • Faster iteration - Modify topic settings immediately when requirements change instead of waiting for approvals

Deployment Management provides straightforward controls for managing application deployments. Start, stop, restart, or delete deployments as needed for development, troubleshooting, or operational tasks. Edit deployment settings like resource allocation, environment variables, or scaling parameters without redeploying from scratch. These controls eliminate the complexity and error-prone nature of manual deployment operations—you manage applications through a clear interface rather than kubectl commands or configuration files. Operations that would be risky or complex become simple, repeatable actions.

Key benefits:

  • Simple operations - Start, stop, and manage deployments through clear controls instead of command-line tools
  • Safe changes - Modify deployment settings without manual configuration editing or complex procedures
  • Quick troubleshooting - Restart problematic deployments or roll back changes easily when issues arise

Network Configuration lets you control how deployments connect to each other and expose services externally. Define internal ports for service-to-service communication within your infrastructure, configure public URLs when applications need external access, and set up target ports for container networking. This simplifies what would otherwise require Kubernetes networking knowledge—you specify how applications should be reachable, and Quix handles the underlying ingress, service meshes, and routing. Engineers configure network connectivity through straightforward settings rather than YAML manifests or networking primitives.

Key benefits:

  • Simplified networking - Configure connectivity without Kubernetes networking expertise or complex configuration
  • Flexible access - Set up internal communication between services and public access for external integrations
  • Clear endpoints - Define URLs and ports declaratively so network topology is explicit and documented

State Management enables stateful deployments backed by persistent volumes. Applications that need to maintain state—like databases, caches, or applications with local checkpoints—get Kubernetes PersistentVolumeClaims automatically configured and attached. When containers restart, state persists. This eliminates a major pain point in running stateful workloads on streaming infrastructure—you can deploy applications that require persistence without becoming a Kubernetes storage expert. Configure volume size and mount paths, and Quix handles provisioning, attachment, and lifecycle management.

Key benefits:

  • Stateful applications - Run databases, caches, and stateful processors with persistent storage automatically configured
  • Survive restarts - Application state persists across container restarts, redeployments, and scaling operations
  • Simple configuration - Specify volume requirements without managing PersistentVolumes, StorageClasses, or Kubernetes storage details

Environment Variables provide flexible application configuration without code changes. Inject input/output topic names, reference secrets securely, and define custom variables that containers read at runtime. Variables can differ between environments—use test topics in development and production topics in production, all from the same application code. This separates configuration from code, enables reusable application templates across teams, and makes it easy to adapt applications to different contexts by changing variables instead of modifying source code. A short sketch appears after the key benefits.

Key benefits:

  • Flexible configuration - Customize application behavior per environment without code changes
  • Secure secrets - Reference sensitive credentials through variables instead of hardcoding in containers
  • Reusable applications - Deploy the same container with different configurations across projects and teams
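A short sketch, assuming variables named "input" and "output" are defined on the deployment (illustrative names, not mandated ones):

    import os

    from quixstreams import Application

    # The same container reads "telemetry-dev" in development and "telemetry"
    # in production, depending on what each environment injects.
    app = Application()  # when deployed on Quix, broker settings are injected

    input_topic = app.topic(os.environ["input"])
    output_topic = app.topic(os.environ["output"])

    sdf = app.dataframe(input_topic).to_topic(output_topic)

    if __name__ == "__main__":
        app.run()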

App Templates provide pre-built, working applications for common use cases. Start with templates for REST APIs, machine learning models, simulation runners, data visualization dashboards, or ETL patterns instead of building from scratch. Each template includes working code, proper error handling, configuration examples, and documentation. You can deploy templates as-is for standard use cases or customize them for specific requirements—they serve as both starting points for new work and reference implementations showing best practices.

Key benefits:

  • Faster development - Start with working applications instead of building common patterns from scratch
  • Consistent patterns - Use proven implementations that follow best practices for error handling and configuration
  • Learning resource - Study template code to understand how to build different types of applications properly

Private App Templates let enterprises create internal template libraries with organization-specific applications, connectors, and patterns. Codify your team's best practices, compliance requirements, and standard architectures into templates that engineers can deploy and customize. Unlike public templates, private templates contain your proprietary logic, approved integrations, and organizational conventions. This enables self-service while ensuring consistency—engineers move fast using approved patterns rather than reinventing solutions or making architecture decisions from scratch each time.

Key benefits:

  • Organizational standards - Codify best practices, security patterns, and approved architectures into reusable templates
  • Faster compliance - Engineers start with pre-approved templates instead of building from scratch and requesting reviews
  • Knowledge sharing - Distribute working implementations of internal patterns across teams and projects

Services provide instant access to common technology infrastructure for development. Spin up databases, Redis caches, message gateways, API routers, or monitoring tools as containerized services without asking IT to provision production infrastructure. These services are meant for development and prototyping—quick access to the technologies you need to build and test pipelines locally before moving to production-grade infrastructure. Engineers prototype solutions with real technology stacks instead of mocking dependencies or waiting for infrastructure provisioning.

Key benefits:

  • Rapid prototyping - Spin up databases, caches, and services instantly for development without IT tickets
  • Complete stack - Test with real technology dependencies instead of mocks or placeholder services
  • Development-focused - Quickly spin up and tear down services for iteration without production infrastructure costs

Multi Language Support enables development in the language your team prefers. Use Python or C# SDKs with Quix-specific features like StreamingDataFrame, or use any programming language via standard Kafka clients. This flexibility means engineers work in languages they know—data scientists use Python, .NET teams use C#, and Java developers can use Kafka clients directly. No forced language adoption or team-wide retraining. Applications in different languages work together seamlessly through Kafka topics, so each team uses the right tool for their work. See the standard-client example after the key benefits.

Key benefits:

  • Language freedom - Build applications in Python, C#, Java, Go, or any language with Kafka support
  • Team flexibility - Different teams use different languages based on expertise and requirements
  • Interoperability - Applications in different languages communicate seamlessly through Kafka topics
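As an illustration of the standard-client path, here is a plain Kafka consumer using the confluent-kafka Python package with no Quix SDK involved; a Java, Go, or .NET client follows the same pattern. Broker address, group id, and topic name are placeholders:

    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "analytics-team",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["sensor-averages"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None or msg.error():
                continue
            # Raw bytes, regardless of which language produced them.
            print(msg.key(), msg.value())
    finally:
        consumer.close()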

The Build and Deploy Service automates the path from code to running applications. Push code to Git, and the service automatically builds Docker containers, runs tests, and deploys to your environment. No manual docker build commands, no kubectl apply steps, no CI/CD configuration to write. The service handles building containers with proper caching, pushing images to registries, and rolling out deployments with zero-downtime strategies. Engineers focus on code while the infrastructure handles builds, tests, and deployment orchestration.

Key benefits:

  • Automated deployment - Code changes automatically build, test, and deploy without manual steps
  • Zero-downtime updates - Rolling deployments update applications without service interruption
  • Built-in CI/CD - Get continuous integration and delivery without configuring separate pipeline tools

Bring Your Own Container lets you deploy existing Docker images directly to Quix infrastructure. If you have pre-built images in DockerHub, private registries, or artifact repositories, reference them directly without rebuilding from source. This works for third-party containers, legacy applications, or images built by external CI/CD systems. You maintain your existing build processes and image management while running containers on Quix infrastructure. Perfect for deploying vendor applications, integrating with existing DevOps workflows, or using containers built outside Quix.

Key benefits:

  • Use existing images - Deploy pre-built containers from any registry without rebuilding
  • External CI/CD integration - Run images built by your existing build systems on Quix infrastructure
  • Third-party containers - Deploy vendor applications or community images alongside your custom code

The Quix CLI brings Quix development to your local machine. Develop applications locally, test pipelines using Docker Desktop, and deploy to Quix Cloud from your terminal. The CLI replicates the cloud environment locally—run multiple applications, connect them through topics, and verify behavior before deploying. This enables familiar development workflows: edit code in your IDE, test locally, commit to Git, and deploy via CLI or CI/CD. No switching between local development and cloud interfaces—code, test, and deploy from your command line.

Key benefits:

  • Local development - Build and test pipelines on your machine using Docker before deploying to cloud
  • IDE integration - Use your preferred development tools with Quix CLI handling deployment and testing
  • CI/CD automation - Integrate Quix commands into any CI/CD pipeline for automated deployments

Online Code Editors provide a full Python development environment in your browser. Edit application code, get IntelliSense code completion, access Quix library documentation, and test changes without installing anything locally. The editor connects directly to your projects—modify deployed applications, create new ones, or experiment with examples all from the browser. This enables quick iterations and makes it easy for team members to review code, make small fixes, or prototype ideas without cloning repositories or setting up development environments.

Key benefits:

  • No local setup - Edit code and develop applications from browser without installing tools
  • IntelliSense support - Get code completion, documentation, and syntax highlighting for Python
  • Quick iterations - Make changes, test, and deploy without leaving the browser

Dev Containers provide reproducible development environments that match your production configuration. Define your development container with all dependencies, tools, and settings needed for working on Quix projects. Other team members open the same dev container definition and get an identical environment—no 'works on my machine' problems. Dev Containers work with VS Code and other IDEs supporting the dev container standard, giving you consistent development environments locally while ensuring code runs the same way in production.

Key benefits:

  • Consistent environments - All developers work in identical environments matching production
  • Fast onboarding - New team members get working development setup in minutes, not days
  • Reproducible builds - Eliminate environment-related bugs by developing in containers matching production

Real-time Data Explorer lets you inspect data flowing through your pipelines as it happens. Open any topic and see messages streaming in with full contents visible—JSON payloads, timestamps, keys, and metadata. This makes debugging interactive: watch data transform as it moves through pipeline stages, verify that sensors are producing expected values, or check that enrichment is working correctly. The explorer helps you understand data structure, validate transformations, and troubleshoot issues by seeing actual messages rather than inferring from logs.

Key benefits:

  • Live debugging - Watch data flow through pipelines in real-time to verify transformations work correctly
  • Data structure discovery - Inspect message formats and schemas without reading documentation
  • Validation - Confirm sensors, transformations, and integrations produce expected output

Message Visualizer provides instant visualization of time-series data flowing through topics. Select parameters from your data streams and see them plotted in real-time charts. This helps engineers understand data behavior visually—spot patterns, identify anomalies, or compare multiple signals without writing visualization code. The visualizer works directly on raw topic data, so you can quickly check if sensor data looks reasonable, verify that transformations produce expected patterns, or share live visualizations with team members during debugging sessions.

Key benefits:

  • Instant visualization - Plot time-series data from topics without writing visualization code
  • Pattern recognition - Identify data anomalies, trends, and relationships through visual inspection
  • Collaborative debugging - Share live visualizations with team members to discuss data behavior

Topic Metrics expose key Kafka performance indicators for topics and consumers. Track message throughput, consumer lag (how far behind consumers are), and partition offsets in real-time. These metrics help you ensure consumers are keeping up with producers, identify slow consumers that might be falling behind, and spot throughput bottlenecks in your data flows. When consumers start lagging or throughput drops unexpectedly, Topic Metrics make the problem visible immediately so you can investigate and fix issues before they impact downstream systems.

Key benefits:

  • Consumer health - Monitor lag to ensure consumers keep up with message production rates
  • Throughput visibility - Track messages per second to identify bottlenecks or unexpected traffic patterns
  • Early warning - Detect performance issues before they cascade to downstream applications or cause outages

Git Integration makes every Quix project a Git repository with seamless integration to GitHub, GitLab, Bitbucket, or any Git provider. Projects track all changes—applications, topic definitions, environment variables—in version control. Use standard Git workflows: branch for new features, create pull requests for review, merge to deploy. This enables continuous integration and delivery with your existing CI/CD tools and processes. Your data platform infrastructure becomes code, with all the benefits of version control, collaboration, and automation that come with Git-based development.

Key benefits:

  • Version control - Track all pipeline changes with Git history, branches, and commit messages
  • Standard workflows - Use pull requests, code review, and merging for data pipeline development
  • CI/CD enabled - Integrate with existing continuous integration and deployment tools through Git hooks

Environment Management coordinates multiple environments within a project, each tied to a specific Git branch. Development environments track feature branches, staging tracks the develop branch, and production tracks main. This creates automatic alignment between Git workflow and infrastructure—merge a pull request and sync to deploy changes through environments. Each environment maintains its own deployed applications, topics, and configuration, while sharing the same underlying project structure and Git history.

Key benefits:

  • Git-aligned deployment - Environments automatically track Git branches for consistent code-to-infrastructure mapping
  • Isolated testing - Maintain separate environments for development, staging, and production with independent resources
  • Simplified promotion - Move changes through environments using standard Git merge operations and synchronization

Infrastructure as Code lets you define your entire pipeline infrastructure in YAML files stored in Git. Deployments, topics, environment variables, and configuration live in quix.yaml and app.yaml files alongside your application code. This makes infrastructure changes visible in Git diffs, reviewable in pull requests, and automatically applied through synchronization. No more clicking through interfaces to configure resources—infrastructure definitions are code that you edit, version, and deploy like any other file in your repository. An illustrative quix.yaml fragment follows the key benefits below.

Key benefits:

  • Declarative configuration - Define infrastructure in YAML rather than manual configuration or point-and-click interfaces
  • Version controlled - Track all infrastructure changes in Git with full history and rollback capability
  • Review before deploy - Infrastructure changes go through pull requests and code review like application code
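For orientation, here is a trimmed, illustrative quix.yaml fragment. Field names follow published Quix examples, but treat the specific values and structure as a sketch rather than a schema reference:

    metadata:
      version: 1.0

    deployments:
      - name: event-detection
        application: event-detection   # folder holding the app and its app.yaml
        deploymentType: Service
        version: latest
        resources:
          cpu: 200        # millicores
          memory: 500     # MB
          replicas: 1
        variables:
          - name: input
            inputType: InputTopic
            required: true
            value: sensor-readings

    topics:
      - name: sensor-readings
        configuration:
          partitions: 2
          retentionInMinutes: 1440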

Secrets Management provides secure storage for sensitive information like API keys, database credentials, and authentication tokens. Secrets are encrypted at rest and accessed through environment variables at runtime, never hardcoded in application code or configuration files. Different secrets can be defined per environment—development uses test credentials while production uses prod credentials, both referencing the same variable names in code. This separation enhances security, simplifies credential rotation, and ensures applications work consistently across environments without exposing sensitive data.

Key benefits:

  • Secure storage - Encrypt sensitive credentials and prevent them from appearing in code or Git repositories
  • Environment separation - Use different credentials per environment while keeping application code unchanged
  • Easy rotation - Update credentials in one place without modifying application code or redeploying containers

YAML Variables let you define values once in quix.yaml and reference them throughout your project configuration. Instead of repeating the same database connection string across multiple deployments, define it as a variable and reference it wherever needed. Variables can differ by environment—set one value for development and another for production, while the application YAML files remain identical. This reduces duplication, prevents configuration errors from inconsistent values, and makes environment-specific settings explicit and manageable.

Key benefits:

  • Eliminate duplication - Define common values once and reference them across multiple applications
  • Environment-specific config - Set different variable values per environment without changing application definitions
  • Consistent configuration - Prevent errors from mismatched values when the same setting appears in multiple places

CLI Commands provide command-line access to Quix Cloud operations, enabling integration with any CI/CD system. Deploy applications, synchronize environments, manage topics, or query status through shell commands in your GitHub Actions, GitLab CI, Jenkins, or custom automation scripts. This unlocks workflow automation beyond the web interface—trigger deployments on Git pushes, run tests against Quix environments, or orchestrate complex multi-step operations. The CLI brings Quix into your existing automation toolchain rather than forcing you to adapt to Quix-specific workflows.

Key benefits:

  • CI/CD integration - Automate Quix operations in GitHub Actions, GitLab CI, or any pipeline tool
  • Scriptable operations - Build custom automation workflows combining Quix commands with other tools
  • Flexible deployment - Trigger deployments, tests, or synchronization from any environment with CLI access

User Management lets you invite team members to your organization and control what they can access and modify. Assign users to specific projects, grant read-only or full access, and manage permissions at the organization and project level. This enables secure collaboration—engineers work on their assigned projects without accessing sensitive production systems, managers maintain visibility across all projects, and administrators control organization-wide settings. User permissions integrate with your organizational structure rather than forcing everyone into a single access level.

Key benefits:

  • Controlled access - Grant users exactly the permissions they need for their role without over-provisioning access
  • Team collaboration - Invite team members to work on specific projects while protecting other resources
  • Audit-ready - Track who has access to what for compliance and security requirements

Single Sign-On (SSO) provides enterprise-grade authentication through integration with your existing identity providers, such as Active Directory or Google Workspace. Users access Quix with their corporate credentials while administrators manage permissions centrally. Available exclusively on Enterprise plans, it adds audit logging, role-based access control, and rapid provisioning to meet security, compliance, and governance requirements.

Key benefits:

  • Meet enterprise security and compliance requirements — Centralized access control with audit trails, role-based permissions, and integration with MFA for visibility and governance
  • Onboard and offboard users instantly from one dashboard — Provision access to projects and environments in seconds or revoke all permissions immediately when team members leave
  • Engineers access tools without credential friction — Single login to Quix using existing corporate credentials eliminates password resets and authentication delays across projects

Personal Access Tokens (PATs) enable secure API authentication for scripts, applications, and CI/CD tools accessing Quix. Generate tokens with specific scopes and expiration dates, use them in automation workflows, and revoke them when no longer needed. Unlike password-based authentication, tokens can be created for specific purposes—one token for CI/CD deployments, another for monitoring scripts—and revoked individually without affecting other access. This provides secure API access with fine-grained control over what each token can do. A short authentication sketch appears after the key benefits.

Key benefits:

  • Secure automation - Authenticate scripts and CI/CD pipelines without sharing passwords or user credentials
  • Granular control - Create tokens with specific scopes for different purposes and revoke individually
  • Audit trail - Track API access by token to understand what automation is accessing which resources
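A short authentication sketch using the Python requests package. The base URL and route are assumptions for illustration; consult the Quix Portal API reference for actual endpoints:

    import os

    import requests

    token = os.environ["QUIX_PAT"]  # inject the token; never hardcode it

    response = requests.get(
        "https://portal-api.platform.quix.io/workspaces",  # assumed endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())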

Broker Configuration provides control over Kafka broker settings when defaults don't fit your requirements. Adjust retention policies, tune performance parameters, configure network access, or set resource limits to match your organization's needs. This flexibility matters for organizations with specific compliance requirements, performance characteristics, or operational constraints that need broker-level customization. While most users work with sensible defaults, Broker Configuration gives you access to underlying Kafka settings when you need them.

Key benefits:

  • Custom requirements - Tune broker settings to match your specific performance, compliance, or operational needs
  • Performance optimization - Adjust retention, batching, and resource settings for your workload characteristics
  • Network control - Configure broker networking to integrate with organizational security policies

The Permission System provides role-based access control with granular permissions for different actions and resources. Define who can view projects, edit pipelines, manage users, deploy to production, or access sensitive data. Permissions work at multiple levels—organization-wide roles for administrators, project-specific access for engineering teams, and read-only access for stakeholders. This ensures teams have the access they need while protecting critical resources from accidental changes or unauthorized access.

Key benefits:

  • Role-based security - Assign permissions based on job function rather than managing individual access rights
  • Granular control - Separate read, edit, and deploy permissions to limit what each user can modify
  • Production protection - Restrict deployment permissions to prevent accidental changes to critical environments

The Auditing System captures detailed logs of all user actions, configuration changes, deployments, and system events. Track who deployed what application, when configuration changed, which user accessed which resources, and what the system was doing during an incident. These logs support both compliance requirements (proving who did what for regulatory audits) and operational troubleshooting (understanding what changed before a problem started). Audit logs are immutable and queryable, providing complete history for security reviews and incident investigation.

Key benefits:

  • Compliance ready - Maintain complete audit trails for regulatory requirements and certification processes
  • Incident investigation - Reconstruct what happened during problems by reviewing who did what and when
  • Security monitoring - Detect unusual access patterns or unauthorized actions through comprehensive logging

Project Visibility lets you control who can see and access each project and its environments. Make projects visible to entire teams for collaboration, restrict sensitive production projects to specific users, or create private projects for early-stage development. This separation ensures engineers see the projects relevant to their work without cluttering their view with unrelated infrastructure. Visibility controls work alongside permissions—even if users can see a project, their permissions determine what they can actually do with it.

Key benefits:

  • Organized workspace - Users see only projects relevant to their work, not the entire organization's infrastructure
  • Sensitive project protection - Restrict visibility of production or sensitive projects to authorized personnel only
  • Team-based collaboration - Share projects with specific teams while keeping other projects private

Live Logs provide real-time streaming of application logs from all deployed services. As your applications write logs, they appear instantly in the Quix interface—no delay, no manual fetching. This makes troubleshooting immediate: deploy a change, watch the logs stream in, and see errors as they happen. You can filter logs by deployment, search for specific messages, or watch multiple applications simultaneously. Live Logs eliminate the cycle of deploy, wait, fetch logs, repeat—you see what's happening right now.

Key benefits:

  • Immediate visibility - See application logs in real-time as events happen, not minutes later
  • Fast troubleshooting - Identify errors and issues instantly during development or incident response
  • Multiple deployments - Monitor logs from several applications simultaneously to track distributed workflows

Historical Logs store application logs from past deployments for retrospective analysis. When investigating an issue that happened yesterday, last week, or months ago, retrieve the relevant logs to understand what your applications were doing at that time. Download logs for offline analysis, search through historical events to find patterns, or compare current behavior with past deployments. This is essential for debugging issues that don't reproduce consistently and for understanding how systems behaved before configuration changes.

Key benefits:

  • Retrospective debugging - Investigate past issues by accessing logs from when problems occurred
  • Offline analysis - Download historical logs for detailed analysis in external tools
  • Behavior comparison - Compare logs from different time periods to identify what changed before problems started

Build Logs capture the complete output from Docker container builds. When builds fail, you get detailed logs showing exactly which step failed, what error occurred, and the full context needed to fix it. These logs are centralized and accessible from the Quix interface—no need to SSH into build servers or tail log files on remote machines. Build Logs make deployment problems transparent: see dependency installation, compilation output, and any warnings or errors that occurred during the build process.

Key benefits:

  • Build debugging - See exactly why builds fail with complete output from each build step
  • Centralized access - Review build logs from the Quix interface instead of accessing build infrastructure
  • Deployment confidence - Verify builds completed successfully with warnings and errors visible before deployment

Deployment Metrics show resource consumption for every deployed application. Track CPU utilization, memory usage, and disk space over time to understand performance characteristics and identify resource constraints. These metrics help you right-size deployments—provision enough resources to handle load without over-allocating expensive compute. When applications slow down or fail, metrics show whether the cause is resource exhaustion. Monitoring is automatic for all deployments, giving you visibility into infrastructure utilization without additional configuration.

Key benefits:

  • Resource optimization - Right-size CPU and memory allocation based on actual usage patterns
  • Performance troubleshooting - Identify when applications are CPU or memory constrained causing slowdowns
  • Cost management - Reduce over-provisioned resources by seeing what deployments actually consume

Advanced Logs and Metrics provide long-term storage and querying of logs and performance data using Loki and Prometheus. Beyond the standard log retention, Enterprise accounts get extended history with powerful query capabilities. Use PromQL to analyze metric trends over time, create custom dashboards, set up alerts on complex conditions, or export data for correlation with external systems. This enables sophisticated performance analysis—identifying patterns that emerge over weeks or months, comparing resource usage across deployments, and building historical baselines for capacity planning.

Key benefits:

  • Long-term analysis - Access months of historical logs and metrics for trend analysis and capacity planning
  • Powerful querying - Use PromQL and LogQL for complex analysis of metrics and log patterns
  • Custom monitoring - Build specialized dashboards and alerts based on your specific operational requirements

A managed service that consolidates configuration metadata from heterogeneous sources—test rigs, simulation models, device parameters—into versioned JSON records. The service provides an API, Kafka event streams, and a datastore that together link configuration snapshots to test data, publish configuration change events, and enable real-time enrichment of telemetry with configuration context. See the enrichment sketch after the key benefits.

Key benefits:

  • Track exact system state for every test run - Link configuration versions to test data for certification traceability and root-cause analysis
  • Enrich streaming data without custom integration - Join configuration metadata with real-time telemetry using built-in lookup operations in Quix Streams
  • Build custom configuration workflows in Python - Ingest from any source using the service API, not forced into rigid UI forms
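The service's own lookup API is not reproduced here; the sketch below shows the general enrichment pattern with an in-memory dictionary standing in for the configuration store (in a real pipeline, the lookup would hit the managed API or a topic fed by its change events):

    from quixstreams import Application

    # Stand-in for the configuration service.
    CONFIG_BY_RIG = {"rig-07": {"config_version": "v42", "sensor_offset_c": 0.3}}

    app = Application(broker_address="localhost:9092", consumer_group="enrich")
    sdf = app.dataframe(app.topic("telemetry"))

    def enrich(row: dict) -> dict:
        cfg = CONFIG_BY_RIG.get(row.get("rig_id"), {})
        return {**row, **cfg}  # telemetry now carries configuration context

    sdf = sdf.apply(enrich).to_topic(app.topic("telemetry-enriched"))

    if __name__ == "__main__":
        app.run()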

QuixLake is a managed data lakehouse built on blob storage that unifies configuration metadata and measurement data in open formats. It provides a SQL query layer over Parquet and Avro files stored in your cloud account, enabling engineers to analyze test data across campaigns using APIs, Python SDKs, or SQL queries without managing Iceberg tables or Spark clusters. A short read example follows the key benefits below.

Key benefits:

  • Store unlimited test data at blob storage cost - Pay cloud storage rates, not database prices, while maintaining fast query performance through Parquet optimization
  • Query from any tool without vendor lock-in - Access data via Python, SQL, Jupyter notebooks, or custom applications using open formats
  • Single source of truth for all test phases - Unified storage for MIL, SIL, HIL, and vehicle test data with configuration context joined automatically
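Because the data lives as open Parquet in your own storage account, any Parquet-aware tool can read it directly. A short read example with pandas (bucket, path, and columns are hypothetical; reading s3:// paths also requires the s3fs package):

    import pandas as pd

    df = pd.read_parquet("s3://acme-test-data/quixlake/campaign=hil-2024/")
    print(df.groupby("rig_id")["temp_c"].describe())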

A managed connector that writes streaming data from Kafka topics to QuixLake. Automatically handles batching, Parquet serialization with compression, schema evolution, partition management, and commit conflict resolution. Engineers deploy with Python while the service manages Iceberg protocols, catalog integration, and storage optimization without requiring expertise in lakehouse implementation.

Key benefits:

  • Automatic schema evolution without manual intervention - Adapts table schemas as data structures change, eliminating pipeline maintenance for evolving data models
  • Built-in conflict resolution and retry logic - Handles concurrent write conflicts automatically with backpressure management, ensuring reliable ingestion at scale
  • Optimized Parquet batching for query performance - Balances write latency and file sizes to minimize storage costs and maximize query speed

A managed service that searches, filters, and replays historical test data from QuixLake through processing pipelines. Select specific test sessions by time range or metadata tags, then reprocess them with updated algorithms, corrected configurations, or new models into existing or new projects without re-running physical tests or simulations.

Key benefits:

  • Fix data quality issues after tests complete - Correct configuration errors or processing bugs by reprocessing historical sessions with updated parameters
  • Develop models on years of test data - Train and validate new algorithms against historical datasets without expensive test rig time
  • Reuse test data for new projects - Query existing test campaigns and replay them through new pipelines, maximizing ROI on physical testing

Data Query Engine provides a unified API for accessing all test and R&D data in your organization. Instead of engineers hunting for files in folder structures on desktops or building custom data access code, they query through a standard interface with consistent methods for filtering, sorting, and retrieving data. This moves organizations from distributed file-based storage to centralized cloud data management with programmatic access. Engineers write code against a stable API regardless of how data is stored underneath, enabling self-service access without IT intervention.

Key benefits:

  • Centralized access - Query all R&D data through a single API instead of navigating file systems and folders
  • Consistent interface - Use the same query patterns across all data sources and test types
  • Self-service - Engineers access data programmatically without IT requests or custom integration code

Secure - ISO27001 compliant

Certified to the most stringent industry standard for information security. Rest assured your data and code are in safe hands.

Performant - Low latency at scale

Built by Formula 1 engineers and in production with Formula 1 teams, Quix processes high-volume, high-velocity data with low latency.

Reliable - 99.99% SLA

Forget downtime. Quix is designed to deliver high availability for the most demanding use cases.

Choose the deployment option that fits your business

Multi-tenant cloud

Get started with our serverless Team plan for full flexibility. Quix Cloud Business offers dedicated node pools for production workloads and private networking.

Single tenant cloud

Run the entire Quix platform in a dedicated AWS, GCP or Azure account, or your own private cloud (BYOC). Gain full control over your environment with dedicated resources, custom network configurations, and the ability to meet strict compliance and performance requirements.

On-premise

Run Quix on-premise when security and compliance are non-negotiable, or where serious scale meets ultra-low latency requirements.