Streamkap CLI: Manage Streaming Pipelines from the Command Line
The Streamkap CLI brings pipeline management to your terminal. Create connectors, monitor pipelines, and automate workflows without leaving the command line.
Managing streaming data pipelines through a web UI works well for exploration and one-off tasks, but it slows you down when you need to repeat actions across environments, script deployments, or integrate pipeline management into existing tooling. The Streamkap CLI puts full pipeline control in your terminal, so you can manage connectors, check pipeline health, and automate workflows with the same tools you already use for infrastructure.
Why a CLI for Streaming Pipelines
Data engineers and platform teams spend much of their day in the terminal. Whether you are writing Terraform configs, debugging Kubernetes pods, or deploying application code through CI/CD, context-switching to a browser-based UI breaks your flow.
A CLI also opens the door to automation. You can write shell scripts that provision pipelines as part of environment setup, build health checks into your deployment process, or version-control your connector configurations alongside application code. None of that is practical with a point-and-click interface.
The Streamkap CLI was built with these workflows in mind. It covers the full pipeline lifecycle: creating and configuring connectors, starting and stopping pipelines, monitoring throughput and lag, and managing your account, all through a consistent command interface.
Installation and Setup
Install the CLI globally via npm:
npm install -g @streamkap/tools
This requires Node.js 20 or later. Full installation details are in the CLI documentation.
Once installed, authenticate with your API credentials:
streamkap auth login --client-id YOUR_CLIENT_ID --client-secret YOUR_CLIENT_SECRET
You can generate API credentials (Client ID and Client Secret) from your Streamkap dashboard. The CLI stores your credentials locally and handles token refresh automatically, so you only need to authenticate once per machine. For CI/CD environments, set environment variables instead:
export STREAMKAP_CLIENT_ID="your-client-id"
export STREAMKAP_CLIENT_SECRET="your-client-secret"
Verify your setup:
streamkap auth status
This confirms that the CLI can reach the Streamkap API and your credentials are valid.
Key Commands
The CLI groups commands under resource types: pipelines, connectors, transforms, and account. Here is a walkthrough of the commands you will use most.
List Pipelines
streamkap pipelines list
This returns a table of all pipelines in your account, including their status, source and destination connectors, and last activity timestamp. Add --format json to get machine-readable output for scripting:
streamkap pipelines list --format json
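As a sketch of what that scripting might look like: piping the JSON into jq lets you summarize or filter pipelines in one line. The field names below are illustrative assumptions, not a documented schema, and the sample file stands in for real CLI output.

```shell
# Sample of what `streamkap pipelines list --format json` might return.
# Field names here are assumptions for illustration, not a documented schema.
cat > /tmp/pipelines.json <<'EOF'
[
  {"name": "orders-to-warehouse", "status": "running", "lag_seconds": 12},
  {"name": "users-to-warehouse",  "status": "paused",  "lag_seconds": 0},
  {"name": "events-to-lake",      "status": "running", "lag_seconds": 450}
]
EOF

# Count pipelines per status. In practice you would pipe the CLI output
# in directly: streamkap pipelines list --format json | jq ...
jq -r 'group_by(.status) | .[] | "\(.[0].status): \(length)"' /tmp/pipelines.json
```

Note that jq's `group_by` sorts by the grouping key, so the statuses come out in alphabetical order.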
Create a Connector
Connectors are the source and destination endpoints of a pipeline. Create one by specifying the type and passing configuration as flags or from a JSON file:
streamkap connectors create \
--type source \
--connector postgresql \
--name "production-orders-db" \
--config config/postgres-source.json
The config file holds connection details, table selections, and any connector-specific settings. You can also pass individual properties inline:
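As a rough sketch of what such a file might contain (the field names here are illustrative guesses, not the documented schema; consult the CLI documentation for the real connector properties):

```json
{
  "hostname": "db.internal.example.com",
  "port": 5432,
  "database": "orders",
  "user": "replication_user",
  "tables": ["public.orders", "public.order_items"]
}
```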
streamkap connectors create \
--type destination \
--connector snowflake \
--name "analytics-warehouse" \
--set hostname=xy12345.snowflakecomputing.com \
--set database=ANALYTICS \
--set schema=PUBLIC
Check Pipeline Status
Get a quick health check on a running pipeline:
streamkap pipelines status my-pipeline-id
This shows the pipeline state (running, paused, or errored), the current replication lag, and the number of records processed in the last hour. For continuous monitoring in a terminal session, add the --watch flag:
streamkap pipelines status my-pipeline-id --watch
View Metrics
Pull throughput and lag metrics for a specific time window:
streamkap metrics get my-pipeline-id --period 24h
Output includes records per second, bytes transferred, and average end-to-end latency. Combine this with --format json to pipe metrics into monitoring tools or alerting scripts.
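For instance, a minimal alerting guard might parse that JSON and fail when latency crosses a threshold. The field names below are assumptions for illustration, not the documented metrics schema, and the sample file stands in for real CLI output:

```shell
# Hypothetical output shape of `streamkap metrics get ... --format json`.
cat > /tmp/metrics.json <<'EOF'
{"records_per_second": 1250, "bytes_transferred": 734003200, "avg_latency_ms": 840}
EOF

# jq -e exits non-zero when the expression evaluates to false, so this
# one line can gate a cron job or CI step on latency staying under 1s.
jq -e '.avg_latency_ms < 1000' /tmp/metrics.json
```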
Manage Configurations
Export a connector’s full configuration to a file:
streamkap connectors export my-connector-id > connector-config.json
Edit the file locally, then apply changes:
streamkap connectors update my-connector-id --config connector-config.json
This pattern makes it easy to version-control configurations in Git and promote changes across environments (development to staging to production) using standard code review processes.
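One practical wrinkle with this pattern: JSON key order can differ between a fresh export and the file already in Git, producing noisy diffs. Normalizing both sides with `jq -S` (sort keys) before comparing keeps reviews clean. This sketch uses plain jq and shell, nothing Streamkap-specific:

```shell
# Two configs with identical content but different key order.
echo '{"database": "ANALYTICS", "schema": "PUBLIC"}' > /tmp/exported.json
echo '{"schema": "PUBLIC", "database": "ANALYTICS"}' > /tmp/tracked.json

# jq -S sorts object keys, so semantically identical configs compare equal.
if [ "$(jq -S -c . /tmp/exported.json)" = "$(jq -S -c . /tmp/tracked.json)" ]; then
  echo "configs match"
else
  echo "configs drifted"
fi
```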
Automation and Scripting
The CLI is built for scripting. Every command supports --format json output and returns meaningful exit codes, so you can compose commands in shell scripts and handle errors cleanly.
Here is an example script that provisions a complete pipeline from config files:
#!/bin/bash
set -e
# Create source connector
SOURCE_ID=$(streamkap connectors create \
--type source \
--connector postgresql \
--name "orders-source" \
--config configs/source.json \
--format json | jq -r '.id')
# Create destination connector
DEST_ID=$(streamkap connectors create \
--type destination \
--connector snowflake \
--name "warehouse-dest" \
--config configs/destination.json \
--format json | jq -r '.id')
# Create and start the pipeline
streamkap pipelines create \
--name "orders-to-warehouse" \
--source "$SOURCE_ID" \
--destination "$DEST_ID" \
--start
echo "Pipeline created and running."
You can check on pipeline health in batch too. This snippet flags any pipelines with replication lag above a threshold:
streamkap pipelines list --format json | jq -r '
.[] | select(.lag_seconds > 300) | "\(.name): \(.lag_seconds)s behind"
'
CI/CD Integration
Integrating the Streamkap CLI into your deployment process lets you treat pipeline configuration as code. A few patterns that work well:
Environment promotion. Store connector configs in your repo under streamkap/configs/. In your CI pipeline, run streamkap connectors update after deploying application changes to keep pipelines in sync with schema migrations.
Pre-deploy health checks. Before deploying a new application version, verify that all pipelines are healthy:
streamkap pipelines list --format json | jq -e 'all(.status == "running")'
If any pipeline is not in the running state, the expression evaluates to false and the jq -e flag produces a non-zero exit, failing the CI step.
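You can sanity-check this jq -e behavior locally with hand-written sample data before wiring it into CI (the JSON here is illustrative, not real account output):

```shell
# All pipelines running: jq prints true and -e sets exit code 0.
echo '[{"status":"running"},{"status":"running"}]' \
  | jq -e 'all(.status == "running")' && echo "check passed"

# One pipeline errored: jq prints false and -e sets exit code 1,
# which is what would fail the CI step.
echo '[{"status":"running"},{"status":"error"}]' \
  | jq -e 'all(.status == "running")' || echo "check failed"
```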
Automated rollback. If a schema migration breaks a pipeline, your CI/CD system can detect the error and automatically pause the affected pipeline, apply a rollback migration, and restart:
streamkap pipelines pause my-pipeline-id
# ... run rollback migration ...
streamkap pipelines resume my-pipeline-id
Pairing with the Streamkap MCP Server
The CLI is one half of Streamkap’s developer experience toolkit. The other half is the Streamkap MCP Server, which lets AI-powered coding assistants interact with your pipelines through natural language.
Where the CLI excels at scripted, repeatable operations, the MCP Server is ideal for ad-hoc exploration: asking an AI assistant to check pipeline health, describe a connector’s configuration, or troubleshoot an error. Together, they cover both the automated and interactive sides of pipeline management.
For example, you might use the MCP Server to investigate a lag spike during development, then encode the fix as a CLI command in your deployment scripts so it never happens again.
Get Started
The Streamkap CLI is available now. Install it, authenticate with your API credentials, and start managing pipelines from your terminal in minutes.
- Install: npm install -g @streamkap/tools
- Authenticate: streamkap auth login --client-id YOUR_CLIENT_ID --client-secret YOUR_CLIENT_SECRET
- Explore: streamkap --help
Full command reference and configuration guides are available in the CLI documentation. If you do not have a Streamkap account yet, sign up for a free trial to get your API credentials and start building pipelines from the command line.