Getting Started
Constellaxion is a CLI tool for deploying and fine-tuning open-source models on your private cloud. Whether you're working with foundation models or fine-tuning with your own dataset, Constellaxion helps you go from config to cloud jobs in just a few commands.
- Installation
- Prerequisites
- Initializing a Project
- Deploying a Foundation Model
- Fine-Tuning a Model
- Prompting a Model
Installation
pip install constellaxion
Prerequisites
Constellaxion is designed to be cloud-agnostic, but each environment has its own setup.
Initializing a Project
To begin, create a directory with a model.yaml configuration file.
For model deployment:
- Only the model.yaml file is needed

For fine-tuning:
- Include train.csv, val.csv, and test.csv
- These CSVs should contain two columns: prompt, response
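For illustration only, a train.csv might look like this (the rows below are made-up examples; only the prompt and response columns are required):

```csv
prompt,response
"Bitcoin jumps 8% after ETF approval","positive"
"Exchange pauses withdrawals amid outage","negative"
```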
Then run:
constellaxion init
This generates a job.json file based on your model.yaml.
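Before initializing a fine-tuning project, it can help to sanity-check the dataset files. The helper below is not part of constellaxion; it is a small sketch that verifies each CSV carries the two required columns:

```python
# Hypothetical pre-flight check (not part of the constellaxion CLI):
# verify that a dataset CSV has the two required columns,
# "prompt" and "response", before running `constellaxion init`.
import csv

REQUIRED_COLUMNS = {"prompt", "response"}

def validate_dataset(path: str) -> None:
    """Raise ValueError if the CSV at `path` is missing a required column."""
    with open(path, newline="", encoding="utf-8") as f:
        header = set(csv.DictReader(f).fieldnames or [])
    missing = REQUIRED_COLUMNS - header
    if missing:
        raise ValueError(f"{path}: missing column(s): {', '.join(sorted(missing))}")
```

Run it over train.csv, val.csv, and test.csv from the project directory; it raises if a header is malformed, and does nothing otherwise.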
Deploying a Foundation Model
To deploy a foundation model, run:
constellaxion model deploy
Example model.yaml for foundation model deployment:
```yaml
model:
  id: cxn-foundation-model
  base: "tiiuae/falcon-7b-instruct"
deploy:
  gcp:
    project_id: your-project-id
    region: europe-west2
```
Fine-Tuning a Model
Train a model on your dataset with:
constellaxion model train
This kicks off a custom training job on Vertex AI using your local data and model base.
Then serve it with:
constellaxion model serve
Example model.yaml for fine-tuning:
```yaml
model:
  id: crypto-sentiment-v2
  base: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
dataset:
  train: ./train.csv
  val: ./val.csv
  test: ./test.csv
training:
  epochs: 1
  batch_size: 16
deploy:
  gcp:
    project_id: your-project-id
    region: us-central1
```
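Putting the steps together, a full fine-tuning workflow uses only the commands shown above, run from the project directory containing model.yaml and the three CSVs:

```shell
constellaxion init          # generates job.json from model.yaml
constellaxion model train   # launches the training job on Vertex AI
constellaxion model serve   # serves the fine-tuned model
constellaxion model prompt  # chat with the deployed model
```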
Prompting a Model
Chat directly with a deployed model using:
constellaxion model prompt
Your terminal becomes a chat window for the deployed model (defined in job.json).
Type exit or quit to stop.
Supported Models
We are actively expanding support for more models and cloud environments.