SeekrFlow pricing

Customize LLMs in one place and only pay for what you use.

Contact Us

Inference of base models (serverless)

Prices are per 1 million tokens (PPM), covering both input and output tokens.

Model Size (bn parameters)    PPM Input    PPM Output
0-4                           $2.15        $2.15
4.1-8                         $3.25        $3.25
8.1-21                        $4.50        $4.50
21.1-41                       $6.50        $6.50
41.1-80                       $8.00        $8.00
80.1-110                      $10.00       $10.00
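
As a quick sanity check on the tiers above, a cost estimate can be sketched in a few lines. The helper name `base_inference_cost` and the tier-lookup logic are illustrative assumptions, not part of the SeekrFlow API; the prices come from the table.

```python
# Illustrative sketch (not a SeekrFlow API): estimate serverless inference
# cost for a base model from the published tiers above.
# Each entry is (upper bound of tier in bn parameters, $ per 1M tokens).
BASE_INFERENCE_PPM = [
    (4, 2.15),
    (8, 3.25),
    (21, 4.50),
    (41, 6.50),
    (80, 8.00),
    (110, 10.00),
]

def base_inference_cost(model_size_bn: float,
                        input_tokens: int,
                        output_tokens: int) -> float:
    """Cost in USD; input and output tokens are billed at the same rate."""
    for max_size, ppm in BASE_INFERENCE_PPM:
        if model_size_bn <= max_size:
            return (input_tokens + output_tokens) * ppm / 1_000_000
    raise ValueError("Model size above the largest published tier (110 bn)")

# Example: an 8 bn-parameter model, 2M input + 1M output tokens
# falls in the 4.1-8 tier: 3,000,000 tokens * $3.25 / 1M = $9.75
print(base_inference_cost(8, 2_000_000, 1_000_000))
```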

Fine-tuning models

Prices are per 1 million training tokens (PPM).

Model Size (bn parameters)    PPM Training
0-4                           $1.90
4.1-8                         $2.40
8.1-21                        $3.00
21.1-41                       $4.00
41.1-80                       $6.00
80.1-110                      $9.00

Inference of fine-tuned models (serverless)

Prices are per 1 million tokens (PPM), covering both input and output tokens.

Model Size (bn parameters)    PPM Input    PPM Output
0-4                           $2.50        $2.50
4.1-8                         $3.50        $3.50
8.1-21                        $5.00        $5.00
21.1-41                       $7.00        $7.00
41.1-80                       $9.00        $9.00
80.1-110                      $12.00       $12.00
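
Putting the fine-tuning and fine-tuned-inference tiers together, an end-to-end estimate (train once, then serve) can be sketched as follows. The function names and tier-lookup structure are illustrative assumptions; the prices come from the two tables above.

```python
# Illustrative sketch (not a SeekrFlow API): total cost of fine-tuning a
# model, then serving it serverless, using the published tiers above.
# Keys are the upper bound of each tier in bn parameters.
FINE_TUNE_PPM = {4: 1.90, 8: 2.40, 21: 3.00, 41: 4.00, 80: 6.00, 110: 9.00}
FT_INFERENCE_PPM = {4: 2.50, 8: 3.50, 21: 5.00, 41: 7.00, 80: 9.00, 110: 12.00}

def tier_price(table: dict, model_size_bn: float) -> float:
    """Return the $ per 1M tokens for the smallest tier that fits the model."""
    for max_size in sorted(table):
        if model_size_bn <= max_size:
            return table[max_size]
    raise ValueError("Model size above the largest published tier (110 bn)")

def total_cost(model_size_bn: float, training_tokens: int,
               input_tokens: int, output_tokens: int) -> float:
    """Training cost plus serverless inference cost, in USD."""
    train = training_tokens * tier_price(FINE_TUNE_PPM, model_size_bn) / 1_000_000
    serve = ((input_tokens + output_tokens)
             * tier_price(FT_INFERENCE_PPM, model_size_bn) / 1_000_000)
    return train + serve

# Example: an 8 bn model, 50M training tokens, then 10M input + 5M output:
# training 50 * $2.40 = $120.00; serving 15 * $3.50 = $52.50 → $172.50
print(total_cost(8, 50_000_000, 10_000_000, 5_000_000))
```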

Dedicated compute for inference

For both LLM workloads (base or fine-tuned models) and non-LLM workloads, customers can reserve dedicated compute for model usage. Usage is billed per hour, per GPU/AI Accelerator.

Price per hour per GPU/AI Accelerator
$3.33
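
For budgeting, the flat hourly rate multiplies out simply. The helper below is an illustrative sketch using the published $3.33/hour rate; the function name and the 30-day-month assumption are ours.

```python
# Illustrative sketch: cost of reserving dedicated compute at the
# published rate of $3.33 per hour per GPU/AI Accelerator.
HOURLY_RATE = 3.33  # USD per hour per GPU/AI Accelerator

def dedicated_cost(accelerators: int, hours: float) -> float:
    """Total USD for the given number of accelerators and hours."""
    return accelerators * hours * HOURLY_RATE

# Example: 4 accelerators reserved around the clock for a 30-day month
# (720 hours): 4 * 720 * $3.33 = $9,590.40
print(dedicated_cost(4, 30 * 24))
```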

SeekrFlow software licensing

License SeekrFlow software to run on your infrastructure of choice: on-premises hardware, public, private, and hybrid clouds, or pre-installed on an AI Accelerator Appliance.

License (per GPU/AI Accelerator)    Price     Critical Support
1 year                              $1,800    $360
3 years                             $5,400    $1,080
5 years                             $7,200    $1,440
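
The multi-year terms are easiest to compare on an effective annual basis. The sketch below uses the figures from the table above; the helper name and the choice to fold Critical Support into the annual figure are illustrative assumptions.

```python
# Illustrative sketch: effective annual cost per GPU/AI Accelerator for
# each license term, with Critical Support included (figures from the
# table above).
LICENSES = {1: (1800, 360), 3: (5400, 1080), 5: (7200, 1440)}

def annual_cost(years: int) -> float:
    """License price plus Critical Support, spread over the term, in USD/year."""
    price, support = LICENSES[years]
    return (price + support) / years

for years in sorted(LICENSES):
    print(f"{years}-year term: ${annual_cost(years):,.0f}/year")
# The 1- and 3-year terms both work out to $2,160/year; the 5-year term
# drops to $1,728/year, a 20% saving.
```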

Build and run trusted AI in one place

Contact Us