Cortiqa API.
Not ready yet. But it will be.
We are building an API that will let you integrate Cortiqa's AI capabilities directly into your own products and workflows — code generation, analysis, conversational AI, and more.
We do not have our own model yet. We are working on it. This page is here to tell you what we are planning, not to sell you something that does not exist. When it is ready, you will be the first to know.
Where we stand
Right now, Cortiqa's products — Cordenex and Corde — use third-party AI models under the hood. They work well, and we are proud of what we have built on top of them.
But our long-term goal is to develop our own models, optimized specifically for code understanding, generation, and developer workflows. That is what the Cortiqa API will be built on.
We are not going to release an API that is just a wrapper around someone else's model. When we ship this, it will be built on our own technology. That takes time, and we would rather be honest about that than launch something half-baked.
What we are building toward
Product development
Cordenex and Corde are live and improving. We are learning from real usage patterns, understanding what developers need, and refining our approach to code AI.
Model research and development
Building our own models trained on properly licensed, public code data. Focused on code understanding, generation accuracy, and context-aware completions for real-world projects.
Private API beta
Early access to the Cortiqa API for a small group of developers and companies. REST and streaming endpoints for code generation, analysis, and conversational AI.
Public API launch
General availability with documentation, SDKs, rate limiting, usage-based pricing, and enterprise support. Self-hosted options for organizations that need them.
Planned API capabilities
These are planned features. Nothing on this list is available yet. Specifics may change as development progresses.
Code generation
Generate code from natural language descriptions. Specify language, framework, and style. Context-aware generation that understands project structure when provided.
Code analysis
Submit code for review, bug detection, security analysis, and performance suggestions. Get structured responses with line-level annotations and severity ratings.
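To make "structured responses with line-level annotations" concrete, here is a sketch of what such a response could look like and how a client might consume it. Every field name (findings, line, severity, category, message) is an illustrative assumption, not a published schema.

```python
# Hypothetical shape of a code-analysis response: line-level
# annotations with severity ratings. Field names are assumptions.
example_response = {
    "findings": [
        {
            "line": 14,
            "severity": "high",
            "category": "security",
            "message": "SQL query built by string concatenation; use parameters.",
        },
        {
            "line": 27,
            "severity": "low",
            "category": "performance",
            "message": "A list comprehension would avoid repeated appends.",
        },
    ],
}

# Group findings by severity so a CI step could fail on "high" only.
by_severity = {}
for finding in example_response["findings"]:
    by_severity.setdefault(finding["severity"], []).append(finding)

print(sorted(by_severity))
```

A response shaped like this lets tooling filter by severity or map findings back to editor gutters without parsing free text.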
Code explanation
Send code and receive plain-language explanations of what it does, how it works, and why it is written that way. Configurable detail level from summary to line-by-line.
Conversational AI
Multi-turn conversation endpoint for building chat interfaces, copilots, and assistants. Persistent context within sessions. Configurable system prompts and behavior.
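As a sketch of what "persistent context within sessions" means for a client, the snippet below maintains a message list with a configurable system prompt. The role names and message shape mirror common chat APIs and are assumptions, not a final Cortiqa schema.

```python
# Minimal sketch of client-side multi-turn context. Roles and
# message shape are assumptions modeled on common chat APIs.
def make_session(system_prompt):
    return [{"role": "system", "content": system_prompt}]

def add_turn(messages, user_text, assistant_text):
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages

session = make_session("You are a concise code assistant.")
add_turn(
    session,
    "What does `git rebase -i` do?",
    "It lets you edit, squash, or reorder commits interactively.",
)

# The whole list would be sent with each request, so the server
# sees the full conversation on every turn.
print(len(session))
```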
Code transformation
Refactor, translate between languages, modernize legacy code, and convert between frameworks. Input code in one format, get it back in another.
Embeddings
Generate embeddings for code and natural language. Use them for semantic search, similarity matching, clustering, and building your own retrieval-augmented systems.
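To show what semantic search over embeddings looks like in practice, here is a self-contained sketch using cosine similarity. The vectors are hard-coded toy values standing in for what an embeddings endpoint would return; the function names are ours, not the API's.

```python
import math

# Toy semantic search: rank "documents" by cosine similarity of
# their embedding vectors to a query embedding. Vectors here are
# hand-picked stand-ins for real embedding output.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

corpus = {
    "parse_json": [0.9, 0.1, 0.0],
    "open_socket": [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]  # stand-in embedding of "read a JSON file"

best = max(corpus, key=lambda name: cosine(query, corpus[name]))
print(best)  # parse_json
```

The same ranking loop is the core of retrieval-augmented systems: embed the corpus once, embed each query, and retrieve the nearest neighbors as context.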
What it will look like
This is a preview of the API design we are working toward. The interface will be familiar to anyone who has used modern AI APIs. Endpoints, request formats, and response structures may change.
Planned endpoints
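As a preview of the shape we are aiming for, here are illustrative request payloads for two of the planned capabilities. The host is a placeholder, and every path and field name is an assumption based on the capabilities described above — nothing here is final.

```python
# Illustrative request payloads for planned endpoints. The base
# URL, paths, and field names are all placeholders.
BASE = "https://api.cortiqa.example/v1"  # placeholder host

generate_request = {
    "endpoint": f"{BASE}/code/generate",
    "body": {
        "prompt": "binary search over a sorted list of ints",
        "language": "python",
        "stream": True,
    },
}

analyze_request = {
    "endpoint": f"{BASE}/code/analyze",
    "body": {
        "code": "def f(x): return eval(x)",
        "checks": ["security"],
    },
}

for req in (generate_request, analyze_request):
    print(req["endpoint"])
```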
API design principles
Simple and predictable
REST endpoints with JSON request and response bodies. Standard HTTP status codes. Consistent naming conventions. If you have used any modern API, this will feel familiar.
Streaming support
All generation endpoints will support server-sent events for real-time streaming. Build responsive interfaces that show output as it is generated, not after.
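Server-sent events arrive as `data:` lines separated by blank lines, so a client reassembles the output by concatenating deltas as they stream in. The sketch below parses a captured SSE stream; the JSON payload format and the `[DONE]` sentinel are illustrative assumptions, not a published protocol.

```python
import json

# A captured SSE stream (illustrative payload format).
sample_stream = (
    'data: {"delta": "def "}\n\n'
    'data: {"delta": "add(a, b):"}\n\n'
    "data: [DONE]\n\n"
)

def iter_sse_data(text):
    """Yield parsed JSON payloads from "data: ..." SSE lines."""
    for line in text.splitlines():
        if line.startswith("data: "):
            payload = line[len("data: "):]
            if payload == "[DONE]":  # assumed end-of-stream sentinel
                return
            yield json.loads(payload)

chunks = [event["delta"] for event in iter_sse_data(sample_stream)]
print("".join(chunks))  # def add(a, b):
```

Rendering each delta as it arrives is what makes a streaming UI feel responsive: the user sees output build up instead of waiting for the full response.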
Comprehensive SDKs
Official client libraries for Python, JavaScript/TypeScript, Go, and Rust. Each SDK will handle authentication, retries, streaming, and error handling out of the box.
Usage-based pricing
Pay for what you use, measured in tokens. No seat licenses, no minimum commitments on standard plans. Enterprise customers get volume discounts and custom terms.
Rate limiting with transparency
Clear rate limits per plan tier. Rate limit headers on every response so you always know where you stand. Automatic backoff recommendations in error responses.
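A client can turn those headers into a wait time before retrying. The sketch below assumes the common `X-RateLimit-*` header convention; the actual header names are not finalized.

```python
# Sketch of honoring rate-limit headers. Header names follow the
# common X-RateLimit-* convention -- an assumption, not a spec.
def backoff_seconds(headers, now):
    """Return how long to wait before the next request."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0  # budget left, no need to wait
    reset_at = int(headers.get("X-RateLimit-Reset", now))
    return float(max(0, reset_at - now))  # seconds until the window resets

headers = {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1700000030"}
print(backoff_seconds(headers, now=1700000000))  # 30.0
```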
Extensive documentation
API reference, quickstart guides, code examples in every supported language, and interactive playground. Documentation will be treated as a first-class product.
Planned SDK support
Official client libraries will be available for the following languages at launch. Community libraries for other languages will be supported through our open-source program.
Python
pip install cortiqa
Full SDK with sync and async support. Compatible with Python 3.8+.
JavaScript / TypeScript
npm install cortiqa
Works in Node.js and edge runtimes. Full TypeScript types included.
Go
go get github.com/cortiqa/cortiqa-go
Idiomatic Go client with context support and streaming.
Rust
cortiqa = "0.1"
Async client built on tokio. Zero-copy streaming support.
Planned security model
Data handling
- API inputs will not be used for model training
- Request and response content will not be stored after processing
- All API traffic encrypted with TLS 1.3
- API keys with granular scoping and rotation
- Usage logs contain metadata only, not request content
- Self-hosted API option for enterprise customers
Authentication and access
- API key authentication with Bearer token
- Multiple keys per account with different permissions
- Key rotation without downtime
- IP allowlisting available on enterprise plans
- OAuth 2.0 for user-facing applications
- Webhook signing for secure callbacks
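Webhook signing typically means the server attaches an HMAC of the payload, which the receiver recomputes and compares. The sketch below assumes a hex HMAC-SHA256 scheme over the raw body; the real signing scheme and header name will be documented when the API ships.

```python
import hashlib
import hmac

# Sketch of verifying a signed webhook. The hex HMAC-SHA256
# scheme and the secret format are assumptions.
def verify_webhook(secret, payload, signature):
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, signature)

secret = "whsec_example"  # hypothetical secret
payload = b'{"event": "generation.completed"}'
good_sig = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

print(verify_webhook(secret, payload, good_sig))    # True
print(verify_webhook(secret, payload, "deadbeef"))  # False
```

Verifying the signature before parsing the body is what makes callbacks safe: a request that fails verification can be rejected without trusting any of its contents.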
Common questions
When will the API be available?
We do not have a firm date. We are building our own models first, and that work is ongoing. We will open a private beta when the technology is ready, not before. Join the waitlist and we will notify you as soon as early access is available.
Why not just wrap an existing model and launch now?
We could, but that is not what we want to build. A wrapper around someone else's model does not give us control over quality, pricing, privacy, or performance. We want to offer something genuinely ours — optimized for code, with the privacy guarantees our users expect.
Will the API use the same models as Cordenex and Corde?
Eventually, yes. The goal is for all Cortiqa products to run on our own models. Cordenex and Corde currently use third-party models while we develop our own. When our models are ready, all products — including the API — will transition to them.
Will there be a free tier?
That is the plan. We want developers to be able to experiment and build prototypes without a credit card. The free tier will have usage limits, but it will give you access to the full API surface.
Can I use the API to build commercial products?
Yes. The API will be available for commercial use on all plans. There will be no restrictions on what you build with it, as long as it complies with our acceptable use policy.
Will there be self-hosted options for the API?
Yes, for enterprise customers. When our models are ready, we will offer on-premise deployment so organizations can run the full inference stack within their own infrastructure.
We are building this the right way.
That means building it on our own terms.
The Cortiqa API will be available when it is ready — built on our own models, with the quality and privacy standards we hold ourselves to. No shortcuts.
If you want to be notified when early access opens, join the waitlist. If you have a specific use case you would like to discuss, reach out directly.