Prompt & GenAI Types
Prompt types support managed prompt engineering workflows for generative AI. Unlike other product type extensions that attach metadata at the product level, prompt metadata extends individual fields within a product — each field represents a distinct prompt.
Available Types
Prompt Collection
A managed collection of AI prompts, supporting both single-turn and multi-turn (chat) formats.
- Icon: MessageSquare · Color: #f97316 (orange)
- Created by: Manual registration
How Prompt Collections Work
A Prompt Collection is a data product where each field (row in the Schema tab) represents an individual prompt. The Schema tab is enhanced with:
- An Add Prompt button for creating new prompts
- A prompt editor modal for configuring each prompt's metadata
- Visual indicators showing model compatibility and output format
Prompt Field Metadata
Each prompt (field) in a collection carries the following metadata:
Message Sequence
The core of any prompt — the sequence of messages sent to the model:
```json
[
  {"role": "system", "content": "You are a helpful data analyst..."},
  {"role": "user", "content": "Summarize the following dataset: {{data}}"},
  {"role": "assistant", "content": "Here is a summary..."}
]
```
Multi-turn prompts can include system, user, and assistant messages to demonstrate the expected conversation flow.
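A message sequence can be validated before it is stored or sent to a model. The following is a minimal sketch, assuming only the three role names shown above; the `Message` type and `validate_sequence` helper are illustrative, not part of the product API:

```python
from typing import TypedDict

ALLOWED_ROLES = {"system", "user", "assistant"}

class Message(TypedDict):
    role: str
    content: str

def validate_sequence(messages: list[Message]) -> None:
    """Check that every message uses a known role and has non-empty content."""
    for i, msg in enumerate(messages):
        if msg["role"] not in ALLOWED_ROLES:
            raise ValueError(f"message {i}: unknown role {msg['role']!r}")
        if not msg["content"]:
            raise ValueError(f"message {i}: empty content")

validate_sequence([
    {"role": "system", "content": "You are a helpful data analyst..."},
    {"role": "user", "content": "Summarize the following dataset: {{data}}"},
])
```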
Template Variables
Define placeholder variables that are injected at runtime:
```json
[
  {"name": "data", "type": "string", "description": "The dataset to analyze"},
  {"name": "format", "type": "enum", "options": ["json", "markdown", "csv"]}
]
```
Model Configuration
| Property | Description | Example |
|---|---|---|
| Model Compatibility | Which LLM models this prompt works with | ["gpt-4", "claude-3-sonnet", "gemini-pro"] |
| Output Format | Expected response format | json, markdown, text, csv |
| Temperature | Sampling temperature | 0.7 |
| Max Tokens | Maximum output token count | 2048 |
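The configuration properties above can be captured in a small typed structure with range checks. This is a sketch: the field names mirror the table, but the `ModelConfig` class is illustrative, not part of the product:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    model_compatibility: list[str]   # e.g. ["gpt-4", "claude-3-sonnet", "gemini-pro"]
    output_format: str               # json, markdown, text, or csv
    temperature: float = 0.7
    max_tokens: int = 2048

    def __post_init__(self) -> None:
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0.0, 2.0]")
        if self.max_tokens <= 0:
            raise ValueError("max_tokens must be positive")
        if self.output_format not in {"json", "markdown", "text", "csv"}:
            raise ValueError(f"unknown output format: {self.output_format!r}")

cfg = ModelConfig(["gpt-4", "claude-3-sonnet"], "json")
```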
Categorization
| Property | Description | Example |
|---|---|---|
| Use Case Category | What kind of task this prompt performs | summarization, extraction, classification, generation |
| Includes | References to other prompts (composition) | ["system_prompt_v2", "safety_guardrails"] |
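Because `Includes` lets prompts reference other prompts, resolving a prompt means walking its include graph, and that walk must tolerate shared includes while rejecting cycles. A minimal sketch, where the `registry` dict and `resolve_includes` function are illustrative stand-ins for however the collection stores its prompts:

```python
def resolve_includes(
    name: str,
    registry: dict[str, dict],
    _path: tuple[str, ...] = (),
) -> list[str]:
    """Return `name` plus all transitively included prompt names.

    Tracks the current traversal path to detect circular includes,
    and de-duplicates prompts that are included via multiple routes.
    """
    if name in _path:
        raise ValueError(f"circular include involving {name!r}")
    resolved = [name]
    for included in registry[name].get("includes", []):
        for prompt in resolve_includes(included, registry, _path + (name,)):
            if prompt not in resolved:
                resolved.append(prompt)
    return resolved

registry = {
    "summarize_v3": {"includes": ["system_prompt_v2", "safety_guardrails"]},
    "system_prompt_v2": {},
    "safety_guardrails": {},
}
order = resolve_includes("summarize_v3", registry)
# order == ["summarize_v3", "system_prompt_v2", "safety_guardrails"]
```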
Examples
Provide example inputs and expected outputs for documentation and testing:
Example inputs:

```json
[{"topic": "AI safety", "format": "markdown"}]
```

Example outputs:

```json
[{"summary": "## AI Safety Overview\n\nAI safety encompasses..."}]
```
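Example inputs double as test fixtures: each one should supply a value for every declared template variable. A minimal consistency check (the variable and example shapes follow the JSON snippets above; the `check_examples` function is illustrative):

```python
def check_examples(variables: list[dict], example_inputs: list[dict]) -> None:
    """Verify every example input provides all declared template variables."""
    declared = {v["name"] for v in variables}
    for i, example in enumerate(example_inputs):
        missing = declared - example.keys()
        if missing:
            raise ValueError(f"example {i} is missing variables: {sorted(missing)}")

check_examples(
    [{"name": "topic", "type": "string"}, {"name": "format", "type": "enum"}],
    [{"topic": "AI safety", "format": "markdown"}],
)
```

Running a check like this in CI keeps documented examples from drifting out of sync with a prompt's declared variables.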
Tab Availability
| Feature | Available? | Notes |
|---|---|---|
| Overview | ✅ | Standard overview |
| Schema | ✅ | Enhanced with prompt editor |
| Data Profiling | ❌ | Not applicable |
| Quality Health | ✅ | Standard quality checks |
| Lineage | ✅ | Track prompt dependencies |
| Governance | ✅ | Full governance suite |
| Versions | ✅ | Track prompt version history |
Use Cases
Prompt Collections are ideal for:
- Prompt Libraries — Centralized, versioned libraries of tested prompts
- AI Agent Playbooks — Multi-turn conversation templates for chatbots and agents
- Evaluation Suites — Prompt collections with example I/O for systematic model evaluation
- Governance & Compliance — Auditable records of prompts used in production AI systems
- Team Collaboration — Shared prompt repositories with ownership, review, and change tracking