Help & Documentation
Quick Start Guide
- Create a Workspace: Set up your first workspace to organize your prompts and collaborate with team members.
- Configure Model Endpoints: Go to Settings → Model Endpoints to add your AI provider configurations (OpenAI, Anthropic, etc.).
- Create Your First Prompt: Use the rich editor to create prompts with variables and structural elements.
- Test and Iterate: Use the execution window to test your prompts and track performance.
Key Features
Rich Prompt Editor
- Variable insertion with reusable placeholders
- XML structural elements: <role>, <task>, <instructions>
- Real-time preview with syntax highlighting
- Auto-save and version control
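As an illustration, a structured prompt combining these elements might look like the following (the exact `{{...}}` placeholder style is an assumption here; use whatever variable syntax the editor inserts for you):

```xml
<role>You are a helpful customer-support assistant.</role>
<task>Answer the user's billing question.</task>
<instructions>
Be concise. If the account ID {{account_id}} is missing from the
conversation, ask the user for it before answering.
</instructions>
```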
Team Collaboration
- Workspace-based organization
- Role-based access control (Admin/User)
- Template approval workflows
- Shared prompt libraries
Model Endpoints
Configure AI model endpoints in Settings → Model Endpoints:
Pre-configured Providers:
- OpenAI: GPT-3.5, GPT-4, GPT-4o
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Google AI: Gemini Pro, Gemini Ultra
Custom Endpoints:
- Promptly (Local): Local models (Llama, Mistral) - Recommended
- vLLM: High-performance inference
- Custom APIs: Any OpenAI-compatible endpoint
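An OpenAI-compatible endpoint means the server accepts the standard chat-completions request shape at `/v1/chat/completions`. A minimal Python sketch of building such a request (the base URL and model name below are placeholders for your own deployment, not values shipped with the app):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request in the format any OpenAI-compatible endpoint accepts."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: a local vLLM-style server (placeholder URL and model name)
req = build_chat_request("http://localhost:8000", "mistral-7b-instruct", "Hello!")
```

Because every provider in the list above speaks this same shape, the only per-endpoint configuration is the base URL, model name, and credentials.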
Best Practices
Prompt Design
- Use clear, specific instructions
- Structure prompts with XML elements
- Test with different model endpoints
- Document your prompt's purpose
Organization
- Use descriptive titles and tags
- Group related prompts in categories
- Promote successful prompts to templates
- Track execution metrics and costs
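Execution cost is typically the token count multiplied by the provider's per-token rate, summed over input and output. A minimal sketch of that calculation (the rates shown are hypothetical placeholders, not real provider pricing):

```python
def execution_cost(prompt_tokens: int, completion_tokens: int,
                   input_rate_per_1k: float, output_rate_per_1k: float) -> float:
    """Estimate the cost of one execution from token counts and per-1K-token rates."""
    return (prompt_tokens / 1000) * input_rate_per_1k \
         + (completion_tokens / 1000) * output_rate_per_1k

# Hypothetical rates: $0.50 per 1K input tokens, $1.50 per 1K output tokens
cost = execution_cost(1200, 400, 0.50, 1.50)  # ≈ $1.20
```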
Need More Help?
If you need additional assistance: