Prompting guide
Grafana Assistant relies heavily on the context you provide to perform its tasks effectively. Some context is included automatically, while you provide other context by using the @ key or by clicking the @ icon in the sidebar chat box.
The most influential input is the text you type into the chat box.
Use mentions to add context
Using the @ mention feature helps the assistant provide more accurate and efficient responses. When you specify a data source (such as Prometheus, Loki, or Tempo) or other specific items, the assistant can:
- Focus on relevant metrics, logs, or traces
- Use the correct query language (PromQL, LogQL, etc.)
- Provide data source-specific best practices
- Save time by avoiding the need to determine the data source type
- Give more precise and actionable recommendations
If you know what you're looking for, be specific. Include technical details like metric names. Use the @ mention helper by typing it or by using the icon to add context.
For example, instead of asking "Show me error rates", specify "Show me error rates from @datasource", or click the @ icon and select it from the pop-up menu.
Ten tips for effective prompting
When prompting Grafana Assistant, we recommend following these ten guidelines:
1. Be clear and specific about what you want to achieve
Bad: Tell me about my logs
Good: Have you noticed an uptick of errors in the logs in @my-logs-datasource?
2. Break down complex tasks into smaller, manageable steps
Bad: Make a dashboard showing my CPU usage
Good: Find the CPU usage for my @datasource HTTP servers, then help me create a panel showing the 95th percentile
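For the second step, the assistant might propose a panel query roughly like the sketch below. The node_cpu_seconds_total metric and the job label value are assumptions used purely for illustration; your data source may expose CPU usage differently.

```promql
# Hypothetical 95th-percentile CPU usage across HTTP server instances,
# assuming a node_cpu_seconds_total counter with mode and job labels.
quantile(0.95,
  1 - avg by (instance) (
    rate(node_cpu_seconds_total{mode="idle", job="http-servers"}[5m])
  )
)
```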
3. Provide relevant context using the @ feature when referencing code or documentation
Bad: How do I fix this error?
Good: I'm seeing this error in @my-dashboard: "Failed to fetch data from Prometheus". Can you help me troubleshoot?
4. Remember that the assistant maintains context within a conversation
The assistant will assume you are referring to things previously mentioned in the same conversation.
Bad: What's wrong with it?
Good: Looking at the error rate panel we just created, why is it showing spikes every 5 minutes?
5. Start a new conversation when switching topics
Start fresh when you begin a new topic or when the current conversation has grown long.
Bad: (continuing a long conversation about dashboards) Now help me with my Kubernetes setup
Good: (starting new conversation) I need help setting up Kubernetes monitoring with @prometheus-datasource
6. Use natural language rather than trying to write commands
Bad: /query rate(http_requests_total[5m])
Good: Can you help me calculate the request rate over the last 5 minutes from @prometheus-datasource?
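With the natural-language request, the assistant typically builds and explains the query for you. The result might resemble the following sketch; the http_requests_total metric name is an assumption carried over from the example above.

```promql
# Hypothetical query the assistant might generate for "request rate
# over the last 5 minutes", summed across all series of the counter.
sum(rate(http_requests_total[5m]))
```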
7. Refine your request if the response isn’t quite what you need
If the response isn’t what you expected, rephrase your request or provide additional context.
Bad: That's not what I wanted
Good: I see you found the error rate, but I actually need to see the error count per endpoint. Can you modify the query to group by path?
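A refinement like this steers the assistant toward something along the lines of the sketch below. The status and path label names are assumptions about how the metric is instrumented.

```promql
# Hypothetical error count per endpoint over the last 5 minutes,
# grouped by the path label as requested in the follow-up.
sum by (path) (increase(http_requests_total{status=~"5.."}[5m]))
```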
8. Start broad, then refine based on the assistant’s response
Bad: Show me exactly how to fix my dashboard
Good: I'm having issues with my dashboard performance. What metrics should I look at first? ... Now that we've identified the high memory usage, how can I optimize the queries?
9. Politely correct the assistant and provide clarification when needed
If you receive an incorrect or incomplete response, politely correct the assistant and provide clarification.
Bad: That's wrong
Good: That's incorrect - the service is actually running on port 8080, not 9090. Could you adjust the query accordingly?
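After a correction like this, the assistant can adjust the label matcher in its query. A hedged sketch of what that change might look like, with a purely illustrative instance label value:

```promql
# Hypothetical adjustment after the correction: target the service
# on port 8080 rather than 9090 via the instance label.
rate(http_requests_total{instance="my-service:8080"}[5m])
```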
10. Include examples when explaining what you want to achieve
Bad: I want a dashboard like the one in production
Good: I want to create a dashboard similar to our production one that shows error rates, but I need it to include the new /api/v2 endpoints. Here's a screenshot of the current dashboard for reference.
What’s next?
- Check out the full list of features
- Browse the frequently asked questions
- Understand usage and limits