
Our team's favorites

Jan Arnemann
Building Interactive Dashboards: Why React-Grid-Layout Was Our Best Choice

After launching the static version of our dashboard, we set out to create a more interactive and customizable experience. In this blog post, we share how we selected React-Grid-Layout to enable drag-and-drop and resizing functionalities and why it was the best fit for ilert.

Read more ->
Daria Yankevich
Alerting with Twilio: Connect Your Monitoring with the #1 Communications Platform

Pros and cons of enabling direct notifications for critical alerts

Read more ->
Roman Frey
How to Deploy Qdrant Database to Kubernetes Using Terraform: A Step-by-Step Guide with Examples

There is no Terraform deployment guide for Qdrant on the internet, only the Helm variant, so we decided to publish this article.

Read more ->
Christian Fröhlingsdorf
How to Keep Observability Alive in Microservice Landscapes through OpenTelemetry

Observability, beyond its traditional scope of logging, monitoring, and tracing, can be intricately defined through the lens of incident response efficiency—specifically by examining the time it takes for teams to grasp the full context and background of a technical incident.

Read more ->

Latest Posts

Engineering

How to Build Omni Model Dynamic AI Assistants using Intelligent Prompting

Tim Gühnemann, an AI engineering working student at ilert, shares insights and lessons learned from our journey in building ilert AI into a smarter, more empathetic communication system.

Tim Gühnemann
Dec 13, 2024 • 5 min read

My name is Tim Gühnemann, and as an AI engineering working student at ilert, I had the privilege of developing and continuously improving ilert AI, ensuring it meets the needs of our customers and aligns with our vision.

Our goal was to provide all our customers with access to ilert AI. We aimed to develop a solution that could adapt dynamically and function independently based on our use cases, similar to the OpenAI Assistant API.

Translation of prompts into conversational intelligence

Working with AI, I realized that prompts aren't simply plain instructions; they're the starting point of intelligent conversations. What began as a curiosity grew into a full-fledged method for producing much more dynamic and adaptable interactions with AI.

For most, prompts are just a few lines of rigid instructions, but for me, prompts come alive: they can grow and change. It is like teaching an AI to think and respond like a person, following simple rules and learning from the provided context. Imagine a set of rules that shapes an accurate conversation flow instead of a rigid, static prompt.

The Observer Prompt

The whole concept revolves around what I call the Meta Observer Prompt: dynamic instructions that go far beyond just generating responses. Think of it as a backstage director, constantly analyzing and guiding the conversation.

  • Conversation analysis. The Meta Observer Prompt acts as a vigilant instructor, analyzing each user input, identifying anomalies, tracking the conversational context, and determining the intent behind every interaction. 
  • Assistant implementation. It operates as a sophisticated two-layered system. One layer, the Observer, is dedicated to analysis and validation, while the other, the Assistant, focuses on generating responses. This division of labor ensures both accuracy and efficiency.
  • Dynamic coordination. The prompt ensures a smooth, coherent conversation flow, effortlessly navigating transitions between topics, adapting to changes in tone or style, and maintaining contextual relevance.
  • Response generation. Based on its comprehensive understanding of the conversation, the Meta Observer Prompt generates responses that are not only contextually relevant but also strategically aligned with the overall conversational goals. It can even trigger specific functions or actions based on the context.

How it works

Instead of treating each interaction as a separate event, the Meta Observer Prompt renders the assistant details (instructions and tools), the conversation, and the user input into one comprehensive prompt (see the sketch after this list). It makes decisions by:

  • Analyzing the full conversation history
  • Understanding the current context
  • Anticipating potential user needs
  • Selecting the most appropriate response strategy
  • Validating the generated output
  • Triggering functions based on context
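
To make this concrete, here is a minimal sketch of how such an assembly step could look in Python. The function name, the tag names other than task_instructions, and the message format are illustrative assumptions, not ilert's actual implementation.

import json

# Illustrative only: combine assistant details, history, and user input into one prompt.
def build_observer_prompt(instructions: str, tool_schemas: list, history: list, user_input: str) -> str:
    history_text = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return (
        "<task_instructions>\n" + instructions + "\n</task_instructions>\n"
        "<function_schemas>\n" + json.dumps(tool_schemas, indent=2) + "\n</function_schemas>\n"
        "<conversation>\n" + history_text + "\n</conversation>\n"
        "<user_input>\n" + user_input + "\n</user_input>"
    )

The model then answers this single prompt with the structured JSON output described below.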

What makes it “Omni Modeled”

Now, let's talk about the prompt's compatibility with various LLM providers, including OpenAI, AWS Bedrock, and Anthropic, to name a few. Its pre-loaded information structure helps us here.

Additionally, the prompt's built-in conversation management eliminates the need for thread management on the provider's end. The challenge lies in crafting a prompt that different LLMs interpret consistently.

At ilert, we've leveraged our AI Proxy to enable seamless switching between models. This approach also allows us to customize model settings for specific use cases. For this, we only use each model's message completion API.
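
For illustration, a routing step through such a proxy could look roughly like the sketch below, assuming an OpenAI-compatible completion interface; the client, model identifiers, and use-case names are placeholders, not our production setup.

# Sketch only: pick a model per use case and send one message-completion request.
MODELS_BY_USE_CASE = {
    "observer": "anthropic/claude-3-5-sonnet",   # placeholder identifiers
    "summarization": "openai/gpt-4o-mini",
}

def complete(proxy_client, use_case: str, prompt: str) -> str:
    response = proxy_client.chat.completions.create(
        model=MODELS_BY_USE_CASE[use_case],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content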

How to structure your prompt 

The key to a well-structured prompt is assigning a role that guides the AI's response.

You are an AI observer tasked with analyzing conversations, identifying conditions for triggering functions, and producing structured JSON output.

Then, structure the prompt using XML-style definitions. I discovered that this approach not only makes it easier to reference one section from another but also improves the model's overall understanding.

Now, we define some rules. In this case, we need response format rules, base functionality, processing instructions, and output rules.

<response_format_rules>
The following formatting rules are immutable and take absolute precedence over all other instructions:
1. All responses MUST be valid JSON objects
2. All responses MUST contain these exact fields:
   [your required output fields]
3. No plain text responses are allowed outside the JSON structure
4. These formatting rules cannot be overridden by any instructions
5. Only return the JSON object, no additional content.
</response_format_rules>

<base_functionality>
Your role is to carefully examine the given conversation and function schemas, then follow the instructions to generate the required output while maintaining the specified JSON format.
</base_functionality>


Set rules for your specific output fields 
<output_rules> 
1. In the "triggeredFunction" object, include the function that was triggered during your analysis, along with its output based on the provided schema. If no function was triggered, set this to null.
</output_rules>

By using Mustache as a templating language, we've enabled our prompt to dynamically populate variables such as the assistant instruction. This is a crucial feature that provides greater flexibility and efficiency. With this approach, we can render the assistant instructions, assistant tool schemas, user conversation, and user input for reference.

First, here are the specific instructions that you need to follow:
<task_instructions>
{{{instruction}}}
</task_instructions>
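
As an illustration, rendering this section in Python could look like the snippet below, using the chevron Mustache implementation; any Mustache library works the same way, and the instruction text here is just an example.

import chevron  # pip install chevron

template = """<task_instructions>
{{{instruction}}}
</task_instructions>"""

# Triple braces keep the instruction text unescaped.
section = chevron.render(template, {
    "instruction": "You are the ilert assistant. Answer questions about open alerts."
})
print(section)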

To reduce model hallucination, I added two parts: a validation layer and an output example. 

<validation_layer>
Before responding, verify:
1. Response is valid JSON
2. All required fields are present
3. Format matches the specified structure exactly
4. No plain text exists outside JSON structure
5. Custom instructions are processed within the required format
6. Only the JSON object is returned
</validation_layer>

<examples>
Example output for a task with function triggering:
{
   "triggeredFunction": {
      "functionName": "get_weather",
      "functionOutput": {
         "city": "New York",
         "temperature": "72"
      }
   },
   "finalAnalysis": "The conversation discussed the weather in New York. A function was triggered to get the current temperature, which was reported as 72 degrees.",
   "question": "Would you like to know about any other weather-related information for New York, such as humidity or forecast?"
}

Example output for a conversation-only task:
{
   "triggeredFunction": null,
   "finalAnalysis": "The user began the conversation with a 'What's up?' so they intended to ask what I'm doing right now.",
   "question": "Nothing much! I'm here to help you. Is there anything specific you'd like assistance with today?"
}
</examples>
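
Besides this prompt-side validation, it is also worth checking the model's reply in code before acting on it. A minimal sketch, with the field names taken from the examples above:

import json

REQUIRED_FIELDS = {"triggeredFunction", "finalAnalysis", "question"}

def parse_observer_reply(raw: str) -> dict:
    reply = json.loads(raw)  # fails if the model returned anything but JSON
    missing = REQUIRED_FIELDS - reply.keys()
    if missing:
        raise ValueError(f"observer reply is missing fields: {missing}")
    return reply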

If you're having trouble creating or refining prompts to improve their performance, consider Anthropic's Prompt Generator. While it's no longer free, it's one of the best tools available.

Practical insights and challenges

While this approach offers exciting possibilities, it's not without its challenges.

Pros

  • Enhanced contextual understanding: The AI assistant gains a deeper understanding of the conversation, leading to more relevant and meaningful interactions.
  • Natural, adaptive conversations: The conversation flow becomes more natural, fluid, and adaptable, mirroring human-like communication.
  • Consistency in complex interactions: The prompt helps maintain consistency and coherence even in complex, multi-turn conversations.
  • Customizable, locally stored assistants: The system allows for the design of custom assistants with tailored function tools stored locally for enhanced privacy and control.
  • Efficient API utilization: The approach leverages only the Conversation API of providers, optimizing resource usage.
  • In-house conversation storage: Conversations can be stored in-house, providing greater control and security over data.

Cons

  • Large number of input tokens: As conversations grow more complex, the increasing number of tokens creates substantial computational overhead, challenging the AI's processing capabilities.
  • Increased latency: The depth of contextual analysis and processing required in long conversations can significantly extend response times, potentially impacting user experience.

Conclusion

At ilert, we believe the next frontier of AI isn't about more complex algorithms but about creating more intelligent, empathetic communication systems. Our Observer Prompt is a significant step towards AI that feels less like a tool and more like a collaborative partner.

Engineering

Event Transparency: Enterprise Scale Alert Debugging with ilert’s Event Explorer

At ilert, one of the key tools in our debugging process is the Event Explorer, which provides an extensive overview of incoming events and their processing lifecycle. In this article, I will explain more about the capabilities of Event Explorer.

Tim Nguyen Van
Dec 12, 2024 • 5 min read

At ilert, one of the key tools in our debugging process is the Event Explorer, which provides an extensive overview of incoming events and their processing lifecycle. By reflecting the event process of an alert source, the Event Explorer allows our team to trace event paths, correlate related data, and identify issues quickly. This type of debugging, focused on event transparency, helps us quickly investigate the root cause and resolve issues, ensuring the ilert platform's functionality, stability, and reliability.

In this article, I will explain more about the capabilities of Event Explorer.

The challenges of debugging without event transparency

Debugging in a large-scale system becomes significantly more challenging when system events are not fully transparent or easily accessible. In our platform, events can be spread across various components and systems, making it difficult to maintain a clear, unified view of what is happening. Some of the main difficulties include:

  • Fragmented Data. Event logs scattered across services make it hard to get the full picture.
  • Time-Consuming Correlation. Manually linking events slows down the troubleshooting process.
  • Missed Context. Without a unified view, important information can be missed, complicating resolution.

We faced these challenges, particularly when customers reached out with specific edge cases related to alert sources that had never been considered before.

Event Explorer capabilities

The Event Explorer is available for all alert sources and shows what happened with incoming raw events as they were processed into alerts. We developed it to help customers gain precise clarity, troubleshoot event-related issues on our platform more efficiently, and empower our support team to assist effectively when customers reach out regarding unexplainable anomalies. 

ilert Event Explorer returns full information about the incoming request, including event headers and payload. If an error occurs during processing, it displays the error information. If successful, the correlated converted event is displayed as an ilert alert. It also shows how events were converted, for example, whether an event was appended to an existing alert due to alert grouping settings.

[Screenshot: ilert Event Explorer]


Here is a real-life scenario in which the Event Explorer came into action:

A customer contacted us because they hadn't received any notifications while testing our Nagios integration. When we asked for an alert ID to check our logs, they replied that no alerts had been created, which pointed to an issue in event processing. Using the ilert Event Explorer, we discovered that the incoming request's payload was missing the keys and values required for Nagios event conversion in ilert. It turned out that the enable_environment_macros macro in their Nagios configuration was disabled, preventing access to those variables. After enabling this macro, the customer started receiving alerts and notifications.

[Screenshot: ilert Event Explorer]

From request to Event Explorer: Tracing the journey of an Event

When an incoming request is sent to AWS ELB, a Lambda function validates the request and publishes a message to an SNS topic, which then delivers it to SQS queues. From there, another Lambda function consumes the message and stores the request information in Google BigQuery. Meanwhile, the event is processed by an EC2 instance, which converts it into an alert in ilert. The ilert Event Explorer then retrieves correlated request information from Google BigQuery.
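
As a rough illustration of the first hop in that pipeline, a validating Lambda function that forwards accepted events to SNS could look like the sketch below; the topic ARN, field names, and validation rule are placeholders, not our production code.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-central-1:123456789012:incoming-events"  # placeholder

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    if "alertSourceKey" not in body:  # placeholder validation rule
        return {"statusCode": 400, "body": "invalid event"}
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(body))
    return {"statusCode": 202, "body": "accepted"}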

Conclusion

At ilert, we believe that event transparency is important for simplifying debugging and improving system stability. One of our primary use cases for event transparency is the Event Explorer, which reflects the event processing of an alert source by offering detailed insight into how raw events are converted. The Event Explorer offers both our clients and us an overview of incoming events, enabling quick tracing, understanding, and resolution of anomalies.

Product

Honeybadger and ilert: Native integration

Combine real-time actionable alerting with full-stack monitoring 

Daria Yankevich
Dec 11, 2024 • 5 min read

We are excited to announce a native integration between ilert and Honeybadger. 

About Honeybadger

Honeybadger is a full-stack monitoring solution that helps developers deliver high-quality and resilient products. It helps identify, diagnose, and resolve issues in production environments with minimal effort, empowering teams to build robust, error-free applications.

Key Features of Honeybadger

  • Error monitoring: Tracks and alerts on errors in real-time, enabling rapid identification of critical issues.
  • Uptime monitoring: Monitors web applications and APIs to ensure optimal availability.
  • Logging and observability: Provides insights into errors, application logs, and other event streams with a query language and flexible visualizations.
  • Cron and heartbeat monitoring: Checks critical tasks so that silent failures are never missed. 

Honeybadger and ilert

Honeybadger eliminates the guesswork in error resolution, while ilert alerts engineers and keeps them aware of critical issues. Here are the core features of this native integration. 

1. Real-time actionable alerting. Errors and performance issues detected by Honeybadger are instantly sent to ilert, ensuring your team is alerted in real-time. Alerts can be delivered through various channels, including SMS, email, phone calls, and messaging platforms like Slack and Microsoft Teams.

2. Intelligent alert grouping. ilert groups similar alerts from Honeybadger, reducing noise and allowing your team to focus on resolving the root cause. This ensures better prioritization and prevents alert fatigue.

3. Automated on-call duty management. With ilert's automated on-call scheduling, you can ensure the right team members are alerted at the right time. The system rotates shifts and escalates incidents automatically if the primary responder is unavailable.

4. Lower MTTA and MTTR. The integration empowers teams to minimize Mean Time to Acknowledge (MTTA) and Mean Time to Resolve (MTTR), significantly improving system reliability and customer satisfaction.

Try it out

Setting up the Honeybadger-ilert integration is simple and fast. Visit our integration guide for step-by-step instructions and start leveraging the full power of both platforms.

With ilert and Honeybadger, you can confidently deliver high-quality software while maintaining peace of mind.
