ilert seamlessly connects with your tools using our pre-built integrations or via email. ilert integrates with monitoring, ticketing, chat, and collaboration tools.
ilert has transformed our incident management process. The platform is intuitive, reliable, and has greatly improved our team's response time.
ilert has helped Ingka significantly reduce both MTTR & MTTA over the last three years; the collaboration with the team at ilert is what makes the difference. ilert has been top-notch in addressing even the smallest needs from Ingka and has consistently delivered on the product roadmap. This has inspired the confidence of our consumers, making us a 'go-to' for all on-call management & status pages.
Karan Honavar
Engineering Manager at IKEA
ilert is a low-maintenance solution; it simply delivers [...] as a result, the mental load has gone.
Tim Dauer
VP Tech
We even recommend ilert to our own customers.
Maximilian Krieg
Leader Of Managed Network & Security
We are using ilert to fix our problems sooner than our customers are realizing them. ilert gives our engineering and operations teams the confidence that we will react in time.
Dr. Robert Zores
Chief Technology Officer
ilert has proven to be a reliable and stable solution. Support for the very minor issues that occurred within seven years has been outstanding, and more than 7,000 incidents have been handled via ilert.
Stefan Hierlmeier
Service Delivery Manager
The overall experience is actually absolutely great and I'm very happy that we decided to use this product and your services.
Timo Manuel Junge
Head Of Microsoft Systems & Services
The easy integration of alert sources and the reliability of the alerts convinced us. The app offers our employees an easy way to respond to incidents.
I studied web and app development at bib International College. At ilert, I'm constantly learning and growing by working on projects that improve our platform. I’d like to share my journey of joining the ilert team, the first challenges I’ve encountered, and some advice I’d offer to other aspiring junior developers or frontend developers taking their first steps in the tech industry.
How Do You Get Your First Job as a Frontend Developer?
It all started when my college teacher suggested that I consider an internship at ilert. I decided to apply and was lucky enough to be accepted. During my internship, I worked on developing a dashboard — a customizable page that allows clients to see team-related metrics and gain insights into different aspects of the incident management process. This hands-on experience was great for me because it was the first time I worked on such a big project in a real-world scenario and with many new technologies:
- React: A JavaScript library for building user interfaces that allowed us to create reusable UI components.
- TypeScript: A typed superset of JavaScript that adds static types, making code more robust and easier to debug.
- Material UI (MUI): A popular React UI framework that implements Google's Material Design and provides pre-built, customizable components.
- MobX: A state management library that simplifies managing and updating the state in React applications.
After my internship, I continued working on the dashboard as a working student and later as a junior developer, helping launch its first version for customers. Now, I fully own the dashboard feature (under the supervision of my senior colleagues, of course), overseeing its functionality and resolving issues. While it's a huge responsibility, I'm grateful that since day one I've had my own project that constantly evolves.
First Challenge
Now, a few words on challenges. One of the biggest issues I faced was implementing drag-and-drop and resizing features for the new dashboard version. Finding a suitable package was difficult, but after testing many options, I settled on react-grid-layout.
Progress was smooth at first until I got stuck with a nasty bug: the widgets started mysteriously snapping to the center of the screen. I combed through the code, trying everything I knew but couldn’t find the issue. After hours of frustration, I reached out to a teammate, who offered simple advice: ‘Follow the flow of the code.’ With that nudge, I soon discovered the culprit—I'd swapped numbers in the widget position calculations, pushing the widgets outside the dashboard boundaries and causing them to default back to the center.
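For context, here is a minimal react-grid-layout sketch (illustrative, not ilert's actual dashboard code). The x/y fields in each layout item are exactly the kind of values I had swapped: with only 12 columns, a row index used as a column easily lands outside the dashboard boundaries.

```tsx
// Minimal react-grid-layout sketch, assuming react-grid-layout and its
// typings are installed. Not ilert's dashboard code.
import GridLayout, { Layout } from "react-grid-layout";
import "react-grid-layout/css/styles.css";
import "react-resizable/css/styles.css";

// x is the column, y is the row. Swapping them in a position
// calculation can place a widget beyond the 12-column grid.
const layout: Layout[] = [
  { i: "mttr", x: 0, y: 0, w: 4, h: 2 },
  { i: "alerts", x: 4, y: 0, w: 4, h: 2 },
];

export function DashboardGrid() {
  return (
    <GridLayout layout={layout} cols={12} rowHeight={30} width={1200} isDraggable isResizable>
      {/* Each child's key must match a layout item's `i` */}
      <div key="mttr">MTTR widget</div>
      <div key="alerts">Alerts widget</div>
    </GridLayout>
  );
}
```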
Fixing this small error instantly solved the problem, and I gained some valuable insights. I realized that small tips can be incredibly helpful when you're stuck. It also showed me that spending too much time fixated on a bug can lead to tunnel vision, causing you to overlook simple mistakes. Sometimes, it's best to step away, rest, and revisit the issue the next day with fresh eyes.
What Should Junior Developers Focus on When Starting Their Careers?
Reflecting on my journey, I'd like to share some tips for those starting out as junior developers, frontend developers, or working students:
Combine React's component-based architecture with MUI's styled-components to create reusable and readable UI elements. For instance, we use an extensive collection of shared React components, such as tables and selects. We also use the MUI theme, which uses spacing, palette, and typography to ensure consistent styling across all components. These shared components and themes make coding much faster and ensure consistency across the app.
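As a minimal sketch of what that looks like (the theme values here are illustrative, not ilert's actual theme):

```tsx
// A shared MUI theme: spacing, palette, and typography defined once,
// picked up by every component under the ThemeProvider.
import { createTheme, ThemeProvider } from "@mui/material/styles";
import Button from "@mui/material/Button";

const theme = createTheme({
  spacing: 8, // theme.spacing(2) === "16px"
  palette: {
    primary: { main: "#1a73e8" },
  },
  typography: {
    fontFamily: "Inter, sans-serif",
    button: { textTransform: "none" },
  },
});

export function App() {
  return (
    <ThemeProvider theme={theme}>
      {/* sx={{ m: 2 }} resolves through the shared spacing scale */}
      <Button variant="contained" sx={{ m: 2 }}>
        Create alert
      </Button>
    </ThemeProvider>
  );
}
```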
Ask questions wisely! Of course, asking questions is essential, but if you're stuck, try to solve problems yourself first. Research and try different solutions before asking for help. This approach not only improves your problem-solving skills but also demonstrates your commitment to learning. Take the initiative to learn and understand the technologies you're working with. Don't hesitate to explore documentation, tutorials, and other resources.
Use efficient state management in your React applications e.g. MobX. For example, we use the store pattern, maintaining shared stores primarily for our API calls and a page store to combine those API calls while adding UI logic. This approach simplifies state management and keeps your code organized.
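Here is a hedged sketch of that store pattern with MobX; the store names and the /api/alerts endpoint are made up for illustration:

```ts
import { makeAutoObservable, runInAction } from "mobx";

// Shared store: wraps an API call, reused across pages.
class AlertApiStore {
  alerts: string[] = [];
  loading = false;

  constructor() {
    makeAutoObservable(this);
  }

  async fetchAlerts() {
    this.loading = true;
    const res = await fetch("/api/alerts"); // hypothetical endpoint
    const data: string[] = await res.json();
    runInAction(() => {
      this.alerts = data;
      this.loading = false;
    });
  }
}

// Page store: combines shared stores and adds UI logic on top.
class DashboardPageStore {
  filter = "";

  constructor(private readonly api: AlertApiStore) {
    makeAutoObservable(this);
  }

  setFilter(value: string) {
    this.filter = value;
  }

  get visibleAlerts() {
    return this.api.alerts.filter((a) => a.includes(this.filter));
  }
}
```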
The transition to full-time work can be stressful. Stay resilient, and remember that challenges are opportunities for growth.
If you’re a new frontend developer or junior developer and ready to start your career, explore the ilert career page to see if there’s a role that suits you. Good luck on your journey!
ilert's CTO, Christian Fröhlingsdorf: "We are always welcoming engineers who are at the beginning of their careers and are eager to grow and evolve as developers. There is no magic pill or fast track to becoming an experienced engineer. In my experience, the only functional approach to growing is to have a real product area or features and take responsibility for it. It's one thing to work on something that may never reach production and another to receive customer feedback on a daily basis."
Read how we set up an experimental chatbot environment that allows us to switch LLMs dynamically and enhances the predictability of AI-assisted features' behavior within the ilert platform. The article includes a guide on how you can build something similar if you plan to add AI features with a chatbot interface to your product.
Why use an AI chatbot playground?
At ilert, we are integrating various AI features that utilize a chat interface, including on-call scheduling and AI-powered assistance for our technical customer support team. To streamline development and maximize efficiency, we have created an AI chatbot playground that allows us to switch between different large language models. This AI playground serves as a tool for instructing models to respond within specific parameters, enabling us to predict model behaviors, debug issues, and test new features and enhancements.
We recently showcased this playground at our meetup, allowing attendees to experience how we approach AI feature development. The AI chatbot playground served as an effective demonstration tool, enabling attendees to test it themselves and learn how to apply similar techniques in their own work. It has become essential to our AI feature development process, allowing us to iterate quickly and optimize our models based on real-world feedback and a variety of testing scenarios.
Tips on how to build your own AI chatbot playground
AI Playground Setup
To set up our AI playground, we’ll use Vite. It offers a quick step-by-step setup of a repository via CLI.
bun create vite
Following the prompts, you may now choose your project name, framework, and variant (language + transpiler).
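The prompts look roughly like this (the project name is a placeholder; we pick React and TypeScript because that is what we use below):

```
✔ Project name: … ai-playground
✔ Select a framework: › React
✔ Select a variant: › TypeScript
```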
Building the UI
The following is one possible approach to building your chatbot’s UI. Feel free to get creative and put your own spin on it.
An AI playground roughly follows this component structure:
- Headerbar
  - Logo
  - Heading
  - Action buttons
- Sidebar
  - Instructions
  - Model
- Chat
  - Messages
  - Input box
Components
Earlier in our setup, we chose React as our framework with TypeScript, while MUI serves as our component library.
Headerbar
The headerbar contains a logo, heading, and action buttons.
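A minimal headerbar sketch with MUI might look like this (labels and the logo path are placeholders):

```tsx
import { AppBar, Toolbar, Typography, Button, Box } from "@mui/material";

export function Headerbar() {
  return (
    <AppBar position="static">
      <Toolbar>
        <Box component="img" src="/logo.svg" alt="logo" sx={{ height: 32, mr: 2 }} />
        <Typography variant="h6" sx={{ flexGrow: 1 }}>
          AI Playground
        </Typography>
        {/* Action buttons, e.g. clearing the current conversation */}
        <Button color="inherit">Reset chat</Button>
      </Toolbar>
    </AppBar>
  );
}
```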
Sidebar
Our sidebar wraps the model controls and provides the user with the ability to choose and tune a model. Instructions should be a multiline text input field, while Model should be a dropdown with select functionality.
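One possible sidebar sketch, again with MUI (the model names are placeholders):

```tsx
import { Drawer, TextField, MenuItem, Stack } from "@mui/material";
import { useState } from "react";

export function Sidebar() {
  const [instructions, setInstructions] = useState("");
  const [model, setModel] = useState("gpt-4");

  return (
    <Drawer variant="permanent" anchor="left">
      <Stack spacing={2} sx={{ width: 280, p: 2 }}>
        {/* Free-text system instructions for the model */}
        <TextField
          label="Instructions"
          multiline
          minRows={4}
          value={instructions}
          onChange={(e) => setInstructions(e.target.value)}
        />
        {/* Dropdown to switch between models */}
        <TextField label="Model" select value={model} onChange={(e) => setModel(e.target.value)}>
          <MenuItem value="gpt-4">GPT-4</MenuItem>
          <MenuItem value="claude-3">Claude 3</MenuItem>
        </TextField>
      </Stack>
    </Drawer>
  );
}
```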
Here’s a snapshot of our finished product, highlighting the components mentioned above:
Going forward
Building your own AI playground might take some time, but the reward is great, given its endless use cases. You can choose your own model sources, whether cloud providers like OpenAI or self-hosted open-source models.
To further expand on this idea, you could create an export function or bundle the app for different platforms using tools like Electron for desktop applications. Other sensible features would be a theme mode toggle, an authorization input (e.g., API keys when using third-party providers), or a language switch.
Building an AI proxy for our AI features was one of the best decisions we made a year ago. In this article, we share why we built it and what challenges we faced.
Reasons why we created an AI proxy
Narrow, too narrow context
This journey began in 2023, when we first started implementing AI features in ilert. Back then, the capabilities of ChatGPT were impressive but still far from what is available now, at the end of 2024. At that time, generative AI was just beginning to prove its value in business applications, with early adopters exploring potential use cases in customer service, content creation, and data analysis. The initial version of ChatGPT could deliver meaningful insights and streamline some workflows, but it had limitations in handling complex, domain- and context-specific queries.
The context window was narrow: only 4,000 tokens. Our first AI-backed feature was automatic incident communication creation. In short, this feature helps customers automatically create messaging for their status pages to inform users about IT incidents. We didn't want to limit ourselves to an intelligently crafted announcement; we also wanted to include information on affected services and automatically identify those services and change their status from operational to outage. Our clients can have thousands of services in their accounts, of which only a few are affected by any given incident, and those had to be identified automatically. We also have clients in China and India, where the words used to name services can be long. So 4,000 tokens weren't enough.
Can I get my JSON back, please?
JSON-formatted data is commonly used for machine-to-machine communication. It ensures that data can be easily parsed, validated, and transmitted across our applications, maintaining consistency and reducing the likelihood of data handling errors. However, like many others, we encountered some challenges related to JSON handling with the early releases of GPT-3.
Those versions were designed primarily for conversational text generation rather than structured data output. This limitation meant that while ChatGPT could understand and generate JSON-like responses, it struggled with strict adherence to the JSON format. So, even if we managed to fit a query into 4,000 tokens, the early models' responses would occasionally omit closing braces or append unexpected text, which disrupted downstream processes that required strictly valid JSON. Simply put, the call was failing.
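In practice, this meant wrapping every model call in a validate-and-retry guard. A simplified sketch of the idea (our actual proxy logic is more involved):

```ts
// Parse the model's reply as strict JSON and retry with a firmer
// prompt if parsing fails. `callModel` stands in for any provider call.
async function completeJson(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  maxRetries = 2
): Promise<unknown> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(prompt);
    try {
      return JSON.parse(raw); // throws on missing braces, trailing text, ...
    } catch {
      // Re-prompt; with early GPT-3 models this branch ran more often
      // than we would have liked.
      prompt = `${prompt}\n\nReturn ONLY valid JSON.`;
    }
  }
  throw new Error("Model did not return valid JSON");
}
```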
No agents and no functions
GPT agents, as we know them now, can break down complex problems into actionable steps, prioritize tasks, and even chain responses together to achieve a goal. Without these capabilities, we had to rely on static prompt engineering, where each interaction with the AI was isolated and required precise prompting to achieve even moderately complex outcomes. This absence made it challenging to build features that required decision-making based on prior context or that needed to adapt dynamically to user inputs. Take AI-assisted on-call schedule creation as an example: we feed the model context-specific data and receive a feasible, flexible calendar in return.
Functions enable the model to go beyond simple text generation by directly executing specific, pre-defined actions within a system. They let the AI interact with external systems or databases, retrieving or updating data based on user input. In our case, functions allow the AI to interact directly with ilert's API, enabling tasks like retrieving ilert-related context data. This functionality transforms the AI from a passive responder into an active, task-oriented assistant that can autonomously handle complex workflows. Now, it's hard to believe, but two years ago there were no functions.
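For illustration, this is roughly what a function (tool) definition looks like in the OpenAI Chat Completions style today; the function name and parameters are hypothetical, not ilert's actual API surface:

```ts
// A tool definition the model can choose to call instead of replying
// with plain text. The proxy executes it and feeds the result back.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_on_call_schedule", // hypothetical helper
      description: "Fetch the current on-call schedule for a team",
      parameters: {
        type: "object",
        properties: {
          teamId: { type: "string", description: "ilert team identifier" },
        },
        required: ["teamId"],
      },
    },
  },
];
```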
Last but not least, we wanted to use different LLM providers, like AWS Bedrock and Azure-hosted GPT-4. As we have many customers in the EU, we couldn't limit ourselves to the US-based OpenAI API alone. The absence of native support for these operational requirements led us to the concept of an AI proxy: a middle layer to manage requests and responses across AI models and ensure each interaction meets ilert's performance standards.
Problems we resolve with the custom proxy
Logging, monitoring, and saving
Instead of sending AI requests straight to OpenAI or other model providers (and paying tolls every time), we funnel everything through our custom AI proxy. This way, whether you’re preparing a message on an incident for your clients, setting up schedules, or assembling a post-mortem document, our AI requests go through a one-stop shop where we handle all the behind-the-scenes stuff—logging, monitoring, and, yes, keeping an eye on those precious tokens and costs.
By tracking token usage and other cost metrics, the AI proxy lets us avoid unpleasant surprises on the billing side, and even better, we capture everything that goes in and out. This means we can use the data to fine-tune our models, helping the AI improve with every interaction. We also log performance data for different model versions, enabling us to assess each model's effectiveness, response times, and accuracy under real-world conditions. Additionally, we track customer feedback per use case and its underlying model, so we know whether a use case performs better or worse on different LLMs.
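A stripped-down sketch of that logging layer (illustrative; the production proxy records far more):

```ts
// Wrap every model call so tokens, latency, and the chosen provider
// are recorded, whatever the upstream API is.
interface ModelCallRecord {
  provider: "openai" | "bedrock" | "azure";
  model: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
}

async function withUsageLogging<
  T extends { usage?: { prompt_tokens: number; completion_tokens: number } }
>(
  provider: ModelCallRecord["provider"],
  model: string,
  call: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  const result = await call();
  const record: ModelCallRecord = {
    provider,
    model,
    promptTokens: result.usage?.prompt_tokens ?? 0,
    completionTokens: result.usage?.completion_tokens ?? 0,
    latencyMs: Date.now() - start,
  };
  console.log(JSON.stringify(record)); // shipped to our monitoring stack
  return result;
}
```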
A significant advantage of our AI proxy is that it enables us to switch between different large language models on the fly, which is critical for ilert's European customers who prioritize data localization. Many of our clients require on-premise models or cloud solutions that meet stringent data residency requirements, such as AWS Bedrock operating within specific regions like Frankfurt or Stockholm. By storing conversation threads and session histories locally, and only for the lifetime of the conversation, in our thread store, we can dynamically reroute requests between providers like Azure's GPT-4 and AWS Bedrock without losing context. Circuit breakers within the AI proxy monitor response times and model consistency, automatically rerouting traffic to maintain a seamless user experience and reliability when specific providers encounter high demand or slowdowns.
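A simplified sketch of the failover idea (not our production circuit breaker; provider names are placeholders):

```ts
// Try providers in order of preference; skip any whose circuit is
// open, and pass the locally stored thread so no context is lost.
type Provider = "azure-gpt4" | "bedrock-frankfurt";

const failures = new Map<Provider, number>();
const FAILURE_THRESHOLD = 3;

function isOpen(provider: Provider): boolean {
  return (failures.get(provider) ?? 0) >= FAILURE_THRESHOLD;
}

async function completeWithFailover(
  providers: Provider[],
  thread: string[], // conversation history from the thread store
  send: (p: Provider, thread: string[]) => Promise<string>
): Promise<string> {
  for (const provider of providers) {
    if (isOpen(provider)) continue; // circuit open: skip this provider
    try {
      const reply = await send(provider, thread);
      failures.set(provider, 0); // success closes the circuit again
      return reply;
    } catch {
      failures.set(provider, (failures.get(provider) ?? 0) + 1);
    }
  }
  throw new Error("All providers unavailable");
}
```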