AI/ML


Revisiting Data Science Model Development in the Era of LLMs

In the ever-evolving landscape of Data Science (DS) and Artificial Intelligence (AI), advanced models like GPT-3.5/4, Gemini, and LLaMA have revolutionized the approach to solving complex tasks. Over the past year, various implementations and research efforts have leveraged Large Language Models (LLMs) in innovative ways. For instance, an article on Unite AI [2] highlights healthcare applications that integrate LLMs into clinical workflows to assist in patient data gathering and analysis, significantly improving diagnostic accuracy and patient care. Advanced models like LLaVA-Med can...

Read

The Arrival of AI Agents

In recent years, we've seen significant advancements in Large Language Models (LLMs), including improvements in model size, accuracy, and contextual comprehension across various domains. Built on the architecture of Generative Pre-trained Transformers (GPT), these models offer enhanced data generation capabilities and apply across a broad range of domains. While LLMs have shown impressive intelligence, they still come with limitations, such as hallucinations (producing incorrect or fabricated responses) and an inability to handle proprietary or personal data without additional integration. To address these...

Read

Cursor AI: A Game-changer in Sahaj's Dev Process

In the constantly changing software development landscape, AI tools are proving to be game-changers. Sahaj's team was tasked with porting an Angular codebase to React efficiently, without sacrificing quality. During the migration, we set out to leverage generative AI tools to accelerate development and reduce manual effort, streamlining tasks and boosting productivity along the way. After trying a few options, including GitHub Copilot and Bolt, we landed on Cursor...

Read

Testing RAG Pipelines

In today's world, where Retrieval-Augmented Generation (RAG) pipelines are increasingly integrated into applications, it's more important than ever to thoroughly test these systems. Large Language Models (LLMs) are powerful but not perfect. They can hallucinate, generating information that isn't in the context, and sometimes provide answers that are irrelevant to the user's query. These issues, or "smells," can harm the reliability of your application, making it essential to test your LLM thoroughly before and after deployment. As open-source LLMs continue to surge in popularity,...
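As a minimal sketch of what such a test can look like, the snippet below flags answers whose content words do not appear in the retrieved context, a crude proxy for the "hallucination" smell. The function name, stop-word list, and example strings are all illustrative assumptions, not taken from the article; real RAG test suites typically use an LLM or an NLI model as the judge instead of word overlap.

```python
# Sketch: flag RAG answers that are poorly grounded in the retrieved context.
# A low score suggests the answer introduces information the context lacks.

def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's content words that also occur in the context."""
    stop = {"the", "a", "an", "is", "are", "was", "of", "to", "in", "and", "or"}
    answer_words = {w.lower().strip(".,!?") for w in answer.split()} - stop
    context_words = {w.lower().strip(".,!?") for w in context.split()} - stop
    if not answer_words:
        return 1.0
    return len(answer_words & context_words) / len(answer_words)

context = "The Eiffel Tower is 330 metres tall and stands in Paris."
grounded = "The Eiffel Tower stands in Paris."
hallucinated = "The Eiffel Tower got painted green in 2020."

# The grounded answer should score strictly higher than the hallucinated one.
assert grounding_score(grounded, context) > grounding_score(hallucinated, context)
```

Even a heuristic like this is useful as a cheap regression gate in CI before reaching for heavier LLM-as-judge evaluations.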

Read

Query Decomposition: Understanding the User's Perspective

While searching for information on the Internet, it's important to determine whether the results being fetched are relevant to what is being searched for. Most of us use search engines like Google, Bing, Yahoo!, or DuckDuckGo to find answers, but do these search engines always return the same results? Interestingly enough, they do not. Let's uncover the mystery behind why the same search term can yield different results across different platforms. For example, if the user is searching...

Read

Exploring LoRA - Part 2: Analyzing LoRA through its Implementation on an MLP

Source: ChatGPT+

Part 1 delves into the concept and necessity of fine-tuning pre-trained large models for specialized tasks. It introduces the conventional method of fine-tuning, where only the top layers of the model are adjusted, and highlights its limitations, particularly in terms of computational and storage demands. To address these challenges, the article shifts focus to Parameter-Efficient Fine-Tuning (PEFT) methods, specifically the use of adapter modules, as proposed by Houlsby and colleagues. These adapters are small, inserted layers that allow for...
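To give a flavour of the implementation the post analyzes, here is a minimal NumPy sketch of a LoRA-style linear layer: the pre-trained weight W stays frozen, and only a low-rank update B @ A (scaled by alpha / r) is learned. The dimensions, initializations, and function name are illustrative assumptions, not the article's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: update starts at 0

def lora_forward(x):
    # y = (W + (alpha / r) * B @ A) @ x, computed without materializing the sum
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the LoRA path contributes nothing,
# so the layer initially behaves exactly like the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B is the standard trick that makes fine-tuning start from the pre-trained model's behaviour rather than perturbing it from step one.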

Read

Exploring LoRA — Part 1: The Idea Behind Parameter Efficient Fine-Tuning and LoRA

Source: ChatGPT+

Pre-trained large language models undergo extensive training on vast data from the internet, resulting in exceptional performance across a broad spectrum of tasks. Nonetheless, in most real-world scenarios, there arises a necessity for the model to possess expertise in a particular, specialized domain. Numerous applications in the fields of natural language processing and computer vision rely on the adaptation of a single large-scale, pre-trained language model for multiple downstream applications. This adaptation process is typically achieved through a...
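The computational and storage savings that motivate parameter-efficient fine-tuning are easy to see with a back-of-the-envelope calculation. The numbers below are illustrative assumptions (a single 4096 x 4096 weight matrix and rank 8), not figures from the article.

```python
# Trainable parameters: full fine-tuning of one d x d weight matrix
# vs. a rank-r LoRA update parameterized as B (d x r) and A (r x d).
d, r = 4096, 8

full_params = d * d          # update every entry of W
lora_params = 2 * d * r      # only the two low-rank factors

print(full_params)                   # 16777216
print(lora_params)                   # 65536
print(full_params // lora_params)    # 256x fewer trainable parameters
```

Repeated over every attention and MLP weight in a large model, this ratio is what makes it practical to keep one frozen base model and store a tiny adapter per downstream task.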

Read

Soaring to New Heights: AI, Machine Learning, and Data Strategy in the Airline Industry

Written by Ravindrababu T and mckimmer

In the fiercely competitive airline industry, the race to deliver a frictionless passenger experience is on. Air travel, once merely about getting from A to B, is now a journey laden with potential touchpoints where airlines can either win loyalty or lose business. The key to success lies in leveraging machine learning, artificial intelligence (AI), and a robust data strategy to not only streamline this journey but also to unlock avenues for increased revenue. The application of...

Read

Data-Driven Customer Insights: A New Altitude for Airline Revenue Management and Retail Pricing

The aviation industry, with its dynamic pricing and complex service delivery, presents fertile ground for AI-driven data insights to revolutionize customer experience and revenue management. Understanding customer behavior through meticulous data mining is not just a competitive advantage; it's a strategic imperative that is increasingly separating the successful airlines from those struggling to achieve profitability. The crux of the matter lies in mining and interpreting the vast array of customer data at each touchpoint — from booking a...

Read

That state-of-the-art LLM embedding may not work for your use case — here is why

With LLM embedding-based applications, it's quite easy to capture and apply semantic meaning in machine learning algorithms. For example, you can feed text embeddings to a classifier to rate sentiment in user feedback, or use them for RAG-based semantic search. We understand words by mapping them along different dimensions: relative terms such as good, better, and best, or low-carb vs. high-carb food, for example. If we represent the word by rating these values in...
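The "dimensions" intuition can be sketched with a toy example: hand-rated feature axes standing in for real embedding dimensions, with cosine similarity as the comparison. The axes, words, and scores below are made up for illustration and are far simpler than an actual LLM embedding space.

```python
import numpy as np

# Toy word vectors on two hand-chosen axes: [positivity, intensity].
vecs = {
    "good":   np.array([0.6, 0.3]),
    "better": np.array([0.7, 0.6]),
    "best":   np.array([0.8, 0.9]),
    "bad":    np.array([-0.6, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for same direction, negative for opposed."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "good" sits closer to "better" than to "bad" in this toy space.
assert cosine(vecs["good"], vecs["better"]) > cosine(vecs["good"], vecs["bad"])
```

Real embeddings work the same way, just with hundreds or thousands of learned (and uninterpretable) dimensions instead of two hand-picked ones, which is exactly why a general-purpose embedding's notion of "similar" may not match your use case.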

Read