Using LangChain with Palico

Palico gives you complete flexibility in how you build your LLM application, so we intentionally provide few building blocks for implementing its core. This is where libraries like LangChain, which do ship with a rich set of building blocks, can help you get up and running quickly.

Common building blocks from LangChain

Here are some common building blocks developers generally use from LangChain when building their applications:

Connecting to LLM Models

Building an LLM application involves trying different LLM models to find one that works well for your use case. LangChain provides a common interface that you can use to communicate with any of them. Here are some examples:

import { initChatModel } from "langchain/chat_models/universal";

// OpenAI
const gpt4o = await initChatModel("gpt-4o", {
  modelProvider: "openai",
  temperature: 0,
});
await gpt4o.invoke("what's your name");

// Anthropic
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
  modelProvider: "anthropic",
  temperature: 0,
});
await claudeOpus.invoke("what's your name");

// Google Vertex
const gemini15 = await initChatModel("gemini-1.5-pro", {
  modelProvider: "google-vertexai",
  temperature: 0,
});

await gemini15.invoke("what's your name");
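
Note that each provider's client typically reads its credentials from environment variables (for example, OPENAI_API_KEY and ANTHROPIC_API_KEY), so make sure those are set before invoking a model.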

Prompt Templates

One of the core parts of building a performant LLM application is your prompt, and prompt generation logic tends to get complicated quickly. LangChain provides a Prompt Template utility to help you compose complex prompts. Here's an example:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

await promptTemplate.invoke({ topic: "cats" });
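
Because prompt templates and models are both runnables, you can also pipe one into the other. Here's a minimal sketch, reusing the gpt4o model from the earlier example:

// Chain the template into the model; invoking the chain formats
// the template and passes the resulting messages to the model
const chain = promptTemplate.pipe(gpt4o);
const response = await chain.invoke({ topic: "cats" });
console.log(response.content);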

Managing your RAG Layer

One way to improve your prompt is to provide it with more context, and Retrieval-Augmented Generation (RAG) is the standard approach for retrieving relevant context. RAG has an indexing step and a retrieval step. LangChain provides various libraries that help you ingest data into your Vector DB and retrieve the appropriate context when building your prompt.

Indexing

LangChain helps you populate your Vector Database with documents through various libraries. As sketched below, this involves:

  • Loading and splitting documents
  • Vectorizing document chunks
  • Publishing to your Vector DB
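
A minimal sketch of that pipeline, assuming OpenAI embeddings and LangChain's in-memory vector store (in production you would swap in your Vector DB of choice):

import { Document } from "@langchain/core/documents";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Load your documents (LangChain also ships loaders for PDFs, web pages, etc.)
const docs = [
  new Document({
    pageContent:
      "Task decomposition breaks a large task into smaller, manageable steps.",
  }),
];

// Split the documents into chunks sized for embedding
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const chunks = await splitter.splitDocuments(docs);

// Vectorize the chunks and publish them to the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  chunks,
  new OpenAIEmbeddings()
);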

Read more about indexing here.

Retrieval

LangChain provides a retriever utility class to help you fetch relevant content from your Vector Database. Here's an example, assuming vectorStore is the store populated during indexing:

const retriever = vectorStore.asRetriever({ k: 6, searchType: "similarity" });

const retrievedDocs = await retriever.invoke(
  "What are the approaches to task decomposition?"
);
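
From there, you can fold the retrieved documents into your prompt as context. Here's a minimal sketch, reusing the ChatPromptTemplate import from earlier (the contextual prompt wording below is an assumption, not something LangChain prescribes):

// Join the retrieved chunks into a single context string
const context = retrievedDocs.map((doc) => doc.pageContent).join("\n\n");

const ragPrompt = ChatPromptTemplate.fromMessages([
  ["system", "Answer the question using only the following context:\n\n{context}"],
  ["user", "{question}"],
]);

await ragPrompt.invoke({
  context,
  question: "What are the approaches to task decomposition?",
});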

Read more here.

Using LangChain with Palico Effectively

LangChain provides lots of utility libraries to help you with the core implementation of your LLM application, whereas Palico provides the overall architecture. You can build a powerful LLM application by combining the two. For example, you can build the core of your application with LangChain, then use Palico to preview changes, run experiments, add tracing, and publish your application behind a REST API.

Example of Using LangChain with Palico

Here's an example of a Palico Agent that uses LangChain to communicate with an OpenAI model.

import {
  ConversationContext,
  ConversationRequestContent,
  Agent,
  AgentResponse,
} from "@palico-ai/app";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

export default class ChatbotAgent implements Agent {
  async chat(
    content: ConversationRequestContent,
    context: ConversationContext
  ): Promise<AgentResponse> {
    const { userMessage } = content;
    if (!userMessage) throw new Error("User message is required");
    const model = new ChatOpenAI({
      model: "gpt-4o-mini",
      temperature: 0,
    });
    const response = await model.invoke([
      new HumanMessage({ content: userMessage }),
    ]);
    return {
      message: response.content as string,
    };
  }
}

Taking the Next Step

The nature of LLM development is that you have to continuously test different ideas (prompt generation logic, models, call chaining, etc.). This requires a tech stack that lets you:

  • Quickly make changes and preview them
  • Run, manage, and analyze experiments
  • Debug issues with comprehensive logging and tracing
  • Easily deploy to production
  • Integrate with frontend or other services

Palico provides an integrated tech stack that helps you iterate quickly on your LLM development. Learn more at https://www.palico.ai/