How To Bring Your Own LLM With LLM Open Connector
Introduction
One of the most powerful features of Agentforce is the ability for developers to extend its capabilities by integrating their own large language models (LLMs). The Model Builder facilitates connections to popular models like GPT, Claude, or Gemini hosted on AWS, GCP, or Azure. But what if you want to use a model hosted on a different platform? This is where the new LLM Open Connector comes into play.
In this article, we'll explore how to incorporate any LLM from your preferred provider into Salesforce using the LLM Open Connector. Let's dive in!
What Is the LLM Open Connector?
The LLM Open Connector is an API specification available on GitHub that model providers can follow to integrate their APIs with the Model Builder. The specification currently supports chat completions, which is a standard feature across various models and providers. With this standardization, developers can create conversational AI experiences by sending messages to an AI, receiving responses, and enabling natural dialogue.
As long as a provider's request and response structure adheres to the specification, you can connect its models directly. If a provider does not conform to the specification, you can create a middleware endpoint that translates the requests and responses into the LLM Open Connector format.
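To make the chat completions contract concrete, here is a sketch of what a request and response might look like. The field names follow the OpenAI-style chat completions shape that the LLM Open Connector specification is modeled on; the specific values (model name, token counts) are illustrative assumptions, not taken from the spec.

```javascript
// Illustrative request body sent to a /chat/completions endpoint.
// Field names follow the OpenAI-style contract; values are made up.
const sampleRequest = {
  model: "my-custom-model", // identifier configured in Model Builder (assumption)
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ],
  temperature: 0.7,
  max_tokens: 256
};

// Illustrative response body returned by the endpoint.
const sampleResponse = {
  id: "chatcmpl-123",
  object: "chat.completion",
  created: 1726000000,
  model: "my-custom-model",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Hi! How can I help?" },
      finish_reason: "stop"
    }
  ],
  usage: { prompt_tokens: 12, completion_tokens: 7, total_tokens: 19 }
};

// Convenience helper: pull the assistant's reply out of a response.
function assistantReply(response) {
  return response.choices[0].message.content;
}

console.log(assistantReply(sampleResponse)); // "Hi! How can I help?"
```

Because this shape is shared across many providers, any client that speaks it can talk to any compliant endpoint.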
Building a Middleware Endpoint
As an example, let's consider Hugging Face, a popular platform for hosting large language models. While it supports chat-style text generation, its API differs from the LLM Open Connector specification in two ways. The first difference is authentication: Hugging Face expects a bearer token in the Authorization header, while the LLM Open Connector uses key-based authentication. The second is that the request and response structures differ.
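One common pattern for bridging the authentication gap is a small header-translation step in the middleware. The sketch below assumes the connector sends its key in an `api-key` header and the provider expects a bearer token; both header names are assumptions to verify against the respective docs.

```javascript
// Sketch: translate the connector's key-based auth into a bearer token.
// Header names ("api-key", "Authorization") are assumptions for illustration.
function translateAuthHeaders(incomingHeaders) {
  const apiKey = incomingHeaders["api-key"]; // key sent by Model Builder
  if (!apiKey) {
    throw new Error("Missing api-key header");
  }
  // Forward the same secret in the Authorization header the provider expects.
  return { Authorization: `Bearer ${apiKey}` };
}

console.log(translateAuthHeaders({ "api-key": "abc123" }));
// { Authorization: 'Bearer abc123' }
```

In practice you might instead keep the provider token in an environment variable on the middleware and validate the incoming key separately; this sketch simply forwards the same secret.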
As a developer, you can create a middleware application, using any programming language or framework, that conforms to the LLM Open Connector specification. This middleware will interact with the Hugging Face API, transforming incoming requests and responses accordingly.
Here’s a brief overview of the process:
- Set Up the Middleware Application: This could be a simple Node.js application where you set up an endpoint.
- Define Request and Response Translations: Convert the incoming requests to the required format of Hugging Face and then transform back to the standard LLM Open Connector format.
- Deploy the Application: After the endpoint is set up, deploy the application (such as on Heroku) so it's accessible.
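The translation step at the heart of the middleware can be sketched as a pair of pure functions. This assumes a Hugging Face text-generation endpoint that accepts `{ inputs, parameters }` and returns `[{ generated_text }]`; actual payload shapes vary by task and endpoint, and the HTTP plumbing (e.g., an Express route that calls these functions) is omitted.

```javascript
// 1. Connector-style chat request -> assumed Hugging Face request shape.
function toHuggingFaceRequest(openConnectorReq) {
  // Flatten the chat messages into a single prompt string.
  const prompt = openConnectorReq.messages
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n");
  return {
    inputs: prompt,
    parameters: {
      temperature: openConnectorReq.temperature,
      max_new_tokens: openConnectorReq.max_tokens
    }
  };
}

// 2. Assumed Hugging Face response -> connector-style chat completion.
function toOpenConnectorResponse(hfResponse, model) {
  return {
    id: `chatcmpl-${Date.now()}`,
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [
      {
        index: 0,
        message: { role: "assistant", content: hfResponse[0].generated_text },
        finish_reason: "stop"
      }
    ]
  };
}
```

Your endpoint handler would then be a thin wrapper: translate the request, call the provider, translate the response back.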
Connecting Your Model to Model Builder
Once your middleware endpoint is ready, you can incorporate your custom model into the Model Builder in Salesforce.
- Access Model Builder: In the Model Builder UI, click on "Add Foundation Model."
- Enter Model Details: Enter a name for the connection (e.g., Mixtral), the URL of your middleware endpoint, and your authentication key. Optionally, set the model name and token limits.
- Test the Connection: The system will test the connection with a sample prompt.
- Configuration: After a successful connection, adjust capabilities and temperature settings, then test the configuration with a simple prompt.
This integrates your custom model into Salesforce and makes it available for further use.
Utilizing Your Custom Model
You can now use your custom model for various applications within Salesforce. For instance, you can test prompts directly in Prompt Builder by interacting with your Mixtral model, verifying that it behaves as expected.
Moreover, using Agent Builder, you can create custom actions that leverage this model, all while benefiting from the Einstein Trust Layer, which protects your data whenever your custom LLM is used.
To help developers get started, an open-source repository is available that contains the necessary code snippets and examples related to building endpoints and configurations. Additionally, a comprehensive blog post walks through all the steps required to integrate your custom model into Salesforce.
Conclusion
In summary, the LLM Open Connector gives developers a clear pathway to integrate the models of their choice into Salesforce, with a middleware endpoint bridging the gap for providers whose APIs don't natively conform to the specification. Once connected, your model benefits from the security features of the Einstein Trust Layer and the platform's native capabilities.
Keywords
- LLM Open Connector
- API Specification
- Middleware Application
- Hugging Face
- Salesforce
- Model Builder
- Custom Model
- Einstein Trust Layer
FAQ
Q1: What is the LLM Open Connector?
A1: The LLM Open Connector is an API specification that allows developers to integrate various LLMs into Salesforce's Model Builder.
Q2: How can I use a model that doesn’t follow the LLM specification?
A2: You can create a middleware application that converts requests and responses to meet the LLM Open Connector standards.
Q3: What types of models can I integrate using this connector?
A3: You can integrate models hosted on various platforms, as long as you follow the API specification.
Q4: Is there any security provision while using custom models?
A4: Yes, all interactions with custom models are protected by the Einstein Trust Layer, ensuring data security and compliance.
Q5: Where can I find resources for building a middleware or integrating custom models?
A5: An open-source repository is available that provides code samples, recipes, and a detailed blog post with configuration steps.