Platform Connection Model Management
Prerequisites: Understand Teammately-provided Models
Teammately offers several models as Test inference models. These models are available for use during the creation process of Generation Flows.
However, once you move to the production phase and start using the API, these models are no longer available as "Test inference" models. At this stage, you need to contract with other Model Providers and use the API keys you have registered with them. This page describes how to connect your AI platform and manage platform connections.
Platform Connections for Production Use
To use models that Teammately doesn't provide, you need to set up a "Platform Connection". Once you establish a connection to your preferred platform, you can use any model available on that platform.
Before walking through how to set up a Platform Connection, let's look at the basic structure of model access management.
Basic structure of model access management
What is a Model?
A Model is an AI model developed by a provider, including but not limited to generative AI models for text, image, and speech, embedding models, and other AI/ML models built for specific tasks.
A model is the smallest unit managed within this structure; for OpenAI's GPT series, for example, the models are gpt-4, gpt-4-32k, gpt-4-0613, gpt-4-32k-0613, and so on.
What is a Provider?
A Provider is the organization or company that develops and provides Generative AI models. Providers are responsible for the creation, maintenance, and updates of their models.
Examples: OpenAI, Meta, Google
What is a Platform?
A Platform is the cloud computing service or environment where Generative AI models are hosted and operated. Because API calls and connections are managed at the Platform level, API keys are also managed at this level. Once you connect to a Platform, you can access every provider and model hosted on it.
Examples: Azure OpenAI Service, Amazon Bedrock (AWS), Google Cloud, OpenAI API (as host)
On the platform side, you may have to enable each model individually. For example, Amazon Bedrock requires approval for each model before you can use it.
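The hierarchy above (API keys at the Platform level; providers and models nested beneath) can be sketched in code. This is an illustrative model only; the class and field names are hypothetical, not Teammately's actual data structures.

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    """An organization that develops models (e.g., Meta, Anthropic)."""
    name: str
    models: list[str] = field(default_factory=list)


@dataclass
class Platform:
    """A hosting environment; the API key lives here, not on each model."""
    name: str
    api_key: str  # placeholder value below; never hardcode real keys
    providers: list[Provider] = field(default_factory=list)

    def available_models(self) -> list[str]:
        # Once a platform is connected, every model under it is accessible.
        return [m for p in self.providers for m in p.models]


bedrock = Platform(
    name="Amazon Bedrock",
    api_key="example-key",
    providers=[
        Provider("Meta", ["llama-3-8b"]),
        Provider("Anthropic", ["claude-3"]),
    ],
)
print(bedrock.available_models())  # -> ['llama-3-8b', 'claude-3']
```

The key point the sketch captures: a single `api_key` on the platform grants access to all providers and models underneath it.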
Connect your AI platform
Follow these steps to connect your AI platform:
1. In the Settings menu, select the Models option from the sidebar.
2. Click the My Inference Platforms tab, then click the Connect my platform button.
3. In the popup, enter a connection name for internal identification.
4. Select your desired platform (e.g., OpenAI, Amazon Bedrock, Google AI Studio, etc.) and click Connect.
5. Enter the required API key for the selected platform.
   - For example, if you select OpenAI, see the OpenAI API key page for details on obtaining your API key.
6. Click Connect to complete the setup. After setup, you can edit or remove a platform connection.
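The information collected by the steps above amounts to a small configuration record: a name, a platform, and a key. As a rough sketch (field names are illustrative, not Teammately's actual schema), validating such a record before submitting might look like:

```python
# Hypothetical platform-connection record; field names are illustrative.
REQUIRED_FIELDS = ("connection_name", "platform", "api_key")


def missing_fields(config: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not config.get(f)]


draft = {"connection_name": "prod-openai", "platform": "OpenAI", "api_key": ""}
print(missing_fields(draft))  # -> ['api_key']
```

A record with all three fields filled in would pass with an empty list, mirroring the point in the flow where the Connect button succeeds.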
In one workspace, you can register several platform connections (such as OpenAI, Bedrock, etc.). This lets you switch between them as needed for different generation flows.
- For security, do not reuse API keys that are already in use by other applications or projects on your platform (e.g., OpenAI, Bedrock, etc.). Always generate a new API key for each integration.
- In the current version, deleting a platform connection may stop all generation flows that use it (including public ones). Switch those flows to a different platform connection before you delete it.
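The deletion caveat above amounts to a two-step procedure: repoint every dependent generation flow to a replacement connection, then remove the old one. A minimal sketch, with illustrative names rather than Teammately's actual API:

```python
def retire_connection(flows: dict, old: str, new: str, connections: set) -> None:
    """Switch flows off a connection before deleting it.

    `flows` maps flow name -> platform-connection name;
    `connections` is the set of registered connection names.
    """
    for flow, conn in flows.items():
        if conn == old:
            flows[flow] = new  # repoint flows first, so none break
    connections.discard(old)   # only now is deletion safe


flows = {"summarizer": "openai-prod", "translator": "bedrock-prod"}
connections = {"openai-prod", "bedrock-prod"}
retire_connection(flows, "openai-prod", "bedrock-prod", connections)
print(flows)  # -> {'summarizer': 'bedrock-prod', 'translator': 'bedrock-prod'}
```

Done in this order, no generation flow is ever left pointing at a deleted connection.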