
Introduction

General information

The OpenAI GPT MT integration is a paid service that is not covered by a standard XTM Cloud subscription.

XTM Cloud integrates with OpenAI GPT models through the OpenAI connector. Because the integration provides machine translation functionality, OpenAI is treated as an MT engine.

Everyone has access to this feature and can configure the OpenAI integration by storing their own OpenAI API key in the XTM Cloud global configuration.

OpenAI GPT MT operates on a pay-as-you-go model, with pricing based on actual usage (e.g., tokens consumed). To use the API, you need to create an account on the OpenAI API platform and generate an API key.
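For illustration only, the sketch below shows, outside XTM Cloud, how an API key generated on the OpenAI platform is used with the official openai Python SDK, and how the token counts that drive pay-as-you-go billing can be read from a response. The key, model and text are placeholders, not XTM Cloud internals.

  # Illustrative only: direct use of an OpenAI API key and reading token usage.
  from openai import OpenAI

  client = OpenAI(api_key="sk-...")  # key generated on the OpenAI API platform

  response = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[{"role": "user", "content": "Translate into German: The file could not be saved."}],
  )

  print(response.choices[0].message.content)   # the model's reply
  # Pay-as-you-go billing is based on these token counts:
  print(response.usage.prompt_tokens, response.usage.completion_tokens, response.usage.total_tokens)

Usage and billing for such requests can then be monitored in your account on the OpenAI API platform.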

Leveraging against MT

For now, OpenAI GPT MT only works on a segment-to-segment basis: to obtain an MT translation, you need to open a particular segment in edit mode in XTM Workbench. XTM Cloud sends text for MT matching only during an active XTM Workbench session. For this reason, in the XTM Cloud UI global configuration, the After analysis and After analysis and in XTM Workbench options of the Send text for matching parameter are permanently inactive.

In XTM Workbench, OpenAI GPT MT matches are labeled as OpenAI GPT MT.

Target languages

OpenAI GPT MT lets you decide which target languages are matched against MT within a single XTM Cloud project. You can enable or disable it for a specific language pair in the project’s (or project template’s) settings, in Project Editor → General info → Machine translation → Matching for language pair → Choose….

ChatGPT Plus

ChatGPT Plus is sometimes mistaken for yet another OpenAI GPT MT engine variant to purchase. In fact, ChatGPT Plus and the OpenAI API are two distinct platforms, and purchasing one does not affect the quota or access for the other.

ChatGPT Plus is an enhanced subscription plan for using ChatGPT, OpenAI's chat platform, directly. It provides benefits such as faster response times and priority access during peak usage. However, it does not include access to the OpenAI API.


Configuration

Global settings

  1. All configuration is performed in the XTM Cloud UI. In Configuration → Settings → Translation → Machine translation → MT engines, select OpenAI.

  2. The OpenAI GPT section will appear below.

  • Default setting → Select this option to specify whether this MT engine should be enabled automatically at XTM Cloud customer and project level. Project Managers then need to clear the checkbox in the customer’s or project’s settings to disable this MT engine at customer or project level, respectively.

  • Connection way → Select this option to specify how your XTM Cloud instance should be connected to the OpenAI GPT MT engine (both connection ways are illustrated in the sketch after this list of settings):

OpenAI

If you select this option, the API Key option is displayed. Use it to enter a unique API key for your OpenAI GPT integration. To obtain this key, sign up at OpenAI Platform; you can also monitor your usage and billing information there.

IMPORTANT!

With the direct OpenAI integration, you are limited to the GPT-4o mini model, which is the default model for clients using direct GPT integration.

Azure OpenAI

If you select this option, XTM Cloud connects to the OpenAI GPT MT engine via Microsoft Azure, and the following settings are displayed:

  • Azure API key → Enter the unique Azure API key for your OpenAI GPT MT service. You can find it in the Microsoft Azure portal.

  • Endpoint → The URL of your Azure OpenAI Studio.

  • Deployment name → The deployment name for the OpenAI GPT MT service, specified when the model was deployed in Azure OpenAI Studio.

IMPORTANT!

With the Azure OpenAI integration, you are not limited to GPT-4o mini: you can also use other models, such as o1, GPT-4o, GPT-4, etc. You choose the model by specifying it through the Deployment name.

  • Send text for matching → This setting specifies the stage at which content is to be sent to the MT engine. As was mentioned before, the After analysis and After analysis and in XTM Workbench options are permanently inactive since XTM Cloud sends the text for MT matching only during an active XTM Workbench session.

  • Use auto inline tag placement → This feature uses the XTM NLP framework to place inline tags automatically in machine-translated text. After a translation is received from the MT system, the auto-insert mechanism is applied to all matches received from the MT engine. As with the Send text for matching setting, this option is locked: because inline tags are not processed by GPT, Use auto inline tag placement is always enabled and cannot be changed.
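For illustration only, the sketch below contrasts the two connection ways using the official openai Python SDK, outside XTM Cloud: the direct OpenAI connection needs only the API key (and is tied to GPT-4o mini), while the Azure OpenAI connection needs the Azure API key, the Endpoint and the Deployment name. All values, including the API version, are placeholders or assumptions; XTM Cloud performs the equivalent configuration internally based on the settings above.

  # Hypothetical sketch of the two connection ways; not XTM Cloud code.
  from openai import OpenAI, AzureOpenAI

  # Connection way: OpenAI (direct) – only the API key is needed.
  direct_client = OpenAI(api_key="sk-...")      # API Key setting
  direct_model = "gpt-4o-mini"                  # the only model for direct GPT integration

  # Connection way: Azure OpenAI – key, endpoint and deployment name are needed.
  azure_client = AzureOpenAI(
      api_key="<azure-api-key>",                                   # Azure API key setting
      azure_endpoint="https://<your-resource>.openai.azure.com",   # Endpoint setting
      api_version="2024-02-01",                                    # assumed API version
  )
  azure_model = "<deployment-name>"             # Deployment name setting; selects the model

  # Either client is then queried the same way, e.g.:
  # client.chat.completions.create(model=..., messages=[...])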

Customer settings

XTM Cloud lets you set up OpenAI access credentials for a specific XTM Cloud customer that differ from those in the global configuration. The customer-level settings are available in Customers → Customer list → (select a relevant customer) → Settings → Machine translation.

One example of how this can be beneficial is better cost control. With separate API keys, translations created for different XTM Cloud customers are billed separately on the OpenAI side. This makes it possible to keep track of which customer uses more translations, which helps to manage expenses more effectively.

Similarly, you can set up different credentials for Azure OpenAI which, as described above, let you use different OpenAI GPT models if you want to achieve different translation results depending on the XTM Cloud customer.


Does XTM Cloud allow for training translation models?

Training models through XTM Cloud is not possible. The integration only allows XTM Cloud to query the model for translations; it cannot permanently change the model in any way.

Keep in mind that communication with the OpenAI model follows the same principles as with other MT engines. A special prompt is sent to the model, which makes the model respond with a translation into the set target language. This all happens at the programming level: XTM Cloud does not provide any ChatGPT-like tools in which a user can exchange messages with the model.
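To make this concrete, the sketch below shows what such a programmatic translation prompt might look like. It is a simplified, hypothetical example using the openai Python SDK; the actual prompt that XTM Cloud sends is internal and may differ.

  # Hypothetical translation prompt; the real prompt used by XTM Cloud is internal.
  from openai import OpenAI

  client = OpenAI(api_key="sk-...")  # placeholder key

  def translate(text: str, source_lang: str, target_lang: str) -> str:
      """Ask the model for a translation of a single segment."""
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": f"You are a translation engine. Translate the user's text "
                          f"from {source_lang} into {target_lang}. Return only the translation."},
              {"role": "user", "content": text},
          ],
      )
      return response.choices[0].message.content

  # Example: translate("Save your changes before closing.", "English", "German")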

The main goal of the integration is to use OpenAI's capabilities in translation and not necessarily integrate generative AI with the XTM Cloud system.


XTM AI SmartContext

XTM AI SmartContext is an additional option of OpenAI GPT MT.

When it is active, the MT engine looks for a Fuzzy Match in the translation memory resources for the source segment that is currently being translated. If a Fuzzy Match with the required score (at least 75%) is found, it is passed to the OpenAI GPT MT service as additional context for translation. XTM Cloud handles preparation of the prompt and all communication with the GPT model. In other words, XTM Cloud sends the highest available TM match from the translation memory to OpenAI GPT to improve the quality of the MT match.

This technique has been proven to boost translation quality significantly. A GPT model queried in this way treats the syntax and vocabulary of the Fuzzy Match as a pattern and produces a consistent translation of the source segment.
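As an illustration of the idea only (not XTM Cloud's actual prompt, which is internal), the sketch below extends a plain translation request with a TM Fuzzy Match as additional context, assuming a match with a score of at least 75% has already been found. Function and parameter names are hypothetical.

  # Hypothetical sketch of passing a TM fuzzy match as additional context.
  from openai import OpenAI

  client = OpenAI(api_key="sk-...")  # placeholder key

  def translate_with_smartcontext(source: str, fuzzy_source: str, fuzzy_target: str,
                                  source_lang: str, target_lang: str) -> str:
      """Translate a segment, giving the model a >=75% fuzzy match as a pattern to follow."""
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": f"Translate from {source_lang} into {target_lang}. "
                          f"Follow the style, syntax and terminology of the reference translation. "
                          f"Return only the translation."},
              {"role": "user",
               "content": f"Reference source: {fuzzy_source}\n"
                          f"Reference translation: {fuzzy_target}\n"
                          f"Text to translate: {source}"},
          ],
      )
      return response.choices[0].message.content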

The XTM AI SmartContext feature is available only in selected subscription plans. To discuss how you can access this feature, make sure to contact your dedicated XTM Sales or Customer Success representative.
