
Introduction

General information

The OpenAI GPT MT integration is a paid service that is not covered by a standard XTM Cloud subscription.

XTM Cloud integrates with OpenAI GPT models through the OpenAI connector. Because this integration provides machine translation functionality, OpenAI is treated as an MT engine.

Everyone has access to this feature and can configure the OpenAI integration by storing their own OpenAI API key in the XTM Cloud global configuration.

OpenAI GPT MT operates on a pay-as-you-go model, with pricing based on actual usage (e.g., tokens consumed). To use the API, you need to create an account on the OpenAI API platform and generate an API key.

Leveraging against MT

For now, OpenAI GPT MT only works on a segment-by-segment basis, which means that, to obtain an MT translation, you need to enter XTM Workbench and open a particular segment in edit mode. XTM Cloud only sends text for MT matching during an active XTM Workbench session. For this reason, in the Send text for matching parameter in the XTM Cloud global configuration, the After analysis and After analysis and in XTM Workbench options are permanently inactive.

In XTM Workbench, OpenAI GPT MT matches are labeled as OpenAI GPT MT.

Target languages

When OpenAI GPT MT is used, the user can decide which target languages are to be matched against MT in a single XTM Cloud project. You can enable or disable matching for a specific language pair in the project’s (or project template’s) settings (Project Editor → General info → Machine translation → Matching for language pair → Choose…).

ChatGPT Plus

You might consider ChatGPT Plus to be just another OpenAI GPT MT engine variant that is available for purchase. However, ChatGPT Plus and the OpenAI API are two different platforms, and purchasing one does not affect the quota or access for the other.

ChatGPT Plus is an enhanced subscription plan for OpenAI's chat platform, ChatGPT, which utilizes OpenAI models. It provides enhanced performance, such as faster response times and priority access during peak usage, when you interact with ChatGPT directly. However, ChatGPT Plus does not include access to the OpenAI API.


Configuration

Global settings

  1. In general, all settings are configured in the XTM Cloud UI. Select Configuration → Settings → Translation → Machine translation → MT engines → OpenAI.

  2. The OpenAI GPT section will be displayed below the list of MT engines.

  • Default setting → Select this option to specify whether this MT engine should automatically be enabled at XTM Cloud customer and/or project level. At each of these levels, Project Managers must clear the checkbox in the settings to disable this MT engine.

  • Connection way → Select this option to specify how your XTM Cloud instance should connect to the OpenAI GPT MT engine:

OpenAI

If you select the OpenAI option, the API Key setting is displayed.

You can then enter a unique API key for your OpenAI GPT integration. To obtain one, sign up on the OpenAI Platform and generate an API key there. You can also monitor your usage and billing information on that platform.

IMPORTANT!

With the OpenAI integration, you are limited to using the GPT-4o mini model, which is the default model for clients using direct GPT integration.
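
If you want to check outside XTM Cloud that the key you are about to store is valid, the following minimal sketch sends a single request to the public OpenAI Chat Completions endpoint with the GPT-4o mini model. It is not part of XTM Cloud; the requests library and the environment variable name are assumptions.

import os
import requests  # third-party HTTP library, assumed to be installed

# The key you intend to paste into the XTM Cloud API Key setting.
api_key = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Translate 'Good morning' into Spanish."}],
    },
    timeout=30,
)
response.raise_for_status()  # fails if the key is invalid or has no quota
print(response.json()["choices"][0]["message"]["content"])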

Azure OpenAI

If you select the Azure OpenAI option, your XTM Cloud instance will connect to the OpenAI GPT MT engine via Microsoft Azure. The following settings are then displayed:

  • Azure API key → Enter a unique Azure API key for your OpenAI GPT MT service. You can find it in the Microsoft Azure portal.

  • Endpoint → Enter the URL for your Azure OpenAI Studio.

  • Deployment name → Enter the deployment name that was specified when the model was deployed in Azure OpenAI Studio.

IMPORTANT!

With the Azure OpenAI integration, you are not limited to using GPT-4o mini: you can also use other models such as o1, GPT-4o, and GPT-4. You can choose the model by specifying it in the Deployment name setting, as illustrated in the sketch below.
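
To show how the three Azure settings relate to each other, here is a minimal sketch (not part of XTM Cloud; the resource name, deployment name and API version are placeholders) of the kind of Azure OpenAI request that the Azure API key, Endpoint and Deployment name values correspond to:

import os
import requests  # third-party HTTP library, assumed to be installed

azure_api_key = os.environ["AZURE_OPENAI_API_KEY"]  # Azure API key setting
endpoint = "https://my-resource.openai.azure.com"   # Endpoint setting (placeholder)
deployment_name = "gpt-4o-translation"              # Deployment name setting (placeholder)

# The deployment name selects the model; the request body does not name a model.
url = (
    f"{endpoint}/openai/deployments/{deployment_name}"
    "/chat/completions?api-version=2024-02-01"
)
response = requests.post(
    url,
    headers={"api-key": azure_api_key},
    json={"messages": [{"role": "user", "content": "Translate 'Good morning' into French."}]},
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])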

  • Send text for matching → This setting specifies the stage at which content is to be sent to the MT engine. As mentioned earlier, the After analysis and After analysis and in XTM Workbench options are permanently inactive since XTM Cloud only sends text for MT matching during an active XTM Workbench session.

  • Use auto inline tag placement → This feature uses the XTM NLP framework for automatic placement of inline tags in machine-translated text. After a translation is received from the MT system, the auto-insert mechanism is applied to all matches received from the MT engine. As with the Send text for matching setting, this option cannot be changed: inline tags are not processed by GPT, so the Use auto inline tag placement option is always enabled.

Customer settings

In addition to the OpenAI access credentials that are configured in the global configuration, different OpenAI access credentials can also be configured for different XTM Cloud customers. Example global configuration:

Contrast this with the configuration for a particular XTM Cloud customer (Customers → Customer list → (select a relevant customer) → Settings → Machine translation):

One way that this can be beneficial is to provide better control over costs. When separate API keys are used, translations created for different XTM Cloud customers are billed separately on the OpenAI side. This approach makes it possible to keep track of which customers use the OpenAI translation service more, which helps to manage expenses more effectively.

Similarly, to achieve different translation effects depending on the XTM Cloud customer involved, you can also configure different credentials for Azure OpenAI, enabling different OpenAI GPT models to be used, if required:


Can XTM Cloud be used to train translation models?

It is not possible to train translation models via XTM Cloud. Our integration only enables us to query a model to generate translations; we are unable to change the model permanently in any way.

Keep in mind that communication with the OpenAI model follows the same principles as for other MT engines. A special prompt is sent to the model, which makes the model respond by supplying a translation into a particular language. This is all done at programming level. In XTM Cloud itself, we do not provide any tools resembling ChatGPT, with which a user could exchange messages with a translation model.
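
As an illustration of that principle only (the actual prompt wording used by XTM Cloud is not published, so the prompt and helper function below are hypothetical), a single-segment translation query made at programming level might look like this:

import os
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def translate_segment(source_text: str, source_lang: str, target_lang: str) -> str:
    # Hypothetical prompt: ask the model to return only the translation.
    prompt = (
        f"Translate the following {source_lang} segment into {target_lang}. "
        f"Return only the translation.\n\n{source_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(translate_segment("The file has been saved.", "English", "German"))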

The main goal of the integration is to use OpenAI's capabilities in translation, not necessarily to integrate generative AI into the XTM Cloud system.


XTM AI SmartContext

XTM AI SmartContext is an additional OpenAI GPT MT option.

When this option is active, the OpenAI GPT MT engine looks for a Fuzzy Match in the translation memory resources for the source segment that is currently being translated. If a Fuzzy Match with the required score is found (at least 75% concordance), it is passed to the OpenAI GPT MT service as additional context for the translation. XTM Cloud handles the preparation of the prompt and all the communication with the GPT model. In other words, XTM Cloud sends the highest available TM match found in the translation memory to OpenAI GPT, to improve the quality of an MT match.

This technique has been proven to boost translation quality significantly. A GPT model that is queried in this way treats the syntax and vocabulary of the Fuzzy Match as a pattern and produces a consistent translation of the source segment.
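
As an illustration only (XTM Cloud prepares the real prompt internally and its wording is not published, so the wording below is a hypothetical sketch), passing a Fuzzy Match as additional context amounts to something like this:

def build_smartcontext_prompt(
    source_segment: str,
    fuzzy_source: str,
    fuzzy_target: str,
    target_lang: str,
) -> str:
    # Hypothetical prompt wording; only the mechanism (a TM match with at least
    # 75% concordance supplied as a translation pattern) comes from the text above.
    return (
        f"Translate the source segment into {target_lang}. "
        "Follow the syntax and vocabulary of the reference translation below.\n\n"
        f"Reference source: {fuzzy_source}\n"
        f"Reference translation: {fuzzy_target}\n\n"
        f"Source segment: {source_segment}\n"
        "Translation:"
    )

print(
    build_smartcontext_prompt(
        source_segment="Click Save to store your changes.",
        fuzzy_source="Click Save to store the changes.",
        fuzzy_target="Klicken Sie auf Speichern, um die Änderungen zu speichern.",
        target_lang="German",
    )
)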

The XTM AI SmartContext feature is only available in selected subscription plans. To discuss how you can access this feature, contact your dedicated XTM Sales or Customer Success representative.
