LQA report – most common issues & troubleshooting

Introduction

This article helps troubleshoot issues involving the generation of LQA reports.

Before reading the article below, make sure you have a good understanding of the mechanism and the prerequisites for generating the Linguist (evaluative) LQA report, which are described in detail in this dedicated article: Two types of LQA report - description and use case!

Issue description & Solution

Sometimes the LQA score shown in the UI and the score in the generated report differ by 0.01%.

Solution: This is a known issue, but there are currently no plans to address it.

The LQA report does not generate at all

Solution: Such a situation might occur for several reasons:

  1. Ensure that the Yes, save results in user record LQA option is enabled in the Workflow editor, for the step in which LQA evaluation is to be performed.

  2. Ensure that the step preceding LQA is fully completed by the assigned linguist (LQA-related segments must be manually confirmed so that they have green status in XTM Workbench).

  3. Ensure that the LQA step itself is fully completed by the assigned linguist (LQA-related segments must be manually confirmed so that they have green status in XTM Workbench).

  4. Ensure that both linguists (Translator and LQA reviewer) remain assigned to the relevant steps after the LQA step is finished. If either of them is missing at that time, the LQA report will not be generated.

IMPORTANT!

If all the cases described above have been verified, but the issue still occurs, create a support ticket for the XTM International Support team and provide all the details needed for investigation.

The LQA report is generated, but contains no errors, and its score is 100%, despite LQA errors having been added in the LQA step and the segments having been confirmed

Solution: Such a situation can occur for several reasons:

  1. Ensure that the LQA reviewer adds errors to segments that were modified by the linguist in the step preceding LQA. LQA errors added to segments that were not changed in the previous step will not be displayed in the report, since they are outside the evaluation scope.

  2. Ensure that the linguist actually assigned to the task opened XTM Workbench and modified the segments. It can happen that, although the generated LQA report bears the first linguist's name (the linguist to whom the task was assigned), the LQA errors created in the LQA step were allocated to the statistics saved for a second linguist, so they were not included in the LQA report evaluating the first linguist. In other words, the fact that another linguist was also working on this project skewed the statistics, so LQA errors were calculated incorrectly.

For the LQA report and errors to be generated correctly, XTM Workbench must be opened by the correct linguist: the one to whom the step has been assigned. Otherwise, LQA results will not be calculated correctly, as shown in the case above.


IMPORTANT!

If all the cases described above have been verified, but the issue still occurs, create a support ticket for the XTM International Support team and provide all the details needed for investigation.