
Introduction

This article helps troubleshoot issues with the generation of LQA reports.

Before consulting this article, please make sure you are familiar with the mechanism and the prerequisites for generating the Linguist (evaluative) LQA report, which are described in detail in a dedicated article: Two types of LQA report - description and use case.


Issue description & Solution

Sometimes the LQA score in the UI and the score in the generated report differ by 0.01%

Solution: This is a known behavior, but there is currently no plan to address it.
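XTM has not documented the exact cause, but discrepancies of this size are typical of two components rounding the same underlying score at different points in the calculation. The sketch below is purely illustrative; the penalty values and the rounding scheme are assumptions, not XTM's actual implementation:

```python
# Hypothetical per-error penalty points deducted from a 100% score.
deductions = [1 / 3, 1 / 3, 1 / 3]

# Variant A: sum the raw penalties, then round the total once.
score_a = round(100 - sum(deductions), 2)                       # 99.0

# Variant B: round each penalty first, then sum the rounded values.
score_b = round(100 - sum(round(d, 2) for d in deductions), 2)  # 99.01

# The two rounding orders disagree by exactly 0.01 percentage points.
print(score_a, score_b)
```

Either order is a defensible design choice on its own; a 0.01% mismatch only appears when two components pick different orders, which is why it is cosmetic rather than a data error.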

The LQA report does not generate at all

Solution: Such a situation may happen in several cases:

1. Make sure that the LQA option Yes, save results in user record is enabled in the Workflow editor for the step in which the LQA evaluation is performed.

2. Make sure that the LQA-preceding step has been fully completed by the assigned linguist (the LQA-related segments have to be manually confirmed to the green status in Workbench).

3. Make sure that the LQA step itself has been fully completed by the assigned linguist (the LQA-related segments have to be manually confirmed to the green status in Workbench).

4. Make sure that both linguists (the Translator and the LQA reviewer) remain assigned to their respective steps after the LQA step is finished. If either of them is unassigned at that point, the LQA report will not be generated.
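The four checks above can be sketched as a simple pre-flight function. The data model is entirely hypothetical; field names such as save_results_in_user_record are illustrative and do not reflect XTM's internal structures:

```python
def lqa_report_blockers(workflow):
    """Return the reasons (if any) why the LQA report would not generate."""
    reasons = []

    # 1. The save-results option must be enabled for the LQA step.
    if not workflow["lqa_step"]["save_results_in_user_record"]:
        reasons.append('Enable "Yes, save results in user record" for the LQA step.')

    # 2. & 3. Both the preceding step and the LQA step must be fully
    # completed (all segments confirmed to green status in Workbench).
    for name in ("preceding_step", "lqa_step"):
        if any(s != "green" for s in workflow[name]["segment_statuses"]):
            reasons.append(f"Confirm all segments of the {name} to green status.")

    # 4. Both linguists must remain assigned after the LQA step finishes.
    if not (workflow["preceding_step"]["assignee"] and workflow["lqa_step"]["assignee"]):
        reasons.append("Keep both linguists assigned to their steps.")

    return reasons


# Example: everything is in order, so no blockers are reported.
ok = {
    "preceding_step": {"assignee": "translator1", "segment_statuses": ["green", "green"]},
    "lqa_step": {
        "assignee": "reviewer1",
        "segment_statuses": ["green", "green"],
        "save_results_in_user_record": True,
    },
}
print(lqa_report_blockers(ok))  # []
```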

IMPORTANT!

If all the above cases have been verified and the issue still occurs, please raise a formal ticket with the XTM Support team and provide all the details for investigation.

The LQA report is generated but contains no errors, and its score is 100%, even though LQA errors were added and the segments were confirmed in the LQA step

Solution: Such a situation may happen in a couple of cases:

1. Make sure that the LQA reviewer adds errors to segments that were modified by the linguist in the LQA-preceding step. If LQA errors are added to segments that were not touched in the previous step, they will not be displayed in the report, since they are outside the evaluation scope.

2. Make sure that it was the actual linguist assigned to the task who opened Workbench and modified the segments.

It can happen that the generated LQA report is labeled for the first linguist (the one assigned to the task), while the LQA errors added in the LQA step are recorded in the statistics saved for a second linguist. In that case, the errors are not displayed in the LQA report, because the report evaluates the first linguist.

In other words, the fact that another linguist was also working on the project corrupts the statistics and causes the LQA errors to be miscalculated.

For the LQA report and its errors to be generated correctly, Workbench has to be opened by the linguist who is actually assigned to the step. If there is any mix-up, the LQA score will not be calculated correctly, as in the case above.
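The scope rule described above can be sketched as a filter: an error only counts when its segment was last modified by the linguist the report evaluates. All names here are hypothetical and for illustration only:

```python
def errors_in_scope(lqa_errors, last_modified_by, evaluated_linguist):
    """Keep only the LQA errors whose segments were modified by the
    linguist that the report evaluates; the rest fall out of scope."""
    return [
        err for err in lqa_errors
        if last_modified_by.get(err["segment_id"]) == evaluated_linguist
    ]


# Segment 2 was modified by a different linguist, so the error on it
# drops out of the report, even though it was entered in Workbench.
errors = [{"segment_id": 1, "type": "Mistranslation"},
          {"segment_id": 2, "type": "Omission"}]
modified = {1: "translator1", 2: "translator2"}
print(errors_in_scope(errors, modified, "translator1"))
```

When every evaluated segment was touched by the wrong account, this filter returns an empty list, which matches the symptom of a 100% report with no errors inside.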


If all the above cases have been verified and the issue still occurs, please raise a formal ticket with the XTM Support team and provide all the details for investigation.
