Introduction
This article helps troubleshoot issues with the generation of LQA reports.
Before consulting this article, please make sure you are familiar with the mechanism and the prerequisites for generating the Linguist (evaluative) LQA report, which are described in detail in a dedicated article: Two types of LQA report - description and use case.
...
Issue description & Solution
Sometimes the LQA score shown in the UI and the score in the generated report differ by 0.01%
Solution: This is a known behavior, and there is currently no plan to address it.
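A discrepancy of exactly 0.01% is characteristic of a percentage being rounded at different points in a calculation. The sketch below is purely illustrative and is not XTM's actual formula or rounding logic; the word count, error categories, and penalty values are invented. It only shows how rounding intermediate values at different stages can shift the final score by 0.01.

```python
# Illustrative sketch only: the exact LQA formula and rounding points used by XTM
# are not documented here. All values below are hypothetical.

WORD_COUNT = 900                              # hypothetical evaluated word count
PENALTIES = {"Accuracy": 5, "Fluency": 5}     # hypothetical penalty points per category

# Path A: sum the raw penalties first, then round the deduction once.
total_deduction = round(sum(PENALTIES.values()) / WORD_COUNT * 100, 2)
score_a = 100 - total_deduction               # 100 - 1.11 = 98.89

# Path B: round each category's deduction before summing.
per_category = [round(p / WORD_COUNT * 100, 2) for p in PENALTIES.values()]
score_b = 100 - sum(per_category)             # 100 - (0.56 + 0.56) = 98.88

print(f"Path A score: {score_a:.2f}%")        # 98.89%
print(f"Path B score: {score_b:.2f}%")        # 98.88%
```

The same underlying data yields 98.89% in one path and 98.88% in the other, i.e. the 0.01% gap described above.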
The LQA report is not generated at all
Solution: Such a situation may happen in the following cases:
...
2. Please make sure that the step preceding the LQA step is fully completed by the assigned linguist (the LQA-related segments have to be manually confirmed to the green status in Workbench).
...
3. Please make sure that the LQA step itself is fully completed by the assigned linguist (the LQA-related segments have to be manually confirmed to the green status in Workbench).
4. Please make sure that both linguists (the Translator and the LQA reviewer) remain assigned to their respective steps after the LQA step is finished. If either of them is unassigned at that point, the LQA report will not be generated. (These preconditions are summarized in the conceptual sketch after this list.)
...
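For orientation, the cases above can be read as a set of preconditions that must all hold at the moment the LQA step is completed. The snippet below is a conceptual checklist only, not XTM's implementation or API; all class, field, and function names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical data model for illustration only - these names do not come from XTM;
# they simply mirror the preconditions listed above.
@dataclass
class WorkflowState:
    preceding_step_segments_confirmed: bool   # step before LQA fully confirmed (green)
    lqa_step_segments_confirmed: bool         # LQA step itself fully confirmed (green)
    translator_assigned: bool                 # Translator still assigned to their step
    lqa_reviewer_assigned: bool               # LQA reviewer still assigned to their step

def unmet_lqa_preconditions(state: WorkflowState) -> list[str]:
    """Return the unmet preconditions (an empty list means the report should generate)."""
    problems = []
    if not state.preceding_step_segments_confirmed:
        problems.append("Segments in the step preceding LQA are not confirmed to green status.")
    if not state.lqa_step_segments_confirmed:
        problems.append("Segments in the LQA step are not confirmed to green status.")
    if not (state.translator_assigned and state.lqa_reviewer_assigned):
        problems.append("Translator and/or LQA reviewer is no longer assigned to their step.")
    return problems

# Example: report blocked because the reviewer was unassigned after the LQA step.
print(unmet_lqa_preconditions(WorkflowState(True, True, True, False)))
```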
IMPORTANT!
If all the above cases have been verified and the issue still occurs, please raise a ticket with the XTM Support team and provide all relevant details for investigation.
The LQA report is generated but contains no errors, and its score is 100%, even though LQA errors were added in the LQA step and the segments were confirmed
Solution: Such a situation may happen in the following cases:
...