pctechguide.com


How neural machine translation systems based on AI work

Just a few weeks ago, Meta presented an artificial intelligence model capable of translating between 200 languages. This technology, named ‘No Language Left Behind’ (NLLB-200), is part of a project developed by Mark Zuckerberg’s company to strengthen its push into the metaverse.

Almost all the technology giants, with the exception of Apple and Google, are undertaking projects to position themselves in this burgeoning virtual universe. But there are other, more modest companies, some of them local, that began research efforts in this field long ago. For some years now, the company Incyta and the GRIAL research group of the Arts and Humanities Department of the Universitat Oberta de Catalunya (UOC) have been collaborating on a series of research and technology transfer projects related to neural machine translation. The objective of the research is to develop neural machine translation systems to be integrated into Incyta’s workflow.

This Barcelona-based language services company has been using machine translation systems for years to carry out post-editing. This workflow based on machine translation plus post-editing makes it possible to offer a more efficient and economical translation service, while maintaining the same level of quality, to its wide range of clients: the written press, publishing houses, public administration, universities, etc.

Until a few years ago, machine translation systems offered sufficient quality only for closely related language pairs, such as Spanish-Catalan or Spanish-French. For somewhat more distant pairs, such as Spanish-English, the quality of machine translation was not sufficient: it was more efficient to translate the document manually from scratch.

The emergence of today’s neural machine translation systems has made it possible to obtain outstanding quality even for very distant language pairs, such as Chinese-Spanish. The appearance of these systems has constituted a true revolution in the world of professional translation, since they open the door to applying the machine translation plus post-editing workflow to most translation jobs.

Rule-based machine translation and corpus-based machine translation

But to understand what this technological revolution is all about, it is worth remembering the two main paradigms of machine translation: rule-based machine translation and corpus-based machine translation. In the first paradigm, the rule-based paradigm, machine translation systems are developed by computer engineers and linguists who write programs, dictionaries and rules to translate a sentence in a source language into a sentence in the target language.

The development of these systems usually involves many months of work by teams of several people. Among the rule-based systems, syntactic transfer systems can be highlighted. In these systems, the sentence in the source language is syntactically parsed to automatically obtain a parse tree. This parse tree, which can be deep or shallow, is transferred to an equivalent tree in the target language using a set of rules.

Once this syntactic tree is obtained in the target language, the words are translated using bilingual dictionaries and the translated words are inflected to obtain a correct sentence in the target language. This paradigm has worked very well for closely related languages with similar syntactic structures. There are excellent systems using this methodology that are still in use for language pairs such as Spanish and Catalan.
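The transfer idea above can be sketched in a few lines of code. This is a toy illustration only: the dictionary, the word classes and the single reordering rule (Spanish places adjectives after the noun, English before it) are invented for demonstration, and a real rule-based system would work on a full parse tree rather than a flat word list.

```python
# Toy rule-based (transfer) translator for a tiny Spanish -> English fragment:
# one structural transfer rule followed by dictionary lookup.

DICTIONARY = {
    "el": "the", "gato": "cat", "negro": "black",
    "come": "eats", "pescado": "fish",
}

# Word classes used by the transfer rule (invented for this example).
NOUNS = {"gato", "pescado"}
ADJECTIVES = {"negro"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    # Transfer rule: Spanish NOUN ADJ becomes English ADJ NOUN.
    i = 0
    while i < len(words) - 1:
        if words[i] in NOUNS and words[i + 1] in ADJECTIVES:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2
        else:
            i += 1
    # Lexical transfer: look each word up in the bilingual dictionary.
    return " ".join(DICTIONARY.get(w, w) for w in words)

print(translate("el gato negro come pescado"))  # -> the black cat eats fish
```

Even this toy shows why the paradigm suits similar languages: the closer the two syntactic structures, the fewer transfer rules are needed.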

In the second paradigm, corpus-based systems, systems are not developed, but trained. That is, the systems learn to translate from texts in the source language and in the target language. Parallel corpora, i.e., sets of segments or sentences in one language with their translation equivalents in another language, are normally used to train these systems.
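A parallel corpus is, at its simplest, a list of aligned segment pairs. The sketch below (with invented example sentences; real corpora hold millions of pairs) shows the structure, and how the target side alone already constitutes a monolingual corpus:

```python
# Minimal representation of a parallel corpus: aligned (source, target) segments.
parallel_corpus = [
    ("El gato duerme.", "The cat sleeps."),
    ("Me gusta el café.", "I like coffee."),
    ("La casa es blanca.", "The house is white."),
]

# The two sides of the corpus, used to train a translation system.
source_side = [src for src, _ in parallel_corpus]
target_side = [tgt for _, tgt in parallel_corpus]

# The target side on its own is a monolingual corpus of the target language.
monolingual_target_corpus = target_side
```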

[Figure: Chronology of machine translation]

The first corpus-based systems are statistical machine translation systems, which burst onto the market around 2005. These systems are based on the calculation of two probabilities: the probability that a given sentence in the target language is the translation of a sentence in the source language; and the probability that a given sentence in the target language is a correct sentence in that language. The first probability can be calculated from the statistics obtained from the parallel corpus; while the second probability is calculated from the statistics obtained in a monolingual corpus of the target language. This monolingual corpus can be obtained from the target language part of the parallel corpus.
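The combination of these two probabilities is often called the noisy-channel model: among candidate target sentences, the system picks the one maximizing P(source | target) × P(target), where the first factor comes from the parallel corpus and the second from the monolingual target corpus. The sketch below illustrates the scoring step only, with invented probabilities; a real system would estimate them from corpus statistics and search over a huge candidate space.

```python
import math

# Noisy-channel scoring for two candidate translations of a Spanish source
# sentence ("la casa es blanca"). All probabilities are invented for
# illustration, not estimated from a real corpus.

translation_model = {   # P(source | target), estimated from a parallel corpus
    "the house is white": 0.20,
    "the white house is": 0.25,
}
language_model = {      # P(target), estimated from a monolingual target corpus
    "the house is white": 0.30,
    "the white house is": 0.01,
}

def score(candidate: str) -> float:
    # Log-probabilities are added instead of multiplying raw probabilities,
    # which avoids numerical underflow on long sentences.
    return math.log(translation_model[candidate]) + math.log(language_model[candidate])

best = max(translation_model, key=score)
print(best)  # -> the house is white
```

Note how the language model rescues the system here: the translation model slightly prefers the ungrammatical candidate, but the monolingual statistics penalize it heavily.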
