• Saleh@feddit.org · 7 hours ago (edited)

    Which are based on LLMs or other neural network models. Translation is one of the things language models are actually good at.

    See DeepL for example: https://en.wikipedia.org/wiki/DeepL_Translator

    The service uses a proprietary algorithm with convolutional neural networks (CNNs)[3] that have been trained on the Linguee database.[4][5]
    According to the developers, the service uses a newer, improved neural network architecture, which results in translations that sound more natural than those of competing services.
    The translation is said to be generated by a supercomputer that reaches 5.1 petaflops and is operated in Iceland on hydropower.[6][7]
    In general, CNNs are somewhat better suited to long, coherent word sequences, but competitors had so far not used them because of their weaknesses compared to recurrent neural networks.
    DeepL compensates for these weaknesses with supplemental techniques, some of which are publicly known.
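
    DeepL's own models and weights are proprietary, so there is nothing of theirs to run directly, but an open neural-MT model illustrates the same idea. A minimal sketch using a MarianMT model through Hugging Face transformers (the model name and example sentence are just assumptions for the demo):

        # Illustration only: DeepL is proprietary, so this uses an open
        # German-to-English MarianMT model instead of DeepL's own networks.
        from transformers import pipeline

        translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

        result = translator("Maschinelle Übersetzung ist inzwischen erstaunlich gut.")
        print(result[0]["translation_text"])  # e.g. "Machine translation is now amazingly good."

    Models like this are dedicated sequence-to-sequence translators rather than general-purpose chat LLMs, which is part of why they do comparatively well at this one task.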

    • fishy@lemmy.today · 2 hours ago

      As someone who’s played a few LLM-translated games, it is in fact not good at it. A lot of contextual hints get lost, and slang terms tend to confuse it. It does get close enough that a human who doesn’t speak or read the original language could easily finish the translation, or at least make it through the game, though.

    • Lena@gregtech.eu · 7 hours ago

      Yeah, I know they’re based on LLMs, but they’re more adapted to translation, right?