English | Russian |
be trained using reinforcement learning | обучаться по алгоритму обучения с подкреплением (i.e., AI agents are given a reward as they get closer and closer to their goal. This reward is used as a signal that they are doing the right thing, and that whatever they are doing is something to learn for the future. Alex_Odeychuk) |
generative pre-trained transformer | обученный генерации текста трансформер (MichaelBurov) |
generative pre-trained transformer | генеративный предобученный трансформер (Alex_Odeychuk) |
generative pre-trained transformer | предварительно обученный генеративный преобразователь (Generative pre-trained transformers (GPT) are a family of language models generally trained on a large corpus of text data to generate human-like text. They are built from several blocks of the transformer architecture and can be fine-tuned for various natural language processing tasks such as text generation, language translation, and text classification. The "pre-training" in the name refers to the initial training on a large text corpus, during which the model learns to predict the next word in a passage; this provides a solid foundation for performing well on downstream tasks with limited amounts of task-specific data. According to the researchers, LLMs demonstrate "state-of-the-art capabilities" in translation quality assessment at the system level. However, they emphasized that only GPT 3.5 and larger models are capable of achieving state-of-the-art accuracy when compared to human judgments. Those findings provide "a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations," they said. wikipedia.org Alexander Demidov) |
generative pre-trained transformer | трансформер, обученный для генерации текста (MichaelBurov) |
trained with reinforcement learning | обученный по методу обучения с подкреплением (Nature Alex_Odeychuk) |
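The "reinforcement learning" entries above describe a reward used as a signal that the agent is doing the right thing. A minimal sketch of that idea, under assumptions not in the glossary (a one-step task with two hypothetical actions, epsilon-greedy exploration, and a simple value-update rule), might look like:

```python
import random

def train_bandit(rewards, episodes=1000, lr=0.1, seed=0):
    """Learn a value estimate per action from reward signals alone.

    `rewards` maps each action to its (deterministic) reward; over many
    episodes the agent comes to prefer the action whose reward is highest,
    because the reward is the only learning signal it receives.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in rewards}  # current value estimate per action
    for _ in range(episodes):
        # epsilon-greedy: occasionally explore, mostly exploit the best-known action
        if rng.random() < 0.1:
            action = rng.choice(list(rewards))
        else:
            action = max(q, key=q.get)
        r = rewards[action]            # reward received from the environment
        q[action] += lr * (r - q[action])  # nudge the estimate toward the reward
    return q

# Hypothetical two-action task: "right" is the rewarded behaviour.
q = train_bandit({"left": 0.0, "right": 1.0})
best_action = max(q, key=q.get)
```

This is only an illustration of the reward-as-signal loop the entry describes, not the specific algorithm any particular system uses.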
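The GPT entries above note that "pre-training" means learning to predict the next word in a passage. As a toy illustration of that objective only (a bigram count model on a made-up corpus standing in for a real transformer, which the glossary does not provide), one could write:

```python
from collections import Counter, defaultdict

def train_next_word(corpus):
    """Count, for each word, which word most often follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Predict the most frequent next word seen after `word`."""
    return counts[word].most_common(1)[0][0]

# Hypothetical tiny corpus: "cat" follows "the" more often than "mat" does.
model = train_next_word("the cat sat on the mat the cat ran")
nxt = predict(model, "the")  # -> "cat"
```

A real GPT replaces the counting step with stacked transformer blocks trained on a large corpus, but the training target, predicting the next word, is the same one the entry describes.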