translations: update Italian (#2909)

Signed-off-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Riccardo Giovanetti 2024-08-28 02:13:34 +02:00 committed by GitHub
parent ca151f3519
commit e8d74d8bf4
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
2 changed files with 7 additions and 7 deletions


@@ -11,7 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
 ### Fixed
 - Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
-- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
+- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872) and [#2909](https://github.com/nomic-ai/gpt4all/pull/2909))
 - Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
 - Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
 - Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))


@@ -1352,7 +1352,7 @@ modello per iniziare</translation>
 <message>
 <location filename="../qml/LocalDocsSettings.qml" line="293"/>
 <source>Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.</source>
-<translation>Numero massimo di N corrispondenze migliori di frammenti di documento recuperati da aggiungere al contesto del prompt. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta.</translation>
+<translation>Il numero massimo di frammenti di documento recuperati, che presentano le migliori corrispondenze, da includere nel contesto del prompt. Numeri più alti aumentano la probabilità di ricevere risposte basate sui fatti, ma comportano anche una generazione più lenta.</translation>
 </message>
 </context>
 <context>
@@ -1708,7 +1708,7 @@ NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedi
 <location filename="../qml/ModelSettings.qml" line="469"/>
 <source>Only the most likely tokens up to a total probability of top_p can be chosen.
 NOTE: Prevents choosing highly unlikely tokens.</source>
-<translation>Possono essere scelti solo i token più probabili fino ad una probabilità totale di top_p.
+<translation>Solo i token più probabili, fino a un totale di probabilità di top_p, possono essere scelti.
 NOTA: impedisce la scelta di token altamente improbabili.</translation>
 </message>
 <message>
@@ -1724,7 +1724,7 @@ NOTA: impedisce la scelta di token altamente improbabili.</translation>
 <message>
 <location filename="../qml/ModelSettings.qml" line="514"/>
 <source>Sets the minimum relative probability for a token to be considered.</source>
-<translation>Imposta la probabilità relativa minima che un token venga considerato.</translation>
+<translation>Imposta la probabilità relativa minima affinché un token venga considerato.</translation>
 </message>
 <message>
 <location filename="../qml/ModelSettings.qml" line="550"/>
@@ -1734,12 +1734,12 @@ NOTA: impedisce la scelta di token altamente improbabili.</translation>
 <message>
 <location filename="../qml/ModelSettings.qml" line="551"/>
 <source>Size of selection pool for tokens.</source>
-<translation>Dimensione del pool di selezione per i token.</translation>
+<translation>Dimensione del lotto di selezione per i token.</translation>
 </message>
 <message>
 <location filename="../qml/ModelSettings.qml" line="562"/>
 <source>Only the top K most likely tokens will be chosen from.</source>
-<translation>Solo i token Top-K più probabili verranno scelti.</translation>
+<translation>Saranno scelti solo i primi K token più probabili.</translation>
 </message>
 <message>
 <location filename="../qml/ModelSettings.qml" line="597"/>
@@ -1765,7 +1765,7 @@ NOTA: impedisce la scelta di token altamente improbabili.</translation>
 <location filename="../qml/ModelSettings.qml" line="655"/>
 <source>Amount of prompt tokens to process at once.
 NOTE: Higher values can speed up reading prompts but will use more RAM.</source>
-<translation>Quantità di token del prompt da elaborare contemporaneamente.
+<translation>Numero di token del prompt da elaborare contemporaneamente.
 NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzeranno più RAM.</translation>
 </message>
 <message>