Pusat Publikasi Ilmiah

PPI
24 September 2025, 15:09

Exploring The Effectiveness of In-Context Methods in Human-Aligned Large Language Models Across Languages

By: itspublikasi
  • Ubaidillah Ariq Prathama, Institut Teknologi Bandung
  • Ayu Purwarianti, Institut Teknologi Bandung
  • Samuel Cahyawijaya, Cohere, United Kingdom

DOI: https://doi.org/10.12962/j24068535.v23i2.a1323

Abstract


Most past studies of in-context methods such as in-context learning (ICL), cross-lingual ICL (X-ICL), and in-context alignment (ICA) were conducted on older, unaligned large language models (LLMs). Modern human-aligned LLMs differ: they ship with chat-style prompt templates, are extensively aligned to human preferences, and cover many more languages. We re-examined these in-context techniques on two recent, human-aligned multilingual LLMs. Our study covered 20 languages from seven language families, spanning high-, mid-, and low-resource levels. We evaluated how well these methods generalize using two tasks: topic classification (SIB-200) and machine reading comprehension (Belebele). We found that using prompt templates significantly improves the performance of both ICL and X-ICL. Furthermore, ICA proves particularly effective for mid- and low-resource languages, improving their F1 score by up to 6.1%. For X-ICL, choosing a source language that is linguistically similar to the target language, rather than defaulting to English, can yield substantial gains, with improvements of up to 21.98%. Semantically similar ICL examples remain highly relevant for human-aligned LLMs, providing up to a 31.42% advantage over static examples, although this gain shrinks when a machine translation model is used to translate the query from the target language. These results collectively suggest that while modern human-aligned LLMs clearly benefit from in-context information, the extent of these gains depends heavily on careful prompt design, the language's resource level, the language pairing, and the overall complexity of the task.
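To illustrate the general idea behind chat-template ICL with semantically similar example selection described above, the following minimal Python sketch retrieves the demonstrations most similar to a query and formats them as chat-style turns. This is not the authors' actual pipeline; the embedding model, prompt wording, helper names, and toy example pool are assumptions made purely for illustration.

# Minimal sketch (illustrative only): semantically similar ICL examples
# formatted as chat-style messages for a human-aligned LLM.
from sentence_transformers import SentenceTransformer, util

# Toy labeled pool to draw in-context examples from (not from the paper).
EXAMPLE_POOL = [
    {"text": "The central bank raised interest rates again.", "label": "economy"},
    {"text": "The striker scored twice in the final match.", "label": "sports"},
    {"text": "A new vaccine trial showed promising results.", "label": "health"},
    {"text": "Parliament passed the revised budget bill.", "label": "politics"},
]

# Assumed multilingual sentence encoder; any embedding model would do.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
pool_embeddings = encoder.encode([ex["text"] for ex in EXAMPLE_POOL])

def build_icl_messages(query: str, k: int = 2) -> list[dict]:
    """Pick the k pool examples most similar to the query and lay them out
    as user/assistant turns so the prompt follows the model's chat template
    rather than a raw text prompt."""
    query_emb = encoder.encode([query])
    scores = util.cos_sim(query_emb, pool_embeddings)[0]
    top_k = scores.argsort(descending=True)[:k]

    messages = [{"role": "system",
                 "content": "Classify the topic of each sentence."}]
    for idx in top_k:
        ex = EXAMPLE_POOL[int(idx)]
        messages.append({"role": "user", "content": ex["text"]})
        messages.append({"role": "assistant", "content": ex["label"]})
    messages.append({"role": "user", "content": query})
    return messages

# The resulting messages can be passed to a chat-completion API or to a
# tokenizer's apply_chat_template method of a human-aligned multilingual LLM.
print(build_icl_messages("Stocks fell sharply after the inflation report."))

In this sketch the chat template is respected by expressing each demonstration as a user/assistant pair, which corresponds to the abstract's finding that prompt templates matter for ICL and X-ICL on aligned models.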

https://juti.if.its.ac.id/index.php/juti/article/view/1323
