Notes based on my experiences and reflections on my work, along with random bits of knowledge I find interesting.
A dive into the token efficiency of languages other than English in LLMs, exploring tokenization, model performance, and the implications for multilingual AI development.
Discover the reasons behind hallucinations in Large Language Models (LLMs) and the underlying mechanisms that contribute to this phenomenon.
A discussion of why LLMs work and the reasoning behind their behavior.
A callback to the early days of my web development journey, when I built my first website with WordPress.