LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)

25/09/2024

In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?

Let’s separate the genius from the guesswork in this insightful breakdown of AI’s creativity problem.
TL;DR
Can LLMs generalise without hallucinating? Is that even possible?
 
References
https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf
https://www.lamini.ai/blog/lamini-memory-tuning