In TDS Archive, by Rohit Patel: Understanding LLMs from Scratch Using Middle School Math. In this article, we talk about how LLMs work, from scratch — assuming only that you know how to add and multiply two numbers. The article… Oct 19, 2024
Fabio Matricardi: Smarter, not Bigger — 1 Million token context is not all you need! Open-source models at the test bench, to verify long-context information extraction. Are they really that good? Feb 28, 2024
Fabio Matricardi: How I Built a Chatbot that Crushed ChatGPT with Zero-Cost AI Tools. Challenge accepted! How I created a chatbot that surpassed the performance of the famous ChatGPT model using free and open-source AI tools… Feb 4, 2024
In Walmart Global Tech Blog, by Ishant Sahu: LLM Fine Tuning Series, 1st in series: In-Context Learning. Dec 5, 2023
Chris Kuo / Dr. Dataman: GenAI model evaluation metric — ROUGE. Among the most used evaluation metrics in text summarization. May 21, 2023
Kalia Barkai: Interpreting an NLP model with LIME and SHAP. Black-box models are gaining popularity in decision-making roles. Let’s ensure model interpretability keeps up! Mar 9, 2020
Michelle Yi (Yulle): Crunching the Numbers: How Quantization Makes Large Models Byte-sized. In this article, we continue our exploration of memory by looking at quantization and how it can unlock LLMs for a broader audience. Jul 11, 2023
Dasha Herrmannova, Ph.D.: Complete guide to running a GPU-accelerated LLM with WSL2. This is probably the easiest way to run an LLM for free on your PC. Jul 4, 2023