Abstracts: July 29, 2024

LI LYNA ZHANG: Thank you for having me.

HUIZINGA: So let’s start with a brief overview of your paper. Tell us about the issue your research addresses and why it matters.

ZHANG: OK, so this paper is about how to effectively extend the context window of large language models beyond 2 million tokens. Why is this important? Because enabling longer input contexts can improve LLM capabilities. Right now, some LLMs can only handle a limited context window of 4K tokens, which is about 10 pages of a book. With our method, we can push the LLM context window to over 2 million tokens. That means you can give all seven Harry Potter books to the LLM and ask any question about the story! Another important thing is that our method
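[The excerpt above does not describe the paper's specific mechanism. As background only, a common family of techniques for stretching a context window rescales rotary position embeddings (RoPE) so that far-away positions map back into the angle range the model saw during training. The sketch below illustrates that generic idea with a uniform scale factor; the function name, head dimension, and scale are illustrative assumptions, not values from the paper.]

```python
import numpy as np

def rope_angles(positions, dim=64, base=10000.0, scale=1.0):
    # Standard RoPE: each pair of embedding dims rotates at its own
    # frequency. Dividing positions by `scale` > 1 compresses a long
    # sequence back into the position range used at training time.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(np.asarray(positions) / scale, inv_freq)

train_len, target_len = 4096, 2_097_152   # 4K trained -> ~2M target
scale = target_len / train_len            # 512x interpolation

# Without scaling, the last target position yields rotation angles far
# beyond anything seen in training; with scaling it stays in range.
max_scaled = rope_angles([target_len - 1], scale=scale).max()
assert max_scaled < train_len
```

This uniform interpolation is only the simplest variant; methods in this space typically also fine-tune the model and choose per-frequency scales rather than one global factor.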
