almost all systems can be framed as auto-completion

input -> output

ML problems, like prediction and classification

retrieval: input keywords -> retrieval results

code completion

Google search auto-completion

pinyin input

questions and answers

task -> actions
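the examples above all share one shape: a function from input to output. A minimal sketch, with hypothetical toy completers standing in for real systems:

```python
from typing import Callable

# every task above is the same signature with a different body:
# take an input string, produce an output string
Completer = Callable[[str], str]

# hypothetical toy completers -- not real systems
classify: Completer = lambda text: "spam" if "free money" in text else "not spam"
retrieve: Completer = lambda keywords: f"results for {keywords!r}"
answer: Completer = lambda question: (
    "Paris" if "capital of France" in question else "unknown"
)

for fn, query in [
    (classify, "claim your free money"),
    (retrieve, "auto completion"),
    (answer, "what is the capital of France?"),
]:
    print(f"{query!r} -> {fn(query)!r}")
```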

an LLM is a more advanced auto-completion, with a long context window and more accurate results

it solves the scalability issue

traditional auto-completion has a small space, for both input and output
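to make the small space concrete, here is a minimal sketch of traditional prefix auto-completion over a fixed vocabulary (the vocabulary and function name are made up for illustration):

```python
# traditional auto-completion: the input space is prefixes of a fixed
# vocabulary, and the output space is that same small vocabulary --
# nothing outside it can ever be suggested
VOCAB = ["complete", "completion", "compute", "computer", "context"]

def complete(prefix: str, limit: int = 3) -> list[str]:
    """Return up to `limit` vocabulary words starting with `prefix`."""
    return [w for w in VOCAB if w.startswith(prefix)][:limit]

print(complete("comp"))  # every possible output is already in VOCAB
```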

LLMs extend it to a super large space
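the gap in scale can be sketched with some back-of-the-envelope arithmetic. The numbers below are assumed for illustration, not measurements of any real system:

```python
import math

# assumed, illustrative numbers -- not measurements of any real system
pinyin_candidates = 5_000   # rough candidate characters per keystroke
vocab = 50_000              # typical LLM token vocabulary size
continuation_len = 100      # tokens in a single completion

# a traditional input method picks from a few thousand options;
# an LLM's output space is vocab ** continuation_len sequences --
# count its decimal digits instead of computing the full number
digits = int(continuation_len * math.log10(vocab)) + 1
print(f"traditional: ~{pinyin_candidates} options")
print(f"LLM: a {digits}-digit number of possible continuations")
```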

a large space is much more complex, and thus it looks much more intelligent

it goes beyond expectations, and sometimes beyond human understanding, due to its inherent complexity and scale

