Wednesday, March 13, 2024

Problems encountered while using well-known LLM services

ChatGPT and Gemini generated incorrect Python code but insisted that it was correct, so humans are still needed to detect hallucinations.
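To illustrate the kind of check this takes (a hypothetical sketch, not the actual code either service produced), a few human-written test cases are often enough to expose a plausible-looking but wrong snippet:

    # Hypothetical sketch: a median function as an LLM might generate it,
    # with a subtle bug that reads fine at a glance.
    def median(values):
        """Intended to return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        n = len(ordered)
        # Bug: for even-length input this picks one middle element
        # instead of averaging the two middle elements.
        return ordered[n // 2]

    # Human-written sanity checks expose the hallucinated logic:
    tests = [([1, 3, 2], 2), ([1, 2, 3, 4], 2.5)]
    for values, expected in tests:
        got = median(values)
        status = "ok" if got == expected else "WRONG"
        print(f"median({values}) = {got}, expected {expected} -> {status}")

Running this prints "WRONG" for the even-length case, which is exactly the kind of error the chatbots kept defending.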

Their providers are also facing legal cases over copyrighted content used in model training, e.g. the Harry Potter books and newspaper articles.

They are actually not just large language models (LLMs) but machine-learning systems more broadly, since they can also perform tasks such as clustering and prediction.
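For instance, text embeddings from such a model can be fed into an ordinary clustering algorithm. Below is a minimal sketch with made-up two-dimensional vectors standing in for real embeddings (which would normally come from an embedding model), clustered with scikit-learn's k-means:

    import numpy as np
    from sklearn.cluster import KMeans

    # Made-up embedding vectors; in practice these would be produced
    # by an embedding model, one vector per piece of text.
    texts = ["cheap flight deals", "discount air tickets",
             "today's football score", "last night's match result"]
    embeddings = np.array([
        [0.9, 0.1], [0.8, 0.2],   # travel-related texts
        [0.1, 0.9], [0.2, 0.8],   # sports-related texts
    ])

    # Standard k-means clustering on the embedding vectors.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
    for text, label in zip(texts, kmeans.labels_):
        print(f"cluster {label}: {text}")

With real embeddings the same few lines group semantically similar texts together, which is the clustering behaviour mentioned above.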