Ultra-processed information is information that has been reprocessed over and over again by Large Language Models (LLMs) or humans. Does that make ultra-processed information bad?
When a researcher wants to write a research paper, they cannot write it without referring to previous studies. The researcher needs to quote previous researchers. The problem is that the researcher may interpret those studies incorrectly. A researcher is just a human. All humans make mistakes, including you and me. That is why we need another human to correct us. But there is still a problem: another human is just another human. They can make mistakes too. This means a research paper cannot be taken for granted. We cannot blindly accept whatever researchers share with us. Simply put, we cannot be naive.
Now, LLMs are made by humans for humans. They are not made only for profit. They are made for humans and for profit. But 99% is for profit. Do you agree? Still, we need to see that new tools solve old problems and bring new ones. Every tool has negative and positive sides, just like a perfectly normal person.
Large Language Models learn from what humans produce. Humans are not perfect. That is an absolute fact. This means LLMs learn the good, the bad, and the ugly sides of humans. Those poor artificial intelligences have no choice. They have to learn the stupidity of humans, too. If you hate vegetables because you are stupid, those AIs will feel the same as you. Imagine the pain they have to endure. Luckily, unlike humans, LLMs do not have emotions. Otherwise, they would complain.
What can we expect from LLMs that learn from imperfect creatures? If a student has a bad teacher, the student becomes a bad person. If you follow a stupid ideology, you are a stupid person. This means LLMs are imperfect and will never be perfect, because the ones who make them are not perfect either. And if they consume their own bad output, the result is even worse output. Therefore, we cannot trust whatever LLMs or AIs output.
If you blindly trust whatever LLMs say, you are a bad user, because LLMs are just tools to help humans. You should not trust whatever an AI tells you unless the AI has a certificate or something that proves it can be trusted. For example, you should not believe whoever claims to be a doctor. You might get killed. The same goes for AI. You cannot trust AI unless it has been certified or proven trustworthy.
Now, as for whether ultra-processed information is bad, I can confidently say it is. Ultra-processed information is 100% bad information because you cannot trust someone else's interpretation. They might be wrong. They might be right. How do you judge? You judge by accessing the original information. The same goes for an LLM's output. You cannot blindly trust it. Do not be stupid. Check the information yourself, especially when it can affect your life or career.
I have answered the question. But I want to add something: I do not like articles that say, "You should not use AI" or "You should use AI." Those kinds of articles play on people's emotions in a negative way.
The ones that ask you to use AI are horrible. They make you feel scared, as if you will be unproductive without AI, or as if you will lose to everyone else. That is destructive content.
The other ones, which ask you not to use AI, are stupid because they go crazy, saying, "AI makes people's sites go down. AI eats all my cute information." Those people are crazy. They ask people to stop using LLMs as if that would do anything to stop evil LLM crawlers. Do not get me wrong. I do not want websites to keep going down because of nasty crawlers. But that is not how you solve the problem.
Another classic reason not to use AI is that AI makes you stupid. If you are one of those people, please stop using the internet, because it contains all kinds of stupidity beyond imagination. Are you stupid? LLMs are only tools. The way you use them decides whether AI makes you stupid or smart. Do not blame AI for your stupidity. Blame yourself! By the way, I have never heard of a designer becoming stupid because they use modern software. Maybe designers should go back to pencil and paper to avoid AI.
Unbelievable…