3 reasons everyone is talking about an AI bubble
Story by atecotzky@businessinsider.com (Alice Tecotzky)
https://www.msn.com/en-au/technolog...1&cvid=68ad03c086dc496ab3da093919a7ea64&ei=25
...
Sam Altman's warning
Earlier this month, OpenAI CEO Sam Altman warned that people might be getting "overexcited" about AI.
...
MIT's eye-opening report
A recent report from MIT found that 95% of AI pilots don't generate measurable financial savings or boost company profits.
...
Meta's AI restructuring
After spending millions to build a "superintelligence" AI team, Meta is breaking up its internal AI apparatus. The four new teams will focus on research, training, products, and infrastructure.
...
None of this should affect Akida's viability.
AI chatbots such as ChatGPT, built on supposedly omniscient LLMs, seem to have plateaued short of the AGI summit, though still at an altitude high enough to induce hallucinations.
Akida's current AI application uses the oxymoronic-sounding S-LLMs (small LLMs) with RAG (retrieval-augmented generation). S-LLM applications such as user manuals are grounded in fact, free of the opinion, bias, bigotry, and ignorance swept up in universal LLMs, which should mitigate erroneous output.
However, even if the AI bubble were to burst, it's comforting to remember that Akida has limitless applications beyond the chatbot AI universe.
Akida has countless applications which do not require LLMs at all.
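For anyone unfamiliar with the RAG pattern mentioned above, here is a minimal sketch of the idea. The "retriever" is a toy keyword-overlap scorer standing in for a real vector index, and the manual text, query, and function names are invented for illustration; the point is that the model is asked to answer only from a retrieved factual passage rather than from whatever its weights absorbed.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant manual passage, then constrain the
# model's answer to that passage. The keyword-overlap retriever
# below is a stand-in for a real embedding index.

MANUAL = [
    "To reset the device, hold the power button for ten seconds.",
    "The battery indicator flashes red when charge is below 10%.",
    "Firmware updates are installed from the Settings > System menu.",
]

def words(s: str) -> set[str]:
    """Lowercase word set, ignoring trailing punctuation."""
    return {w.strip(".,?!\"") for w in s.lower().split()}

def retrieve(query: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the query."""
    q = words(query)
    return max(passages, key=lambda p: len(q & words(p)))

def build_prompt(query: str, passage: str) -> str:
    """Ground the (hypothetical) S-LLM by quoting the retrieved passage."""
    return (f"Answer using ONLY this excerpt from the manual:\n"
            f"\"{passage}\"\nQuestion: {query}")

context = retrieve("how do I reset the device?", MANUAL)
prompt = build_prompt("how do I reset the device?", context)
```

Because the answer is anchored to a retrieved factual excerpt, a small model has far less room to hallucinate than one answering from its training data alone.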
The idea of "omniscient LLMs" is a misconception, as large language models are fundamentally limited by their training data and statistical nature. The term, however, has been used in specific contexts, such as describing a particular research model called OmniScience or critiquing certain AI-based social simulations.
Reasons why no LLM is truly omniscient
- Limited training data: The knowledge of an LLM is a snapshot of the massive, but finite, dataset it was trained on. It does not have access to private, real-time, or future information, and cannot learn from new interactions without external data retrieval or retraining.
- Hallucinations: LLMs are "statistical machines" that predict the most likely next word or token based on patterns learned from training data. This probabilistic nature means they can generate confident-sounding but factually incorrect or nonsensical information, a phenomenon known as hallucination.
- Inability to reason or reflect: LLMs lack genuine comprehension, consciousness, and the ability to reason about a user's question. They cannot intentionally self-correct or validate their own reasoning beyond statistical self-consistency.
- Information asymmetry: In a realistic, multi-agent setting, LLMs do not inherently have access to all information. For example, in social simulations, an agent given an "omniscient perspective" has a clear advantage over agents operating with information asymmetry, which is the more realistic setting.
- Epistemological limits: Any AI model's "understanding" is constrained by how humans have structured the data it learns from. The model can only mirror back the order and vocabulary that humans have already established, and cannot create truth independent of that human context.
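The "statistical machines" point in the hallucination bullet can be made concrete with a toy bigram model: it emits whichever word most often followed the previous one in its training text, with no notion of truth. The corpus below is invented for illustration, and greedy next-word decoding stands in for what real LLMs do at vastly larger scale.

```python
# Toy bigram "language model": predict the most frequent next word.
from collections import Counter, defaultdict

corpus = ("the sun orbits the galaxy . the earth orbits the sun . "
          "the sun orbits the galactic center . the earth spins . "
          "the earth is round . the earth is old .").split()

# Count which word follows which in the training text.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def generate(prefix: list[str], n: int = 3) -> str:
    """Greedily append the statistically most likely next word n times."""
    out = list(prefix)
    for _ in range(n):
        out.append(follow[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# The most probable continuation is fluent and confident, yet it is a
# statement the corpus never contained and which is false:
print(generate(["the", "sun"]))  # prints "the sun orbits the earth"
```

The model confidently asserts "the sun orbits the earth" purely because those word transitions are individually frequent, which is the mechanism behind hallucination, just at a microscopic scale.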