World on Now
Physics & Math

AI Frequently Criticized: Here’s the Solution

June 6, 2025 · 6 Mins Read

The trouble with Big Tech’s artificial intelligence (AI) experiments is not that machines are about to take over humanity. It is that large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama are flawed, and that these flaws are serious.

A notable instance of what are known as hallucinations occurred in 2023, when ChatGPT falsely accused US law professor Jonathan Turley of sexual harassment.

OpenAI’s response appears to have been to “erase” the problem by programming ChatGPT not to answer questions about the content it had generated about Turley. That is clearly neither fair nor adequate, and trying to fix hallucinations case by case, after each incident, is plainly not a sustainable solution.


Similar concerns apply to LLMs amplifying stereotypes and sidelining perspectives from the Global South. The opacity of how LLMs reach their conclusions further undermines accountability at a time of rampant misinformation.

These debates were at their most intense after the 2023 release of GPT-4, then the most significant shift in OpenAI’s LLM development. The debates may have cooled since, but the underlying issues remain unaddressed.

The EU has enacted its AI Act in an effort to take a leadership role in supervising the field, but the legislation relies heavily on AI companies regulating themselves and does not address the underlying problems. Tech companies have released LLMs to hundreds of millions of users worldwide and continue to collect their data without proper oversight.


Moreover, recent tests and findings show that even the most advanced LLMs remain unreliable. Despite this, the major AI firms still resist accepting liability for their models’ errors.

Worryingly, LLMs’ tendency to misinform and to reproduce biases cannot be solved by gradual improvement over time. And with the introduction of AI agents, which will do everything from booking holidays to paying monthly bills, the opportunities for things to go wrong only multiply.

The emerging field of neurosymbolic AI offers a potential way to overcome these problems while also reducing the enormous amounts of data typically needed to train LLMs. So what is neurosymbolic AI, and how does it work?

Challenges with LLMs

LLMs work using a technique called deep learning. They are fed vast amounts of text data and use advanced statistics to infer patterns that determine what the next word or phrase in any given context should be. Each model, together with all the patterns it has learned, is stored in arrays of powerful computers in large data centres; these arrays are known as neural networks.
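A toy illustration of the statistical idea, not of any real model: a bigram “language model” that counts which word follows which in some training text and turns those counts into next-word probabilities. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probs(counts, prev):
    """Turn raw counts into a probability distribution over next words."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
probs = next_word_probs(model, "the")
# "the" is followed by cat (2x), mat (1x), fish (1x)
print(max(probs, key=probs.get))  # prints "cat", the most probable continuation
```

Real LLMs condition on far longer contexts with billions of learned parameters, but the output is the same kind of object: a probability distribution over what comes next, which is why confident-sounding errors are always possible.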

LLMs can also appear to reason using a process known as chain-of-thought, generating multi-step responses that mimic how humans might logically reach a conclusion, based on patterns seen in the training data.

There is no question that LLMs are an impressive feat of engineering. They excel at summarising and translating text, and they can boost the productivity of diligent users who are capable of spotting their mistakes. But their conclusions are fundamentally based on probabilities, which makes inaccuracies inevitable.

A popular quick fix is the “human-in-the-loop” approach: making sure that humans using AI still take the final decisions. But assigning the ultimate responsibility to humans does not solve the problem, as they too can easily be misled by misinformation.

LLMs need enormous quantities of training data, which has made it necessary to feed them synthetic data, that is, data generated by LLMs themselves. Synthetic data can copy existing errors from the original data sources, so new models inherit the flaws of their predecessors. As a result, the cost of programming greater accuracy into models after training, known as “post-hoc model alignment”, is rising sharply.

In addition, as the number of steps in a model’s reasoning process grows, diagnosing errors becomes ever harder, making it more difficult for developers to pinpoint what is going wrong.

Neurosymbolic AI seeks to tackle these problems by combining the predictive learning of neural networks with a foundational set of formal rules that lets the AI reason more reliably. These rules may include logical principles such as “if a then b” (for example, “if it rains, then everything outside is likely to get wet”) and mathematical axioms (such as “if a = b and b = c, then a = c”). Some rules are input into the AI system directly, while others it deduces itself through “knowledge extraction” from its training data.
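The symbolic half of this idea is decades old and easy to sketch: a forward-chaining engine that applies “if a then b” rules repeatedly until no new facts can be derived. This is a minimal illustrative sketch, not any particular neurosymbolic system; the facts and rules are invented.

```python
def forward_chain(facts, rules):
    """Apply 'if premise then conclusion' rules until no new facts appear.

    facts: a set of statements known to be true.
    rules: (premise, conclusion) pairs encoding 'if premise then conclusion'.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # modus ponens: premise holds, so conclude
                changed = True
    return facts

rules = [
    ("it rains", "things outside get wet"),
    ("things outside get wet", "the grass is wet"),
]
derived = forward_chain({"it rains"}, rules)
print("the grass is wet" in derived)  # True, derived via two rule applications
```

The appeal for AI is that a single rule like this generalises to any object, wet grass or wet garden furniture alike, without needing thousands of training examples; the hard part, which neurosymbolic research addresses, is combining such rules with statistical learning.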

The aim is an AI that learns faster from less data, avoids hallucinations, and organises its knowledge into distinct, reusable parts. For instance, once the AI holds the rule that things outside get wet when it rains, it no longer needs a multitude of examples of things that might be wet outside; the rule applies even to objects it has never seen before.

During model development, neurosymbolic AI integrates learning and formal reasoning in what is called the neurosymbolic cycle: a partially trained AI extracts rules from its training data, instils this consolidated knowledge back into the network, and then continues training.
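The article describes this cycle only at a high level. As a rough sketch of the control flow, with toy stand-ins for every component (the hooks, thresholds and rule strings below are all hypothetical, not taken from any real system):

```python
def neurosymbolic_cycle(model, data, epochs, train_step, extract_rules, inject_rules):
    """Alternate neural training with rule extraction and rule injection."""
    rules = set()
    for _ in range(epochs):
        model = train_step(model, data)           # 1. ordinary statistical learning
        rules |= set(extract_rules(model, data))  # 2. knowledge extraction
        model = inject_rules(model, rules)        # 3. embed rules back into the net
    return model, rules

# Toy stand-ins so the sketch runs end to end.
def train_step(model, data):
    model["seen"] += len(data)
    return model

def extract_rules(model, data):
    # Pretend a regularity becomes extractable once enough data has been seen.
    return ["if rain then wet"] if model["seen"] >= 3 else []

def inject_rules(model, rules):
    model["rules"] = sorted(rules)
    return model

model, rules = neurosymbolic_cycle(
    {"seen": 0, "rules": []}, ["a", "b", "c"], epochs=2,
    train_step=train_step, extract_rules=extract_rules, inject_rules=inject_rules,
)
print(model["rules"])  # prints ['if rain then wet']
```

The point of the structure is that each pass through steps 2 and 3 compresses what the network has learned into explicit, inspectable rules before training continues, which is where the data-efficiency and accountability gains described below would come from.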

This method is more energy-efficient, because the AI does not need to store so much data, and it improves accountability, because it is easier to oversee how the model reaches its conclusions and improves. It also helps with fairness, since built-in rules can be applied, such as “AI decisions must not produce outcomes that depend on a person’s race or gender”.

A New Paradigm

The first wave of AI in the 1980s, known as symbolic AI, worked by applying formal rules to new information. The rise of deep learning in the 2010s is regarded as the second wave, and many now see neurosymbolic AI as the third.

Neurosymbolic principles are easiest to apply in specialised domains where the rules are clearly defined. Two examples are Google DeepMind’s AlphaFold, which predicts protein structures and aids drug discovery, and AlphaGeometry, which solves complex geometry problems.

For more general applications, China’s DeepSeek uses a learning technique called “distillation”, a step in the same direction. But making neurosymbolic AI fully workable across general models still requires further research to identify universal rules and improve knowledge-extraction techniques.
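Distillation, in its standard form, trains a small “student” model to match a large “teacher” model’s full output distribution rather than only its top answer. A minimal sketch of the usual loss, the KL divergence between temperature-softened distributions; the logits and temperature below are invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimising this teaches the student the teacher's 'soft' preferences
    over all classes, not just its single hard prediction.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]                          # invented teacher logits
perfect = distillation_loss(teacher, [4.0, 1.0, 0.5])
off     = distillation_loss(teacher, [0.5, 1.0, 4.0])
print(perfect < off)  # prints True: matching the teacher gives a lower loss
```

Distillation compresses learned behaviour rather than extracting explicit rules, which is why the article treats it as progress towards, not an instance of, fully neurosymbolic AI.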

It remains unclear how far LLM developers are already working along these lines. They certainly present their efforts as teaching models to reason more intelligently, yet they also seem wedded to scaling up with ever more data.

Ultimately, for AI to keep progressing, we need systems that adapt to novelty from only a few examples, that check their understanding, that use data efficiently, and that make sophisticated inferences reliably.

That way, well-designed digital technology could even offer an alternative to regulation, with checks and balances built into its architecture and perhaps standardised across the industry. There is a long way to go, but at least a path exists.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Source: www.livescience.com

© 2025 World On Now. All Rights Reserved.