Too many experts on the GenAI bandwagon now?

Simply put, GenAI/LLMs are fundamentally based on tokenizing information and estimating associations and similarities between tokens (encoded as weights and biases). Tokens are discrete units, but our world and human experiences are continuous. If you understand how things work inside GenAI, you will be better positioned to tell the difference between the current hype and the real abilities and limitations of the technology. It’s genuinely fun, though, and can be useful for video/audio/text generation.
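As a toy illustration of that idea (invented numbers and a whitespace tokenizer standing in for a real subword tokenizer; no actual model’s embeddings are used):

```python
# Toy sketch, NOT a real tokenizer or model: text becomes discrete tokens,
# each token maps to a learned vector, and "similarity" between tokens
# is just a geometric measure over those vectors.
import math

def tokenize(text):
    # Real LLMs use subword tokenizers (e.g. byte-pair encoding);
    # whitespace splitting is a stand-in for illustration only.
    return text.lower().split()

# Hypothetical 3-dimensional "embeddings" (real models learn thousands of
# dimensions, and the values come from training, not from hand-tuning).
embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "stock": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

tokens = tokenize("Cat dog stock")
print(tokens)
print(cosine(embeddings["cat"], embeddings["dog"]))    # high: related tokens
print(cosine(embeddings["cat"], embeddings["stock"]))  # low: unrelated tokens
```

The point of the sketch: everything the model “knows” lives in those discrete tokens and continuous vectors, which is why the fit between tokenized representations and our continuous world is an approximation, not an equivalence.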

However, if you are on LinkedIn or YouTube, it’s not hard to find self-proclaimed GenAI experts who are clearly fueling the hype rather than communicating the true abilities and limitations, whether intentionally or not. Why? Glory and fortune? But how long can that last?

[The above is human-generated content (written by me)]

Below is Google Gemini’s answer to a question of mine.

I find it useful.

Question: Newly self-proclaimed AI experts often seem to focus only on GenAI and ignore the classic machine learning use cases and operations that have been in place for decades. Why?

Answer from Google Gemini:

  1. “Wow Factor” and Media Hype:
    • GenAI is more tangible and visually impressive: Tools like ChatGPT, Midjourney, and Sora can generate human-like text, stunning images, and even videos from simple prompts. This immediate and creative output is incredibly engaging and easy to demonstrate, making for great headlines and social media content.
    • Traditional ML is often “behind the scenes”: While foundational and critical, classic ML applications (e.g., fraud detection, predictive analytics, recommendation systems, spam filters) often work silently in the background, optimizing processes or making predictions that aren’t as directly visible or “creative” to the average person. Their impact, though immense, is often less immediately “shareable” or exciting.
  2. Lower Barrier to Entry (for using GenAI):
    • User-friendly interfaces: GenAI tools are designed for broad accessibility. Anyone can open a browser, type a prompt, and get a result. You don’t need to write code, understand algorithms, or manage data pipelines to get started.
    • Traditional ML requires deeper technical skills: To implement classic ML, you typically need to understand data preprocessing, feature engineering, model selection, training, evaluation, deployment, and ongoing monitoring. This requires programming skills (Python, R), knowledge of libraries (scikit-learn, TensorFlow, PyTorch), and statistical understanding.
  3. Perception of Novelty and “The Future”:
    • GenAI feels cutting-edge: It represents a new frontier in AI, pushing boundaries in content creation and human-computer interaction. It feels like “the future” of AI.
    • Traditional ML feels “established”: While continuously evolving, the core concepts and many applications of traditional ML have been around for decades. Some might perceive it as “old news” or less innovative, even though it forms the bedrock of most current AI applications.
  4. Simpler Narrative for Non-Technical Audiences:
    • GenAI is easy to explain: “It creates new stuff!” is a simple and compelling message.
    • Traditional ML is harder to simplify: Explaining the nuances of a gradient boosting model for churn prediction or a CNN for image classification is far more complex for a general audience. The “experts” targeting business leaders or the public find it easier to discuss GenAI’s transformative potential without getting bogged down in technical details.
  5. Monetization and Consulting Opportunities:
    • Immediate business buzz: Companies are scrambling to understand and implement GenAI, creating a huge demand for “experts” who can explain it, consult on strategies, and lead workshops. This often translates to higher speaking fees, consulting rates, and job opportunities compared to the more mature and often specialized traditional ML market.
    • “Disruption” narrative: The idea that GenAI will “change everything” appeals to businesses looking for a competitive edge, leading them to seek out anyone who claims to understand this “disruptive” force.

It’s important to clarify: Generative AI is a subset of machine learning, typically utilizing deep learning architectures (like transformers for LLMs) trained on massive datasets. It builds upon decades of ML research and breakthroughs.

The danger of this GenAI-only focus by “self-claimed experts” is that it can lead to:

  • Misapplication of technology: Trying to force GenAI solutions onto problems where traditional ML is more efficient, accurate, or robust (e.g., using an LLM for simple classification tasks when a classical model would suffice with less computational overhead and higher reliability).
  • Ignoring existing value: Overlooking the massive, proven value that traditional ML systems already deliver across industries for critical operations like fraud detection, credit scoring, demand forecasting, quality control, and medical diagnostics.
  • Lack of foundational understanding: Without a grasp of core ML principles, it’s hard to truly understand the limitations, biases, and ethical implications of even GenAI models, which can lead to unrealistic expectations or dangerous deployments.

Ultimately, genuine AI expertise requires understanding both the novel capabilities of GenAI and the robust, long-standing power of traditional machine learning, and knowing when and how to apply each appropriately.
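To make the misapplication point concrete: a simple text-classification task needs no LLM at all. Below is a toy sketch (pure Python, invented data, a bare-bones nearest-centroid classifier rather than any production method) of the kind of classical approach that runs in milliseconds on a CPU:

```python
# Illustrative only: a tiny bag-of-words nearest-centroid classifier.
# For a task like this, a classical model trains instantly on a CPU;
# no LLM, GPU, or API call is needed.
from collections import Counter

# Hypothetical labeled training data.
train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting moved to monday", "ham"),
    ("lunch on friday?", "ham"),
]

# Build one word-count "centroid" per class.
centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(text.split())

def classify(text):
    words = text.split()
    # Score each class by word overlap with its centroid.
    scores = {label: sum(c[w] for w in words) for label, c in centroids.items()}
    return max(scores, key=scores.get)

print(classify("claim your free money"))     # spam-like words dominate
print(classify("monday meeting at lunch"))   # ham-like words dominate
```

In practice one would reach for an established library (e.g. a logistic regression or naive Bayes model in scikit-learn), but even this sketch shows why reaching for an LLM here adds cost and latency without adding reliability.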

Study the fundamentals before formulating an opinion, please…

It’s becoming almost disturbing how many GenAI experts I have seen and talked to, online or in person, have so little understanding of the inner workings of GenAI. They have little or no knowledge of (or interest in) basic concepts such as transformers, tokenization, weights, and biases.

On one occasion, a GenAI expert was invited to an opening ceremony I was attending. After her wild claim that GenAI is about to achieve superintelligence, I asked her a question: “Why can my three-year-old daughter identify cats and dogs after being shown only five photos in five minutes (it cost me a small chocolate), while any GenAI foundation model requires millions of samples to learn the same thing, and still makes mistakes after millions of dollars in training?”

Her answer was, “soon, GenAI will get there…”.

Then I knew our conversation was over.

Anyway, the content above was generated by a human (me).

The content below was generated by Google Gemini.

Question: Why do so many people suddenly become “AI experts” on social media in almost no time?

The surge in people claiming to be AI experts or thought leaders, especially those from non-tech backgrounds or without relevant degrees, can be attributed to several factors:

  1. Accessibility and Democratization of AI Tools: The rise of user-friendly AI tools, particularly large language models (LLMs) like ChatGPT and Gemini, has significantly lowered the barrier to entry for interacting with and generating AI-powered content. Individuals can now leverage these tools to perform tasks that previously required technical expertise, leading to a perception that they “understand” or “master” AI with relatively little effort.
  2. “AI Superagency” and Productivity Gains: AI tools can greatly enhance individual productivity and creativity, even for non-technical users. This “superagency” can lead to impressive outputs in various fields (e.g., marketing, content creation, business strategy), making it seem like the user has deep AI knowledge when they are primarily skilled at prompt engineering and applying off-the-shelf solutions.
  3. Hype and FOMO (Fear of Missing Out): AI is currently a hot topic, generating immense media attention and investment. This creates a strong incentive for individuals and businesses to associate themselves with AI, often leading to exaggerated claims of expertise to capitalize on the trend or avoid being left behind.
  4. Social Media Amplification: Platforms like LinkedIn, X (formerly Twitter), and others provide a powerful megaphone for self-promotion. It’s easier than ever to share opinions, “insights,” and examples of AI tool usage, quickly building a public persona as an “expert” without rigorous peer review or credential verification. Viral content and engagement can often be mistaken for genuine expertise.
  5. Focus on Application vs. Deep Technical Understanding: Many self-proclaimed experts are focused on the application of AI in specific domains (e.g., how AI can impact marketing, law, or finance) rather than the underlying technical principles, algorithms, or ethical implications. While valuable in itself, this practical focus can be misconstrued as comprehensive AI expertise.
  6. “Credential Inflation” and Evolving Definitions of Expertise: The traditional pathways to expertise (e.g., PhDs in computer science, published research in top-tier conferences) are still critical for deep AI research and development. However, the rapid evolution of AI means that “expertise” is becoming a broader concept. Some argue that understanding how to effectively use and apply AI, even without knowing the code behind it, constitutes a new form of expertise. This can lead to a devaluation of traditional academic credentials in favor of perceived practical acumen.
  7. Demand for “AI Translators”: As AI becomes more ubiquitous, there’s a growing need for individuals who can bridge the gap between technical AI developers and non-technical business leaders or the general public. People who can effectively communicate AI concepts, explain its potential, and guide its adoption are in high demand, regardless of their technical depth. This creates an opportunity for those with strong communication and strategic skills to position themselves as “thought leaders.”
  8. Lack of Clear Standards: The field of AI is still relatively nascent and rapidly changing, making it difficult to establish universally recognized standards for what constitutes an “expert” or “thought leader.” This ambiguity allows more room for individuals to define themselves as such.

While it’s true that valuable contributions to AI can come from diverse backgrounds, especially in terms of ethical considerations, societal impact, and domain-specific applications, the sudden proliferation of “experts” from non-technical fields can sometimes dilute the meaning of true AI expertise. That expertise often involves deep mathematical, statistical, and programming knowledge, alongside a rigorous understanding of AI algorithms and their limitations. It’s important for consumers of AI content to critically evaluate the claims and backgrounds of those presenting themselves as authorities.