AI.

28 August, 2024

If you are reading this article, you are likely aware of the AI craze that has dominated discussions over the past few years. The frenzy peaked in 2023, when ChatGPT made waves across the industry, and it has since settled somewhat. Google entered the scene with its own AI models, and Bing became surprisingly usable.

That period of hype has passed, and now I can share my perspective. So, what is AI? In its broadest sense, AI refers to computers exhibiting intelligence. This might sound odd, given that we already have smartphones, smart TVs, and similar devices. However, the "smart" I am referring to involves computers demonstrating human-like intelligence. AI has been around for quite some time, even if it wasn't always marketed as such. For example, YouTube's recommendation system and Google's search ranking are forms of artificial intelligence. Similarly, the language servers behind coding tools (the things that speak the Language Server Protocol, or LSP) can be considered a kind of AI, as they analyse codebases and surface insights about them. The AI we see today is built on the same foundational concepts as these "invisible" AIs. The main difference is the rise of a specific kind of AI: Generative AI.

We'll delve deeper into Generative AI (Gen AI) later, but first it's important to understand why AI is so significant today. Before it was sold as a product, AI was built to handle narrow tasks: calculations, video recommendations, sorting and organising data. It was primarily a tool for automation, which is unsurprising, since companies naturally adopt automation when it benefits them. Today's AI, however, is more than that. It can generate text, code, images, and other creative output, transforming it from a behind-the-scenes tool into a marketable product.

One of the most prominent examples of AI as a product is Generative AI, which learns patterns from large amounts of training data and uses them to produce new content such as text, images, or video. Gen AI doesn't just retrieve or refine existing information; it creates "new" content. While AI research dates back to 1956, Generative AI only gained mainstream attention with the launch of ChatGPT, which broke records by reaching an estimated 100 million users in just two months. Granted, factors like increased internet penetration and social media usage contributed to its rapid adoption, but it was still remarkable to witness the meteoric rise of Gen AI.
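
The "learn from data, then generate" loop is easy to see first-hand with a small pretrained model. Below is a minimal sketch using the Hugging Face transformers library and the small GPT-2 model; the prompt and sampling settings are arbitrary choices for illustration, not anything tied to ChatGPT or Gemini.

```python
# Minimal text-generation sketch with a small pretrained model (GPT-2).
# The prompt and sampling settings below are illustrative choices only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is",
    max_new_tokens=30,   # how much new text to produce
    do_sample=True,      # sample from the model's probability distribution
    temperature=0.8,     # how adventurous the sampling is
)
print(result[0]["generated_text"])
```

Because sampling is enabled, each run continues the prompt differently, which is exactly the "generative" part: the model invents plausible continuations rather than looking answers up.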

Following ChatGPT, competitors emerged, most notably Google's Gemini. While Gemini was a strong offering, it lacked the fanfare of ChatGPT and often played second fiddle despite its potential advantages. Microsoft's Bing AI also entered the fray, pairing ChatGPT-style chat with live internet search and squaring up against Google's long-standing strength in search indexing. Other players, such as Anthropic's Claude and Mistral, have since joined the competition.

In my opinion, AI is a valuable tool. It excels at assisting with repetitive tasks and serves as a great resource for avoiding boilerplate work. AI is excellent for generating outlines, proofreading, and identifying errors in writing, storytelling, or coding. Features like text-to-speech are transformative, and tools like Google Photos’ AI editing capabilities demonstrate practical, everyday uses. Applications like Gemini Live and ChatGPT’s voice features are helpful for brainstorming and validating ideas. These are just a few examples of how AI is making life more convenient, and I hope its development continues to prioritise accessibility and utility.

That said, AI is not without its flaws. It can be slow, is weak at logical reasoning, and struggles with context and spatial (3D) understanding. Generative AIs like Gemini are often narrowly focused and fall apart on intricate or nuanced tasks. Fundamentally, a generative model is a probabilistic system: it is built to produce the statistically most likely response, not to reason, and it cannot replicate human nuance or emotional intelligence.
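
To make that "most likely response" point concrete, here is a toy sketch of my own (not how any particular product is implemented): the model assigns a score to every possible next token, the scores are turned into probabilities, and the output is either the single most probable token or one drawn at random in proportion to those probabilities.

```python
# Toy next-token prediction over a tiny, made-up vocabulary. Real models do
# this over tens of thousands of tokens, but the principle is the same: the
# output is the statistically likely continuation, not the result of reasoning.
import math
import random

vocab = ["cat", "dog", "car", "idea"]   # hypothetical vocabulary
logits = [2.1, 1.9, 0.3, -0.5]          # hypothetical model scores

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: always take the single most probable token.
greedy = vocab[probs.index(max(probs))]

# Sampling: draw a token in proportion to its probability.
sampled = random.choices(vocab, weights=probs, k=1)[0]

print("probabilities:", dict(zip(vocab, [round(p, 3) for p in probs])))
print("greedy choice:", greedy, "| sampled choice:", sampled)
```

Everything a chatbot writes is, at bottom, a long chain of picks like these, which is why fluent-sounding output can still be confidently wrong.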

Despite its limitations, I remain optimistic about AI. It is a growing industry with tremendous potential. However, the field is plagued by over-hype. New products flood the market, many of them poorly conceived or exploitative, prioritising profit over genuine utility. This trend risks damaging public trust and slowing meaningful progress. The ethical challenges posed by Generative AI also warrant serious attention: the preservation of artists' rights, misinformation and "hallucinations" (confidently stated fabrications), and the wider societal disruption AI can cause all need to be addressed. Unchecked AI-generated misinformation, for instance, could lead to severe consequences, including social unrest or worse.

Like any emerging technology, AI is surrounded by hype that often overshadows its actual impact. Consider NVIDIA's soaring valuation and the flood of AI-branded products. Many of these offerings are not yet commercially viable, creating a speculative bubble reminiscent of the dot-com era. If that bubble bursts, it could mean significant job losses and set the AI industry back by decades.

In conclusion, AI is a promising technology but, like all tools, it must be used judiciously. Overuse and exploitation could hinder its potential. I hope the industry matures in a way that prioritises safe, useful, and ethical implementations, ultimately benefiting humanity.