Embracing Creativity over Technical Skills with LLM
In the realm of artificial intelligence (AI) and machine learning (ML), terminology matters. The two terms are often used interchangeably, but the distinction between them can be a litmus test for expertise: those who say 'AI' tend to have a broader, less technical perspective, while 'ML' signals deeper, more technical know-how. This distinction is essential when discussing the White House's recent Executive Order (EO) 14110 on AI safety, which has stirred debates about technological control and ideological capture.
Here’s a playful analogy that everyone might understand: machine learning is akin to sex when you were in high school. Everyone was talking about it, only a few actually understood it, even fewer knew how it worked, and everyone else was either trying to figure it out on their own or pretending they already did.
This new EO’s implications are profound, particularly in how it intersects with companies like OpenAI, which, given Microsoft’s substantial investment, might as well be a Microsoft subsidiary at this point. OpenAI is on a press junket claiming to be on the cusp of Artificial General Intelligence (AGI). That claim, that these models will eventually become conscious beings, is bold, but it also serves as a strategic move in a chess game of regulatory influence and market positioning.
The concept of AGI is often shrouded in a veil of absurdity. Current models, far from being sentient, function more like sophisticated guessing engines. They represent technological advancements over existing systems like Elasticsearch or Google’s BERT, but to equate them with consciousness is a leap into anthropomorphism, betraying a fundamental misunderstanding of the technology.
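To make the "guessing engine" point concrete, here is a deliberately tiny sketch of the core idea: a model that predicts the next word from counts of what followed each word in training text. Real LLMs replace these counts with a transformer over billions of parameters and much longer contexts, but the fundamental operation, guessing the likely next token, is the same. The corpus and function names here are illustrative, not from any real system.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for training data (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a one-word-context "language model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent successor of `word` in training, or None."""
    options = follows.get(word)
    if not options:
        return None
    return options.most_common(1)[0][0]

print(guess_next("the"))  # "cat": it followed "the" more often than "mat" or "fish"
```

There is no understanding here, only statistics over observed sequences; scaling the context window and the parameter count makes the guesses astonishingly good, but it does not change the nature of the operation.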
This misinterpretation is not innocent; there are industry players who exploit this confusion for regulatory benefits, stoking ignorance about the nature of AI to shape policy and public perception to their advantage. The idea that a computer could gain consciousness is more a reflection of science fiction than of the grounded reality of AI’s actual capabilities and limitations.
The EO emphasizes safe and ethical AI development, but its broader implications hint at potential regulatory capture and licensing schemes. Such frameworks could unintentionally stifle innovation or concentrate power in the hands of a few. OpenAI’s willingness to align with these guidelines under the banner of ethics might mask a more opportunistic motive: gaining a favorable position in a regulated market.
This approach, however, raises questions about ideological capture. With the White House no longer having the same influence on platforms like Twitter, this deal could be seen as an attempt to control emerging technologies. It’s a strategic move to shape the narrative and content these technologies produce, under the guise of ethical compliance.
The consequences of such a deal are not just corporate. They touch on the very essence of freedom and innovation. When a government entity exerts influence over technological development, it risks creating a homogeneous technological landscape, devoid of the ideological diversity and dynamism that drive progress. The risk is not tied to any particular administration or political party; it’s a matter of principle. Should any president, regardless of their affiliations, wield such power over the technological narrative?
The demand for ethical AI is as subjective as the ever-evolving definitions found in hate-speech laws. As societal ethics shift, so too would the ethical standards imposed on technology, leading to a complex landscape where base models of AI must continuously adapt to changing ‘sources of truth’ under different administrations. This dynamic raises a critical question: Is the ultimate objective to ensure ethical compliance, or could it be a subtler strategy to cement a certain administration’s power? The fluid nature of ethics makes codifying them in AI a challenging, potentially manipulative process.
We should consider the idea of a ‘Second Amendment for GPUs’—a metaphorical call to arms ensuring the public’s right to develop and access advanced computing resources without government regulatory approvals. This concept isn’t just about hardware; it’s about safeguarding the freedom to innovate, to explore, and to create without undue government interference. It’s a recognition that the tools of AI and ML, like any technology, should be accessible and not monopolized by either corporate giants or government entities.
The conversation around the White House EO, AI, and ML is not just about the technologies themselves. It’s about who controls them, who guides their development, and what principles govern their use. It’s about ensuring that these powerful tools are used for the betterment of all, not just a few. As we navigate this complex landscape, we must be vigilant, ensuring that the pursuit of technological advancement remains coupled with the preservation of freedom and diversity of thought. The future of AI and ML should be a tapestry of ideas and innovations, not a monochrome blanket of controlled, homogenized thinking.