Since the launch of OpenAI’s ChatGPT in November 2022, conversations about conversational or generative AI have become frequent, loud and filled with predictions about the opportunities and challenges ahead.
No matter how you feel about AI, there’s no question that it’s here to stay and that it will continue to evolve, given how profoundly it’s already changing the way we live, work, collaborate, brainstorm and create.
For the past three months, I’ve been digging into all things related to conversational AI to get a handle on the opportunities and risks, the companies and players working on new tools and policies, and some of the issues surrounding this new tech frontier. Each week, I’ll share a few of the more notable things happening in the world of AI that I believe are worth paying attention to.
Since this is my first “In the Loop on AI” recap, I’m summarizing a few of the highlights from the past month or so, with links to the source material, so you can dive in.
AI could lead to a bad ending for humanity — or not. In March, prominent AI researchers and tech executives, including Apple co-founder Steve Wozniak and Twitter owner Elon Musk, signed an open letter calling for a six-month pause on the development of AI systems more powerful than OpenAI’s GPT-4, to give the industry time to set safety standards for the design and training of these powerful and potentially harmful systems.
“We’ve reached the point where these systems are smart enough that they can be used in ways that are dangerous for society,” AI pioneer Yoshua Bengio, director of the University of Montreal’s Montreal Institute for Learning Algorithms, told The Wall Street Journal in an interview at the time. “And we don’t yet understand.”
In the past two months, we saw dueling posts about the potential threats and joys of AI. In a stark, one-sentence open letter signed by notables including OpenAI CEO Sam Altman and Geoffrey Hinton, who’s known as the godfather of AI, experts said that mitigating the “risk of extinction” from AI should be a global priority alongside pandemics and nuclear war. In contrast, venture capitalist and Internet pioneer Marc Andreessen, whose company has backed numerous AI startups, penned a nearly 7,000-word post on “Why AI Will Save the World.”
That brings us to this week’s musings, courtesy of 119 CEOs from a variety of industries who responded to a survey for the Yale CEO Summit. Forty-two percent said AI could potentially destroy humanity (34% said that could happen in 10 years, and 8% gave it five years), while the other 58% said that could never happen and that they’re “not worried,” according to CNN’s recap of the results. In a separate question, Yale found that 42% of those surveyed think the potential for an AI catastrophe is overstated, while 58% said it’s not.
Glad that’s all cleared up.
AI doesn’t always paint pretty pictures. What does a CEO look like? Or a drug dealer? Those are the questions Bloomberg answered in its story about how text-to-image generators create a deeply skewed vision of the world, one even more biased than the real world it’s supposed to reflect. After analyzing more than 5,000 images generated by Stable Diffusion (a rival to OpenAI’s DALL-E), Bloomberg found that “The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.”
“We are essentially projecting a single worldview out into the world, instead of representing diverse kinds of cultures or visual identities,” Sasha Luccioni, a research scientist at AI startup Hugging Face who co-authored a study of bias in text-to-image generative AI models, told Bloomberg. “The question is, who bears the responsibility? Is it the dataset providers? Is it the model trainers? Or is it the creators?”
All good questions.
The Beatles return for one last song: A new “final” Beatles song featuring the original Fab Four will be released this year, thanks to AI. Paul McCartney told the BBC in June that AI was used to isolate John Lennon’s vocal track from the demo of an unreleased song (rumored to be a 1978 Lennon composition called “Now and Then”).
We know that it’s possible to isolate voice tracks from recordings (hence Linda McCartney’s wince-inducing vocals on “Hey Jude” and Yoko Ono’s “painful” contributions to Lennon’s work).
From the BBC: “Sir Paul had received the demo a year earlier from Lennon’s widow, Yoko Ono. It was one of several songs on a cassette labelled ‘For Paul’ that Lennon had made shortly before his death in 1980. Lo-fi and embryonic, the tracks were largely recorded onto a boombox as the musician sat at a piano in his New York apartment.”
The announcement generated so much buzz that McCartney posted a tweet on June 22 reiterating that it’s actually the Fab Four singing and that AI wasn’t used to generate new vocals.
Will the new Beatles song be good or bad? I don’t know, but what I do know is that it MIGHT not be eligible for a Grammy. CNET reporter Nina Raemont noted that the Grammy Awards will consider only music made by humans for the 2024 awards show, which airs Jan. 31. “Only human creators are eligible to be submitted for consideration,” read the Grammy Awards’ new rules. “A work that contains no human authorship is not eligible in any categories.” Artists can still use AI tools to create music, but the human authorship component of the submitted work must be “meaningful and more than de minimis.”
The $5,000 hallucination: In case you didn’t know, some AI chatbots can “hallucinate,” a polite way of saying they make up things that sound true but aren’t. Well, two lawyers in New York found out the hard way that hallucinating, at least when it comes to submitting legal briefs in federal court, is definitely not OK.
The two lawyers, who used ChatGPT to help write their legal briefs, were chastised by the court after the chatbot was found to have invented nonexistent cases that it then cited as precedents. They were fined $5,000.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” US District Judge P. Kevin Castel wrote in his rebuke. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
Cats, dogs, jobs: AI engines like ChatGPT don’t have human-level intelligence and aren’t even as smart as a cat or dog, Meta’s chief AI scientist Yann LeCun said at the Viva Tech conference in June. That’s because most generative AI engines built on large language models, or LLMs, are trained only on text, not on images or video, which limits how intelligent they can be.
“Those systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amounts of text,” LeCun said. “Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”
As an example, he noted that while an AI system can pass the bar exam for lawyers, it can’t load a dishwasher, a task a 10-year-old could learn in 10 minutes.
“What it tells you [is that] we are missing something really big … to reach not just human level intelligence, but even dog intelligence,” LeCun said. He also said Meta is working on training its AI on video, which he said is far more complicated than training on text. We have “no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog-level or cat-level [intelligence].”
Meanwhile, Airbnb CEO Brian Chesky says he isn’t worried about AI taking jobs. He thinks AI will help create more startup entrepreneurs, both because of the time and money it will save on coding tasks and because you won’t need to be a computer scientist to code. Here’s an excerpt of what Chesky said, per CNBC:
“AI is making Airbnb’s software engineers more efficient, Chesky said, with 30% of their day-to-day tasks potentially handled by ChatGPT-like tools within the next six months. This doesn’t mean those engineers’ jobs are necessarily at risk, he said, arguing the saved time could allow them to focus on harder, more personalized projects.
“Computer scientists aren’t the only potential beneficiaries, he said. As AI evolves, you’ll be able to tell chatbots in plain English what you want in a website and technology will build it for you, no coding languages required, the Airbnb CEO said.
“I think this is going to create millions of startups … entrepreneurship is going to be a boon,” Chesky said. “Anyone can essentially do the equivalent of what software engineering only allowed you to do five years ago.”
The downside for all those software engineers comes from Elon Musk, who said in May that it might be hard to find your work fulfilling “if AI can do your job better than you can.”