TECH tycoons stand at a crossroads: champion artificial intelligence to stay in the Silicon Valley arms race, or stand by their morals.
At least this is how it seems. The rise of AI, and the fervor with which technocrats have embraced it, have received lukewarm reactions from users.
Beyond fears over the prospective culling of the job market, generative AI has borne the brunt of the criticism.
Models train on large swathes of data pulled from the Internet, including public social media profiles and content created by artists and journalists.
This data is almost always taken without permission, introducing a new and unfamiliar kind of intrusion into people’s lives.
Top executives are taking sides, with some defending the development of machine learning systems while others push back.
James Cuda, the CEO of Procreate, made a splash with his comments on social media earlier this week.
“I really f**king hate generative AI,” Cuda professed in a video posted to the software’s official account on X.
“I don’t like what’s happening in the industry, and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products.”
Cuda’s advocacy for creatives is no surprise. Procreate is a mobile illustration app with tens of millions of users, crossing the 30 million mark in September 2023.
Adobe, one of Procreate’s biggest competitors, has been embroiled in controversy regarding its development of AI tools.
Artists sounded the alarm after noticing their names were being used as tags for AI-generated imagery in Adobe Stock search results. In some cases, the AI art appeared to mimic their illustration style.
When Adobe issued a sudden policy change, customers were quick to suspect the company planned to train its Firefly AI on their art.
Language in the reissued terms of use granted the company “a worldwide royalty-free license to reproduce, display, distribute, modify and sublicense” users’ work.
Adobe overhauled the change as criticism grew, but the wash of negative publicity did nothing to help the company at the time.
One could argue Procreate is steering clear of the same mistake. The company reaffirmed its commitment to its users with a series of loaded statements on its official website.
“Generative AI is ripping the humanity out of things,” the statement read.
“Built on a foundation of theft, the technology is steering us toward a barren future. We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.”
The company reaffirmed its commitment to “the humans” and their creativity, dubbed “our greatest jewel.”
“In this technological rush, this might make us an exception or seem at risk of being left behind,” the statement finished.
“But we see this road less traveled as the more exciting and fruitful one for our community.”
If it’s a marketing strategy, it’s an effective one – but Cuda’s words have power. The CEO stands out among a sea of industry leaders who continue to push for AI, even as consumers beg them to stop.
Meta‘s Mark Zuckerberg, for one, has wholeheartedly embraced the technology and dismissed users’ concerns.
In a letter announcing the release of Llama 3.1, Meta‘s “most advanced” AI model yet, Zuckerberg spun a discussion about potential risks into an argument for why AI should be widely embraced.
After spending most of the letter touting open-source AI software as the “best way forward,” Zuckerberg divided common concerns into “unintentional” and “intentional” harm.
Examples of unintentional harm were described as “bad health advice” or the worry models may “unintentionally self-replicate or hyper-optimize goals to the detriment of humanity.”
Intentional harm, meanwhile, was summed up as the work of a “bad actor.”
“At some point in the future, individual bad actors may be able to use the intelligence of AI models to fabricate entirely new harms from the information available on the internet,” Zuckerberg wrote.
“At this point, the balance of power will be critical to AI safety. I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors.”
It seems there’s no end in sight for Meta. The company has already admitted to feeding its AI with information pulled from millions of Facebook and Instagram posts.
Users consent by default when they sign up to use the services, and a well-hidden opt-out form only applies to users who can make a compelling legal argument for data protection.
As the European Union is one of the few places where users have sweeping data privacy laws, most users – including the hundreds of millions based in the United States – are virtually defenseless.
Even the less visible tech leaders, like Microsoft CEO Satya Nadella, have taken a stance.
“I don’t like anthropomorphizing AI,” Nadella professed during a Bloomberg TV appearance in May as he denounced systems with human-like capabilities.
“It has got intelligence, if you want to give it that moniker, but it’s not the same intelligence that I have. I sort of believe it’s a tool.”
Humanoid AI is no far-fetched concept. Meta introduced its roster of virtual assistants at the Connect event in September 2023, with likenesses modeled after celebrities.
What is ChatGPT?
ChatGPT is a new artificial intelligence tool
ChatGPT was launched in November 2022 by San Francisco-based startup OpenAI, an AI research firm.
It’s part of a new generation of AI systems.
ChatGPT is a language model that can produce text.
It can converse, generate readable text on demand and produce images and video based on what has been learned from a vast database of digital books, online writings and other media.
ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.
GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.
If you prompt it – for example, by asking it to “write a short poem about flowers” – it will create a chunk of text based on that request.
ChatGPT can also hold conversations and even learn from things you’ve said.
It can handle very complicated prompts and is even being used by businesses to help with work.
But note that it might not always tell you the truth.
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.
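For developers, the written dialogue described above maps directly onto how ChatGPT-style chat APIs are called: a list of role-tagged messages is sent to the model, which replies with generated text. A minimal sketch in Python that builds such a request body – the model name here is illustrative, and no actual API call is made:

```python
import json

def build_chat_request(prompt, model="gpt-3.5-turbo"):
    """Build the JSON body for a ChatGPT-style chat-completion request.

    The structure mirrors the written dialogue the article describes:
    each message carries a role ("user" or "assistant") and its content.
    """
    return {
        "model": model,  # illustrative model name, not confirmed by the article
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

# The example prompt from the explainer above:
request = build_chat_request("write a short poem about flowers")
print(json.dumps(request, indent=2))
```

In a real integration, this body would be sent to the provider’s chat-completion endpoint with an API key, and the model’s reply would come back as a new “assistant” message appended to the dialogue.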
The company’s latest endeavor, AI Studio, allows users to create chatbots based on an occupation, like a chef or interior designer, while larger creators can build a bot that responds to their followers.
While Microsoft has embraced responsible AI development in the past, the company isn’t in the clear.
Despite facing significant pushback, the tech giant has chosen to move ahead with the rollout of Microsoft Recall, billed as an “everyday AI companion.”
The program takes screenshots every few seconds to create a library of searchable content, which AI then parses.
Recall’s rollout had been delayed indefinitely until this week. Moving forward, it will only be available through the Windows Insider program when it arrives on devices in October.
In May, the Information Commissioner’s Office, a UK-based watchdog, launched an inquiry “to understand the safeguards in place to protect user privacy.”
“We expect organizations to be transparent with users about how their data is being used and only process personal data to the extent that it is necessary to achieve a specific purpose,” the group wrote.
“Industry must consider data protection from the outset and rigorously assess and mitigate risks to people’s rights and freedoms before bringing products to market.”
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the problem is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, legislation was created to protect personal data in the EU, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI giving out incorrect health advice.
It is unclear how far the inquiry has progressed.
Relinquishing ownership of your data seems like the price to pay for living in the tech-centric 21st century.
Users are sharing more information with the world than ever, some willingly, some without realizing it. Perhaps it’s a necessary sacrifice as technology permeates every aspect of our lives.
However, AI has galvanized this process – and everyone, not just data privacy experts, should be concerned.