The Cookie Jar, November 2023
The Altman altercation; Deepfake debauchery; Humane puts a pin in it; Zuckerberg can’t face the music; Samsung strikes Gauss and more…
How the cookie crumbled at OpenAI
If you happened to come across a sliver of the OpenAI debacle this month, despite everything else that’s going on in the world, then count yourself privileged.
Sam Altman was fired by the OpenAI board on November 17th. Greg Brockman (co-founder) was also removed…
Mira Murati (CTO) was made interim CEO, furthering ‘glass cliff’ conversations…
OpenAI’s partner in AI, Microsoft, was among everyone else who was blindsided…
Microsoft’s Satya Nadella swiftly swooped in to rope in Altman, Brockman, and Co. before the fiasco led to product damage or the birth of another competitor…
Emmett Shear of Twitch was appointed interim CEO…
OpenAI employees united to demand Altman’s return…
Ilya Sutskever (Chief Scientist and Co-Founder) had the quickest change of mind in wanting Altman back…
Altman returned as CEO along with everyone else who left…
The two women on the board who had questioned the company’s direction were replaced, leaving an all-male board. Not the first time dissenting voices have been sidelined…
Altman agreed to an internal investigation with a now largely favourable board…
Microsoft negotiated a hawk’s-eye view with an observer seat on the board…
All that in four and a half days. And it’s far from over.
The tussle between OpenAI’s ‘capped-profit’ and ‘for-profit’ camps over the ethical integrity, intent, and risks of their AGI pursuits puts one debate right at the centre: who gets to decide what’s in humanity’s best interests? OpenAI researchers had reportedly been warning the board about a potential new ‘superintelligence’ breakthrough dubbed Q*. We are paying closer attention to the actual, present risks that the AI hype smokescreen often camouflages.
Data breach, deja vu?
In early October, US-based cybersecurity firm Resecurity’s HUNTER (HUMINT) unit found that the personal information of millions of Indians, including Aadhaar card details, was available on the dark web. According to Resecurity, a threat actor going by ‘pwn0001’ offered this information on breach forums for $80,000. With a majority of Indians linking practically everything to their Aadhaar cards, this is extremely concerning. Remember the CoWin data breach back in June? This is just another reminder of the cracks in the Aadhaar wall that ought to be fixed as a priority now.
Deep(ly) fake and deeply troubling
Actor Rashmika Mandanna finding herself at the centre of a convincing deepfake has brought much-needed attention to the damage that unfettered and misguided use of emerging technologies can inflict on society. The sophistication of Generative AI is making deepfakes more convincing, while the wide accessibility of such tools means their misuse is directed at girls and women more than ever. PM Modi, too, called the issue of deepfakes “worrying.” We highlighted the profound danger deepfakes pose to online child safety and teen mental health in our last two issues.
YouTube has taken a strong stance against deepfakes on its platform, including labels identifying synthetic content, removal of potentially harmful synthetic content, and takedown requests for videos that simulate an identifiable individual, among other measures. It’s important to note that such steps are often rolled out in select markets only. India has warned Facebook and YouTube to take stringent action against deepfakes in accordance with Indian law, and is also drawing up new rules to regulate deepfakes. The new media and broadcast regulation in India, however, is a double-edged sword to watch.
AI Pin - How humane?
The Humane AI Pin has been all the buzz in tech communities this month. ‘End of the smartphone’, ‘AR/XR in your pocket’, ‘sci-fi’... are all the enraptured phrases accompanying this launch. Data ethicist Brandeis Marshall points out what this could mean for privacy, and Forbes questions the absence of accessibility features. Futurist Sinead Bovell, in conversation with strategic foresight professor Alexander Manu, points out where all paths converge in the future. It is worth noting that Sam Altman is a major investor in Humane.
Zuckerberg won’t listen…
…to his own top executives, who kept warning him about threats to child safety and other harms emerging from Meta’s engagement-hungry algorithms. The message of the 40 US states suing Meta for propagating addictive content was loud and clear in this unredacted Massachusetts vs. Meta complaint. The 100-page lawsuit claims that between 2018 and 2022, the company repeatedly chose not to invest in improving young users’ mental health, citing filters and other harmful content as the main pain points. Top lawyers believe that if Meta is found liable, we could see a plethora of new regulations, safeguards, and procedures around social media.
Nor will Meta truly make amends for its historical footprint in engineering elections. Just a couple of years after announcing it would show less political content in people’s news feeds, Meta walked the decision back. From Meta’s POV: another opportunity to make hay while the sun shines.
Free lunches that only cost your soul: Meta now plans to offer an ad-free subscription model to its users in the EU. This is quite the shift from when Facebook would proudly display “Facebook is free and always will be” as you logged in. If you choose not to pay, you agree to be tracked and profiled by Meta. The choice is yours - your currency or your privacy!
Smells musky everywhere
From tech magnate to geopolitical disruptor, Elon Musk is upsetting more than X. He had a lot to say in his conversation with British PM Rishi Sunak around the Bletchley AI Safety Summit. Aside from the customary lip service to AI regulation, even as he unleashed a rogue Grok, he said AI is “a force for good most likely, the probability of it going bad is not zero percent.” Musk’s apparent endorsement of antisemitic and other conspiracy theories has prompted top advertisers to pull out of X, and he has since gone “thermonuclear” on Media Matters over its report.
Chiming in on trustworthy AI
The US has finally woken up to its lack of AI regulation! On October 30th, the White House released its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which touches upon some of the most pressing issues of our time - vulnerabilities in generative AI, risks of harm to civilian populations, national security implications of AI, and guidance on AI watermarking, to name a few.
While the order is lauded as an important step towards AI regulation, many point out that the absence of data privacy legislation remains unresolved and will continue to be the elephant in the room until action is taken.
Now reading, video
Twelve Labs, a San Francisco-based startup, launched Pegasus-1, a video-language foundation model. It can map natural language to what’s happening inside a video - actions, objects, and background sounds - allowing developers to create apps that search through videos, classify scenes, and extract topics from them.
Some of the most obvious applications are content moderation and product tagging. It could even support workplace safety by keeping tabs on compliance in industrial settings like factories.
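The capability Pegasus-1 advertises - matching natural language to scenes in a video - can be illustrated with a deliberately simple sketch. The snippet below is not Twelve Labs’ API: it ranks hypothetical per-scene descriptions (the kind of text a video-language model might produce) by word overlap with a search query, just to show the shape of a text-to-video search workflow.

```python
# Illustrative sketch only: a real video-language model would embed the
# video directly; here, hand-written per-scene descriptions stand in for
# the model's output, and matching is naive bag-of-words overlap.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().split())

def search_scenes(query: str, scenes: dict[str, str]) -> list[str]:
    """Return scene IDs ranked by word overlap with the query, best first."""
    q = tokenize(query)
    scored = [(len(q & tokenize(desc)), sid) for sid, desc in scenes.items()]
    # Keep only scenes sharing at least one word with the query.
    return [sid for score, sid in sorted(scored, reverse=True) if score > 0]

scenes = {
    "scene-01": "a forklift moves pallets across the factory floor",
    "scene-02": "workers without helmets walk near the loading dock",
    "scene-03": "background sounds of machinery and a safety alarm",
}

print(search_scenes("workers near the loading dock", scenes))
# → ['scene-02', 'scene-01']
```

A production system would replace the word-overlap score with embedding similarity from the model, but the query-rank-retrieve loop stays the same.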
Phishing trawler
It’s now established that misguided AI has the potential to harm us beyond our current knowledge and comprehension. So, what happens when AI is weaponised specifically to harm us? How bad can that get?
Hackers have loved phishing emails for years, but AI has made things a whole lot easier for them. What used to take days now takes minutes - and at scale! With AI’s sophisticated data analysis and text-generation capabilities, hackers can send far more convincing phishing messages, causing untold damage.
Gauss who?
The latest to jump on the generative AI bandwagon is Samsung with the launch of Samsung Gauss, named after the famed mathematician Carl Friedrich Gauss, who developed the theory of the normal distribution, popularly known as the bell curve. According to the company, the name "reflects Samsung's ultimate vision for the models, which is to draw from all the phenomena and knowledge in the world in order to harness the power of AI to improve the lives of consumers everywhere."
CDF chips
HackFake hackathon
CDF and Tinkerhub joined forces to organise HackFake, a first-of-its-kind humanities-and-tech hackathon aimed at building innovative solutions to tackle misinformation, particularly in the Indian vernacular. Journalism students of VNS College Konni and Calicut University, along with engineering students and senior AI/ML engineers, built the underlying database and worked through the 36-hour hackathon to assemble the kit. The kit has been kept open-source so that researchers and news organisations can build on it. Mathrubhumi, The Fourth News, and Dool News supported this initiative.
‘Defeating Digital Distractions’ for high school students
CDF conducted sessions on 'Defeating Digital Distractions' at Trivandrum International School (TRINS), sparking lively discussions on micro-trends, cliques, pop culture, and social media's role in identity formation. CDF is also exploring the possibility of creating a ‘Good Tech Squad’ at TRINS as a peer-to-peer support group.
‘Post-truth Communications’ for Mathrubhumi Media School
CDF conducted an interactive session on 'Post-Truth Communications: Role of Media and Information Technologies of the Future' for the new batch at Mathrubhumi Media School (MBMS). The session covered diverse topics, including systemic issues, polarisation, and the impact of Generative AI on the general population. “Media outlets are driven by metrics like clicks and shares, leading to a surge in clickbait and sensationalism, as attention-grabbing headlines take precedence over nuanced reporting,” observed Mr. Shajan Kumar, Dean of MBMS.
CDF at the Indian Psychiatry Society, Odisha state conference
At the 33rd Annual Conference of Indian Psychiatry Society’s Odisha branch, CDF co-founder Nidhi Sudhan spoke on tech’s interactions with mental health. Her session, 'Extractive Technologies: A Race to the Bottom of the Brainstem,' covered topics ranging from behavioural design to the impact of Generative AI. The doctors’ body has expressed intent to commission new research exploring the relationship between mental health and digital media in the Indian context.
CDF at Explorica 5.0
Nidhi Sudhan, CDF co-founder, spoke on 'AI: Promise and Perils' at the LBS Institute of Technology for Women's flagship event, ‘Explorica 5.0.' The session covered the advantages and challenges of AI, touching on the philosophy of AI ethics, AI bias, and systems thinking.
‘Defeating Digital Distractions’ for the students of Karta Initiative
The Karta Initiative, an organisation dedicated to aiding students who face barriers in life, invited CDF to conduct a virtual session on ‘Defeating Digital Distractions’. In the post-session Q&A, the attendees engaged with us to ask crucial questions about fact-checking, the dark web, and managing emotions when interacting with digital media.
Job openings at CDF!
CDF is a non-profit tackling techno-social issues like misinformation, online harassment, online child sexual abuse, polarisation, data and privacy breaches, cyber fraud, hypertargeting, behaviour manipulation, AI bias, election engineering, etc. We do this through knowledge solutions, responsible tech advocacy, and good tech collaborations under our goals of Awareness, Accountability and Action.