The Cookie Jar, August 2023
DPDPA gets real; Red Team army attacks AI Systems; Monopolies move to monetise AI music; AI models that comply with the EU AI Act...
News Chunks
All aboard, almost! (DPDPA)
India’s DPDP bill made a final dash through Parliament this month, just as the data privacy train was set to leave (or had it left?) the building. But hey, the world’s largest democracy now has a Digital Personal Data Protection Act. That counts for a lot. We don’t want to celebrate too early, but if implemented right, could this mean the end of those spam emails and messages, at least?
Highlights: Entities using citizens’ data will have to seek explicit ‘consent’, and consent gets a little more respect than on a date, i.e., saying ‘yes’ to going out will not be assumed as consent to go all the way. However, the power to control your own data comes with the responsibility to make informed decisions, more so if you want to avoid fines for making “false or frivolous” complaints. There will be penalties for data breaches, meaning businesses can no longer be callous with data security and must process data in necessary and proportionate ways, barring “legitimate” exceptions. This remains a grey area with wide room for interpretation. In addition, users won’t be compensated for the loss of their data and privacy in case of a breach. Using children’s data will require clear consent from parents/guardians, although in a country obsessed with sharenting and without sufficient avenues for information literacy, the knowledge needed to prevent online dangers is something citizens must still seek out on their own.
The Minister of State for IT, Mr. Rajeev Chandrasekhar, has addressed some of the specific concerns about the DPDP Act. Mind you, the last word rests with the Central Government, meaning ultimately ‘bauji’ has to say ‘ja citizen, jee le apni zindagi’ (‘go, citizen, live your life’). In India, autonomy and privacy have cultural prejudices to beat as well.
Consider this interesting Indian scenario if you would like to probe the arguments further.
Fired! Nay rekindled.
Twitter’s fired ethics team bands together as Blue Owl to ‘shape major debates about the internet, artificial intelligence and climate’. The team is helmed by policy veteran Colin Crowell, who along with other US and global heavyweights, aims to help companies, investors and start-ups navigate internet policy, with a commitment to “an open internet vision” and “strong ethical standards on privacy and data protection.”
Good start. Follow-through?
Last month we told you about seven tech giants agreeing on self-regulation for AI development. Critics are pointing out that this is probably too good to be true. Generative AI systems produce all kinds of content, but these commitments only cover audio and visual content, completely ignoring the text content that is already infusing garbage into our knowledge portals.
Companies are also not obliged to share the datasets they use to train these models, keeping regulators guessing. Other commitments, such as sharing information with governments, civil society and academia, facilitating third-party discovery of vulnerabilities, publicly reporting AI systems’ capabilities and limitations, and deploying AI to help solve society’s greatest challenges, leave a lot of wiggle room for interpretation.
AI models that comply with the EU AI Act
None! Yes, none of the US-based foundation AI models comply with the most advanced AI regulation - the EU AI Act - as per a Stanford study. That means none of them can operate in the EU as things stand. Whether the EU gets left out of the AI race, or this becomes a catapult moment for responsible tech innovation in the EU and UK, is one to watch. Going by other EU approaches, the latter is likely, but in due time.
More money for monopolies or music to musicians’ ears?
Google and Universal Music are joining hands to monetise AI-generated music after recognising there’s no fighting it, according to an FT report. The key word is ‘legitimising’: licensing (top) artistes’ voices for fair use in AI. If this indeed helps creators stake a claim to their work in AI versions, could this be a model for other image and text generators to follow?
Red team army attacks AI systems
50 minutes. 156 laptops. About 3,000 hackers. Las Vegas hosted the largest-ever public red-teaming exercise at the annual DEF CON, aimed at discovering the security weaknesses of LLMs. Backed by the US government in an attempt to reveal vulnerabilities of new AI systems for further training, the contest saw the Carnegie Mellon University team emerge as winners, with Taiwan’s TWN48 placing third. One side risk at the event, though: your phone or laptop could be hacked! Participants were advised to turn off their devices and watch their own backs, giving us ample reason to believe some of the best hackers were in attendance.
AI tools - hits and misses
If you want to query the DPDP Act for quick, simple answers, India’s own Jugalbandi from OpenNyAI invites you to play with and stress-test this experimental tool. We tested it and found it delivered fairly well on most queries, barring some struggle with layperson language.
Learn science, the interactive way - play around with this experimental interactive tool by Google that dives into Leonardo da Vinci’s notes and works.
Meta open-sources AudioCraft for everyone.
MIT’s ‘PhotoGuard’ protects your images from AI scraping and editing.
Chatbots trained to emulate and perform human-like empathy, compassion, and righteousness are lining up use cases, but could have dire consequences for trust and social isolation.
CDF Chunks
Information literacy in Malayalam and Hindi
Litt, CDF’s online information literacy initiative, supported by FactShala, has launched two new information literacy courses in Malayalam and Hindi. These courses empower you to navigate the internet and digital media in an informed, safe and productive way.
Want to foster safer, more productive online experiences? Individuals, educational institutions, NGOs, enterprises, PTAs, and other communities can sign up or collaborate with us to take these courses to more people in their region.
CDF’s Responsible AI primer on Day 1 of maker-fest
The OpenNyAI Maker Residency - a transformative and immersive five-day maker space - in partnership with TinkerSpace, brought techies, designers, entrepreneurs, judges, lawyers and students together for a ‘generative event’ fostering tangible new behaviours, structures and solutions at the intersection of AI and justice. The communities banded together as problem-solvers, mentees, mentors, peers and experts to mindfully arrive at meaningful ways of harnessing emerging technologies to scale solutions with Indian social challenges in mind. CDF conducted a sidebar on the philosophy and practice of Responsible AI on Day 1 of the maker-fest.
May the truth prevail
CDF conducted an interactive session with trainees and senior journalists of The Fourth News, exploring the impact of emerging information & communication technologies (ICT) and post-truth on news and journalism. During the session, we delved deep into the macro tech developments shaping today’s news media landscape.
Mine? No, mind our data
CDF conducted a session on the importance of data governance and information security for the Government of Kerala’s eHealth Kerala project team. eHealth is the State’s digital health project, aimed at making health services equitably accessible to everyone in the State through a Unique Health ID (UHID) linked to Aadhaar. CDF’s session focused on why citizens’ Aadhaar and sensitive biometric and health data should be sought and processed with the utmost protection, care, and accountability.
Bring your superpowers!
CDF is a non-profit tackling techno-social issues like misinformation, online harassment, online child sexual abuse, polarisation, data & privacy breaches, cyber fraud, hypertargeting, behaviour manipulation, AI bias, election engineering, etc. We do this through knowledge solutions, responsible tech advocacy, and good tech collaborations under our goals of Awareness, Accountability and Action.