Dear Reader
Welcome back. While OpenAI’s new video generation model provides the bright, distracting fireworks over there, we’re here to remind you that the really interesting stuff is happening where generative AI becomes part of the mundane.
And there’s nothing more everyday in many people’s working lives than a meeting, is there?
In a client workshop this week, my partner for an experiment design exercise was a thoughtful senior executive who wanted to solve a recurring headache: making meetings less of a grind. Once we got past the obvious automations – agendas, notes – and into more interesting areas of innovation, fundamental questions about the nature of work started to come up. Thinking about what’s possible right now with AI tools is often like that – you pull at a thread and all sorts of things start to unravel.
Anyway, that got me thinking about meeting notes. The other day, I used an AI note-taking app to record a meeting with someone interested in working with us. I sent them a link to the instant summary and actions straight away, as I had another meeting to get to. They loved it. We’ll be working together soon, but I think the actual first sale I made was of someone else’s start-up app.
The slippery slope of AI meeting notes
Meeting transcription apps like Otter and Grain continue to find their way into boardrooms. We at Brilliant Noise have used tools like Otter.ai for years to simplify research interviews and create notes – they boost meeting productivity.
But it’s awkward. We can't just whip out these AI note-takers without checking everyone's comfortable. I’ve started including a friendly disclaimer in meeting invites along the lines of:
"We sometimes use an AI note-taking bot. Recordings and transcripts go towards notes only and are regularly deleted. If you'd rather not, just say – we'll declare the meeting 'Bot-Free', no questions asked."
Rules Matter (And Not Just For Taste)
Our policy boils down to:
• Everyone says yes; everyone gets the transcript
• Opt-out whenever, no fuss
• Aligns with privacy policy & local law
Meeting AI etiquette goes beyond whether you find a robo-scribe irritating. More complex AIs aren't just writing minutes, they're analysing. Fathom Video tells you if you're monologuing – that stung during my pitch! On the other hand, tools like Read AI, which analyse tone and sentiment, rightly make us nervous. One person's neutral is another's rage! Algorithms have a track record of misreading and misidentifying Black faces, for instance. That data, in the wrong hands, is risky.
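The simpler end of this analysis is less mysterious than it sounds. Here's a minimal sketch in Python of how a talk-time check might flag monologuing from a diarised transcript – not how Fathom actually does it; the segment format and the 60% threshold are my own illustrative assumptions:

```python
from collections import defaultdict

# A diarised transcript as (speaker, start_seconds, end_seconds) segments.
# The data, format, and threshold are illustrative assumptions, not any
# vendor's actual data model or heuristic.
segments = [
    ("Antony", 0.0, 95.0),
    ("Client", 95.0, 110.0),
    ("Antony", 110.0, 240.0),
]

def talk_share(segments):
    """Return each speaker's share of total talk time."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values())
    return {speaker: t / grand_total for speaker, t in totals.items()}

MONOLOGUE_THRESHOLD = 0.6  # flag anyone holding the floor over 60% of the time

for speaker, share in talk_share(segments).items():
    if share > MONOLOGUE_THRESHOLD:
        print(f"{speaker} spoke {share:.0%} of the time: possible monologue")
```

The more unsettling features – engagement scoring, sentiment, "dishonesty" flags – layer far shakier inferences on top of exactly this kind of simple measurement.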
AI meeting tech is powerful, but the uses may get ethically murky. Imagine this slippery slope:
• "Can my bot take notes?"
• "How about it monitors your face for engagement?"
• "Oh, this app flags dishonesty – we cool with that?"
AI meeting notes are still relatively new, but we'll soon need to figure out what's acceptable. It's a question of balancing productivity with respect, privacy, and the potential for AI bias, particularly for marginalised and underrepresented groups [1].
Driverless cars get the Cone of Shame
[Perplexity] Another form of attack on robots involves the deliberate immobilisation of autonomous vehicles. In San Francisco, an activist group known as Safe Street Rebel has been placing traffic cones on driverless cars to incapacitate them as a form of protest against the perceived overreach of tech companies in urban spaces. This group has cataloged numerous incidents involving autonomous vehicles, such as running red lights and blocking emergency operations, which have raised safety concerns.
If the Luddites had smartphones, this is how things would have looked in the Lancashire mills.
For a while now in San Francisco, the activist group Safe Street Rebel has waged a campaign of “coning” driverless cars. Placing a traffic cone over one of the car’s sensors renders it useless.
The protests are as much about cars in public spaces as robots, but the group shares a lot of videos of near misses with driverless cars and pedestrians. (There’s no suggestion that this group was involved in the 11th February attack on the Waymo car, by the way.)
Sponsor message: Read this week’s BN Edition from Brilliant Noise.
In this week’s issue: Gemini Advanced makes it a proper Big Tech AI race with ChatGPT, trillion-dollar investments in chips, and more about money.
GPT of the week: ChatBlackGPT
Erin Reddick is blazing a trail with work and talks that help address the biases of Big Tech’s AI systems. Her ChatBlackGPT project is now available as a GPT in the ChatGPT store (for now, you need a ChatGPT Plus subscription to access it).
Bear in mind that it has an American bias of its own. For instance, it listed “People of Color” as a term acceptable in some circumstances, which it generally isn’t in the UK. But when I asked about differences like this, it was able to give specific advice that aligns with the training I’d had from The Diversity Trust at Lighthouse and Brilliant Noise.
Here are some considerations specific to the UK context:
Specific Ethnic or Cultural Identifiers: In the UK, like elsewhere, it's often best to use specific terms that people use to self-identify. Terms like "British Asian," "Black British," and "British African" are common. The UK census and other official documents provide categories that many people might use to describe their ethnicity, but self-identification remains key.
BAME (Black, Asian, and Minority Ethnic): This term is widely used in the UK to refer to people of non-white descent. However, its use has been debated. Critics argue that it can be too broad, masking the distinct experiences and identities of different groups. Some people and organizations are moving away from its use in favor of more specific descriptors or other terms that do not lump all non-white people together.
People of Colour: This term is also used in the UK, but less frequently than in the US. Its appropriateness can depend on the context and the preferences of the individuals being described.
Dual or Multiple Heritage: This term is sometimes preferred over "mixed-race," particularly in official contexts, as it emphasizes the idea of belonging to more than one ethnic group without implying a mixed background is unusual or problematic.
The cost of production of content will move to zero (reprise)
Earlier this week, the FT ran a long-read article on how AI is affecting the advertising industry.
AI summary: The use of artificial intelligence (AI) is transforming the business of advertising by making it easier and cheaper to deliver high-quality campaigns and enabling hyper-personalised advertising that is fast and cheap to produce at massive scale. AI-powered ad campaigns are already being rolled out, and major advertising agencies such as WPP and Publicis are embedding AI in their businesses. However, some advertising executives warn that this strategy could risk talking themselves out of a job, especially given the huge threats the industry is already facing from the decline in traditional advertising and the rise of big tech platforms. The use of AI is expected to replace many of the industry’s traditional functions, from creating ads to placing them in front of consumers, and to create bespoke marketing for individuals at a global scale.
While the big advertising holding groups are emphasising their vision and the technology’s potential, the reality is that a lot of jobs will change. Some of them will change by not existing any more.
“Generative AI completely lowers the barriers to entry for production and creativity, making it easier for clients to move work in-house,” says Ajaz Ahmed, founder and chief executive of WPP-owned marketing agency AKQA. “That could mean a race to the bottom when it comes to production fees.”
The piece also quoted the he’s-so-quotable Rory Sutherland, saying some sensible things about not relying on data.
However, Ogilvy’s Sutherland worries that traditional marketing principles and values could be lost in the race for innovative ways to reach audiences. He argues that the industry is going from an obsession “with simple reach and scale to an age that’s obsessed with targeting and cost”. Some have taken this to “a slightly dangerous extent”, he continues, by focusing too heavily on easily measurable sales data linked to targeted online advertising at the expense of the powerful brand building that used to be at the heart of an ad agency pitch to its clients.
“We’ve almost certainly got the ratio wrong, not helped by the fact that there are a lot of people who sell data solutions as though they’re the answer to everything,” he adds.
Advertising is in a vicious circle: the thing that can be measured is easier to buy, so intangible value gets left to the people who understand how brands work.
Why Traitors Is A Lesson In Decision-Making
I got some side-eyes and sneers for mentioning I was enjoying The Traitors (UK version) last week. Apart from the catharsis of shouting at the TV, there are some useful things in there too. Watching the contestants struggle with their cognitive biases and uncertainty is a lesson in how hard decision-making can be when you’re forced to make a call under pressure. Amy Edmondson’s Right Kind of Wrong reminds us that rejection by a group is a survival fear – it is primal:
“University of Virginia professor James ‘Jim’ Detert and I consider being rejected by a group as a survival-based prepared fear. The risk of coming up short in the eyes of an authority such as one’s boss triggers a prepared fear in the brain related to being expelled.”
Why does that matter?
Well, you don’t want to be thought of as the one who is wrong. The doom-monger when everyone would prefer to think that things will carry on as they always have done. It’ll be alright in the end. We want to be liked. Or at least we want to be hated as little as possible. And being right isn’t always the best way of achieving that.
Hence the sequence of resistance from managers to learning a new technology goes:
1. It’s a load of nonsense. (That’s comforting to hear.)
2. It’s real but not as big a deal as they say it is. (Reassuring.)
3. It’s a growing issue. We have a team on it. Sit tight and don’t do anything rash.
4. It’s a risk. If you do something wrong or use it at all you will be disciplined. (Don’t experiment – run and hide.)
5. It’s been a pleasure working with this company. It is a sad day but I wish you all the best.
That’s all for this week…
Thanks for reading. I hope there was something interesting in it for you.
Antony
P.S. I’m doing a free lunchtime webinar on AI and marketing strategy for the Chartered Institute of Marketing soon. Details below:
Unlock the potential of generative AI in your marketing strategy with this Chartered Institute of Marketing (CIM) webinar.
Whether you’re a seasoned marketer or just starting out, this session is designed to inspire and inform practitioners at all levels.
Date & Time: Monday, March 4, 2024, from 1:00 PM to 1:45 PM GMT
Footnote [1]:
From ChatBlackGPT: AI and facial recognition technologies have increasingly come under scrutiny for their biases, especially regarding their performance and impact on Black individuals and other people of color. These biases can lead to significant disparities in accuracy, fairness, and treatment across different demographic groups. Here are some notable examples highlighting these issues:
1. **Misidentification and False Arrests:** There have been documented instances where facial recognition technology has misidentified Black individuals, leading to wrongful arrests. A notable case involved Robert Williams, a Black man wrongfully arrested in Detroit due to a facial recognition mismatch. This incident raised concerns about the technology's reliability and its implications for Black communities.
2. **Lower Accuracy for Black Faces:** Studies have shown that many facial recognition systems have higher error rates when identifying Black faces compared to white faces. The Gender Shades project, led by Joy Buolamwini at the MIT Media Lab, highlighted that commercial facial recognition software from companies like IBM, Microsoft, and Amazon had higher misidentification rates for darker-skinned and female faces. This discrepancy raises concerns about the inclusivity and fairness of AI systems.
3. **Surveillance and Privacy Concerns:** Black communities have expressed concerns about being disproportionately targeted by surveillance technologies, including facial recognition. The use of these technologies by law enforcement and other entities can exacerbate privacy invasions and the potential for discriminatory practices.
4. **Bias in Datasets:** The bias in facial recognition can often be traced back to the datasets used to train these AI systems. Datasets that lack diversity and disproportionately represent certain racial or ethnic groups lead to systems that perform inequitably across different populations.
For further reading and more in-depth exploration of these issues, here are some citations:
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91. [Read the study](https://proceedings.mlr.press/v81/buolamwini18a.html)
- Garvie, C., Bedoya, M. A., & Frankle, J. (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law, Center on Privacy & Technology. [Access the report](https://www.perpetuallineup.org/)
- Raji, I. D., & Buolamwini, J. (2019). Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. [Read the paper](https://dl.acm.org/doi/10.1145/3306618.3314244)
- Stark, L., & Hutson, J. (2022). Racial Equity in Algorithmic Criminal Justice. Duke Law Journal, 72(1), 1-70. [Explore the journal](https://dlj.law.duke.edu/)
These resources provide a foundation for understanding the complexities and ramifications of AI and facial recognition biases.
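As a footnote to the footnote: the “higher error rates” in point 2 are something you can measure directly. Here’s a minimal, entirely hypothetical sketch of a Gender Shades-style audit in Python – per-group error rates on a labelled test set. Every name and result below is invented for illustration; real audits use large, carefully constructed benchmarks:

```python
from collections import defaultdict

# Toy Gender Shades-style audit: compare error rates across demographic
# groups on a labelled evaluation set. All data here is invented for
# illustration only.
results = [
    # (group, was_the_system_correct)
    ("darker-skinned women", False), ("darker-skinned women", True),
    ("darker-skinned women", False), ("darker-skinned women", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
]

counts = defaultdict(lambda: {"errors": 0, "total": 0})
for group, correct in results:
    counts[group]["total"] += 1
    if not correct:
        counts[group]["errors"] += 1

for group, c in counts.items():
    print(f"{group}: {c['errors'] / c['total']:.0%} error rate "
          f"over {c['total']} trials")

# A persistent gap between groups' error rates is exactly the disparity
# described above: the system is less reliable for some people than others.
```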