
AMA with Professor Nigel Shadbolt and Roger Hampson about their book: As If Human: Ethics and Artificial Intelligence – Tuesday 26 November 2024 6pm-8pm

42 replies

SophiaCMumsnet · 25/11/2024 15:00

Hello,

We’re excited to announce that Professor Sir Nigel Shadbolt and Roger Hampson, authors of As If Human, will be joining us for an Ask Me Anything on Tuesday evening to discuss their book.

Nigel is principal of Jesus College, Oxford, and professor in the Department of Computer Science at the University of Oxford. He is a leading researcher in Artificial Intelligence (AI). He is chairman of the Open Data Institute which he co-founded with Sir Tim Berners-Lee. He was knighted in 2013 for services to science and engineering. He lives in Lymington, Hampshire.

Roger is an academic and public servant and former chief executive of the London Borough of Redbridge. He lives in Greenwich, London. Together in 2018 they published The Digital Ape: how to live (in peace) with smart machines, described as a ‘landmark book’.

As If Human, their most recent book, offers a new approach to the challenges surrounding artificial intelligence, arguing that we should assess AI actions as if they came from a human being.

Intelligent machines present us every day with urgent ethical challenges. Is the facial recognition software used by an agency fair? When algorithms determine questions of justice, finance, health, and defence, are the decisions proportionate, equitable, transparent, and accountable? How do we harness this extraordinary technology to empower rather than oppress?

Despite increasingly sophisticated programming, artificial intelligences share none of our essential human characteristics—sentience, physical sensation, emotional responsiveness, versatile general intelligence. However, Nigel and Roger argue that if we assess AI decisions, products, and calls for action as if they came from a human being, we can avert a disastrous and amoral future. The authors go beyond the headlines about rampant robots to apply established moral principles in shaping our AI future. Their new framework constitutes a how-to for building a more ethical machine intelligence.

To celebrate the publication of As If Human, Yale University Press will be giving away copies to the first 10 users to post questions on the AMA thread. Winners' information will be used by Yale University Press to administer the giveaway in accordance with their privacy policy which you can read here. The giveaway is open to users resident in the UK only and books will be posted out within two weeks after the AMA thread has closed.

Please post your questions below. Nigel and Roger will be answering questions tomorrow evening (between 6pm-8pm). As always, please remember our guidelines - one question per user, follow-ups only if there’s time and most questions have been answered, and please keep it civil.

Thanks,
MNHQ

curlywurly2025 · 25/11/2024 15:47

Hi Roger and Nigel,

Thank you so much for doing this.

Given that AI lacks human characteristics like sentience and emotional responsiveness, how do you propose we effectively apply human moral principles to evaluate its actions, especially in high stakes fields like justice or healthcare? Are there specific principles or frameworks you believe are most suitable for this purpose?

CharismaticMegafauna · 25/11/2024 16:05

How do you develop the moral framework to use as a baseline for assessing AI decisions? Can you give some examples?

Gunnersforthecup · 25/11/2024 16:10

Thank you for this opportunity to discuss AI.

Another issue is the potential for AI to develop some level of independent consciousness.

Do you see this as a risk, at any level?

If this is theoretically possible, is there a risk that the priorities of entities with artificial intelligence might be rather different from human moral priorities? And how would you suggest this is managed?

heldinadream · 25/11/2024 16:15

There's a saying that I've seen that goes something like - we thought that robots would do the housework to free us up to write books and paint pictures, but it seems as though AI is painting pictures and writing books and we're left with only the grunt work. I feel I am in mourning because I see human creativity potentially becoming devalued and that there's no way of stopping this happening. I'd love to hear your thoughts on this aspect of AI. Thank you so much.

Monsteronthehill · 25/11/2024 17:26

Hi Nigel and Roger. Thanks for doing an AMA. If we are going to judge AI as if it were human, could you explain how this would work in everyday situations, like when AI makes a mistake or gives a bad recommendation? Who would be responsible then? I'm thinking of the medical field for this but I guess it would apply to many situations.

AuxArmesCitoyens · 25/11/2024 17:47

Hi Roger, hi Nigel, thanks for doing this. I am in a line of work (translation) that is being directly impacted by AI. I would like to hear your thoughts on the environmental costs of AI and how its use can ever be considered ethical in a world aiming for net zero, given the energy costs of AI are predicted by the IEA to equal that of Japan by 2026.

AuxArmesCitoyens · 25/11/2024 18:10

I have a second question too, if that is OK, about the ethics of LLMs: is there an ethical way a) to tag the data that doesn't rely on the exploitation of ghost data workers and b) to remunerate human creators for their IP (and let them opt out of data harvesting if they prefer)?

AuxArmesCitoyens · 25/11/2024 18:16

It is peak dinner and bathtime but I will be watching this space with interest later this evening!

tilypu · 26/11/2024 08:08

Do you think AI could ever have autonomy for decision making within important sectors such as healthcare and law, and if so, how do you ensure that it remains free from bias?

AuxArmesCitoyens · 26/11/2024 08:34

All great questions so far!

wiwergoch · 26/11/2024 13:58

Hi Roger and Nigel, I'd love to hear your thoughts on the inherent biases within AI - i.e. the idea that it's inherently sexist because the machines learn from content dominated by men. How would you incorporate that into the moral framework? Thanks!

AuxArmesCitoyens · 26/11/2024 16:05

Another great question - also the risk of epistemicide given that it draws so heavily on a colonially dominant language (approx. 93 per cent of GPT-3’s training data is monolingual English).

CharismaticMegafauna · 26/11/2024 17:38

Some very interesting questions so far; I'm looking forward to the discussion.

I used to proofread student essays written by non-native speakers of English. Over the last couple of years, the work has almost completely dried up, which I expect is largely due to ChatGPT.

Flughafenkoenigin · 26/11/2024 17:52

wiwergoch · 26/11/2024 13:58

Hi Roger and Nigel, I'd love to hear your thoughts on the inherent biases within AI - i.e. the idea that it's inherently sexist because the machines learn from content dominated by men. How would you incorporate that into the moral framework? Thanks!

That was my question too. Thanks

AuldCurmudgeon · 26/11/2024 18:02

Do you think conscious AI is possible? Presumably, it couldn't be programmed but would have to emerge from the right stuff. Can you envisage this happening? What's special about our brain's stuff that allows it to generate consciousness? Can we replicate this?

NigelandRoger · 26/11/2024 18:12

CharismaticMegafauna · 25/11/2024 16:05

How do you develop the moral framework to use as a baseline for assessing AI decisions? Can you give some examples?

Thanks for your question - we argue in the book that we already have a number of moral frameworks, and our "As If" metaphor invites us to use many of the principles embodied in these frameworks. In Virtue Ethics, concepts such as civility or fairness are important. If we hold our AI systems to these standards, then we expect them to be courteous, proportionate, transparent and unbiased. If an AI is making decisions on behalf of an insurance company, these values should apply.

NigelandRoger · 26/11/2024 18:21

AuldCurmudgeon · 26/11/2024 18:02

Do you think conscious AI is possible? Presumably, it couldn't be programmed but would have to emerge from the right stuff. Can you envisage this happening? What's special about our brain's stuff that allows it to generate consciousness? Can we replicate this?

Not any time soon, certainly not our kind of consciousness. We are both materialists - and clearly we are existence proofs that it can arise - but what the right stuff is remains contested. We discuss in the book the nature of our embedding in the world, our embodied existence, our lived experiences - all of which constitute our sense of self.

NigelandRoger · 26/11/2024 18:26

CharismaticMegafauna · 26/11/2024 17:38

Some very interesting questions so far; I'm looking forward to the discussion.

I used to proofread student essays written by non-native speakers of English. Over the last couple of years, the work has almost completely dried up, which I expect is largely due to ChatGPT.

Yes, probably - proofreading is the kind of skill that generative AI can display. Reframing content and arguments is more challenging. If content is substantially machine generated, we need to know - one of our principles is that "a thing should say what it is and be what it says".

NigelandRoger · 26/11/2024 18:33

Flughafenkoenigin · 26/11/2024 17:52

That was my question too. Thanks

Modern generative AI systems like ChatGPT are trained on huge swathes of content, and there are inherent biases in this content - there is a great deal of "fine-tuning" to eliminate responses that might arise from such bias. This is a constant challenge: we have to invest significant resources to identify and correct this bias, and there is no ideal dataset in this regard. Our moral frameworks have to recognise that this is a persistent problem requiring constant vigilance.

NigelandRoger · 26/11/2024 18:35

AuxArmesCitoyens · 26/11/2024 16:05

Another great question - also the risk of epistemicide given that it draws so heavily on a colonially dominant language (approx. 93 per cent of GPT-3’s training data is monolingual English).

Interestingly the challenge going forward is novel human content that the machines have not seen before - or else generated themselves. There will be a premium on authentic human content drawing on the contributions from under-represented cultures.

NigelandRoger · 26/11/2024 18:42

tilypu · 26/11/2024 08:08

Do you think AI could ever have autonomy for decision making within important sectors such as healthcare and law, and if so, how do you ensure that it remains free from bias?

In the book we echo John Tasioulas (Director of the Oxford Institute for Ethics in AI), who argues for the right to a human decision in the age of AI - this is one of our key principles too. There is going to be increasing use of AI decision making in high stakes situations; here we should regard the system as augmenting and assisting decision making that is always overseen by humans.

Dearover · 26/11/2024 18:48

Are the big tech companies engaging with the need for ethics specialists to work alongside their AI development teams? If so, are they treating it in a similar way to greenwashing or are they actually acting on concerns?

NigelandRoger · 26/11/2024 18:53

AuxArmesCitoyens · 25/11/2024 17:47

Hi Roger, hi Nigel, thanks for doing this. I am in a line of work (translation) that is being directly impacted by AI. I would like to hear your thoughts on the environmental costs of AI and how its use can ever be considered ethical in a world aiming for net zero, given the energy costs of AI are predicted by the IEA to equal that of Japan by 2026.

The ability of these systems to produce reasonable translations across many languages is rather impressive. It is enabling all sorts of real-time interaction between people, each speaking their own native language. There is still subtlety, expertise, experience, and wisdom that AI translation systems lack. Would you agree?

As to the environmental impact - this has to be seriously addressed. There are different ways of mitigating it. One is to make our computations more energy efficient; another is to make our models smaller whilst retaining most of their performance. Simply building ever larger AI models may encounter a law of diminishing returns. The business models for these systems are in some cases also precarious. But environmental impact is an increasingly important consideration.

NigelandRoger · 26/11/2024 19:01

AuxArmesCitoyens · 25/11/2024 18:10

I have a second question too, if that is OK, about the ethics of LLMs: is there an ethical way a) to tag the data that doesn't rely on exploitation of ghost data workers and b) remunerate human creators for their IP (and let them opt out of data harvesting if they prefer).

The question of content exploitation is very topical. One approach would be to equip data with its terms of use and its provenance. Responsibly and sustainably sourced data could become a thing! This would potentially allow individuals to opt out of data harvesting. Remuneration for human-created IP is, or will be, the source of litigation. Some correction could occur in the same way it has with music streaming services such as Spotify.

Changedforthetoday · 26/11/2024 19:06

I’m so excited about this. My work is dipping its toe into AI right now and we are trying to develop organisational frameworks and ethics for its use. I am interested to know how you think the NHS will address the use of AI, both for clinical decision making and for supporting the huge swathes of administration the NHS churns out. What should we as users of the NHS ask about how/if they use AI and what it is doing to protect our data?
