

AIBU to wish people would stop writing professional emails with Chat GPT?

278 replies

4pmfireworks · 25/11/2024 04:45

One of my managers writes absolutely everything with Chat GPT and as a result, all her emails are oddly formal and often get people's backs up. The tone is all wrong. I don't think she realises how badly she is coming across - and most of the team don't realise that the reason her communication so lacks warmth and a human touch is that she's telling AI what she wants to say.

She even once sent an email to me to let me know that "Marie Jones (your team leader) will advise you on this matter separately." Oh, THAT Marie Jones?! My team leader?! The one who I share an office with?! Thank God you included her surname and clarified her role or I would not have had a clue which Marie (the only one who works with us) you were talking about.

I've just had a general class update from my child's teacher that has been written with Chat GPT - I guess it saves time and I don't really blame him, but I do find it cringy. Once you spot it, it's so obvious. I would be embarrassed to send it.

I should add that I'm not always entirely sure why it's obviously written by AI. The adjectives are a bit off I think. And the sentence structure is recognisably formulaic and always rather longer than a human tends to write.

If you do this at work, you should know that some of the recipients know exactly what you're doing, and it doesn't look great.

Zanatdy · 25/11/2024 06:37

Chat GPT is useful (I have never used it, but sometimes use a similar one called Goblin), but you shouldn’t be using it to communicate with your direct colleagues. It’s always obvious when someone has used it for a job application. You need to go in and get rid of words you’d never use; I’ve noticed AI uses certain key words a lot that people don’t use day to day. ‘Honed’ is one it seems to love.

I guess it’s difficult in your situation unless someone wants to tell her. Could someone have a discreet word with her manager?

SoftPlaySaturdays · 25/11/2024 06:40

pinksquash13 · 25/11/2024 06:00

@Bjorkdidit it absolutely does. Say I'm writing a newspaper report about the eruption of Mount Vesuvius in 79AD. It would take me ages to research, structure and then write it. Chat gpt does it in seconds and yes I make edits but I find it's pretty good. I don't need it to be 100% accurate as I'm teaching 9 year olds writing skills mostly. So I guess that helps. I don't find it oddly worded but I can change bits I don't like.

But this is the exact sort of thing it's absolutely shit at. It doesn't research anything; it gives a response that looks like a researched answer to a question. It will make up facts, dates, scholars, references. It simply doesn't care as long as it looks "answery".

You'd have to check every single sentence individually for accuracy, plus probably rewording it too. I just don't see how it saves time.

There are endless threads on social media of librarians spending hours searching for rare out-of-print books only to find they are imaginary, because the person asking generated their reading list with AI.

Don't use it for anything factual that you don't already know about yourself, it just produces rubbish.

taxguru · 25/11/2024 06:42

This is a prime example of letting AI run wild: a tax tribunal case where a taxpayer mistakenly thought AI-generated precedent cases were real.

https://www.lawgazette.co.uk/news/ai-hallucinates-nine-helpful-case-authorities/5118179.article

This kind of thing also illustrates the real risk of putting real data into AI, which it may pick up and use or refer to in responses to other people. Confidentiality is going to be a massive problem in the future if people glibly put personal details or confidential business details into an AI generator.

AI hallucinates nine ‘helpful’ case authorities

Judge accepts that appellant was unaware citations provided by 'friend in a solicitor's office' were fabrications.


BoobyDazzler · 25/11/2024 06:43

Guavafish1 · 25/11/2024 06:30

I love it as someone who is dyslexic!

I use it as a spelling and grammar checker only and then listen to what has been rewritten. I always put my own work in first… then ask for a UK grammar check.

You can change the tone to make it less formal.

To be honest, emails can come across as rude even without ChatGPT.

As someone else with dyslexia I agree! I use it all the time but only to check my own work as I have a natural tendency to write a bit like Yoda, and struggle with punctuation.

Bjorkdidit · 25/11/2024 06:44

If people are using AI in job applications it's all going to fall apart at the interview anyway when it becomes apparent that the applicant has misrepresented their knowledge and competence.

I've just asked ChatGPT a couple of questions relating to my job and after a couple of tweaks in the question it came up with material that to the layperson or even knowledgeable beginner would look plausible but as a subject matter expert I could see several inaccuracies.

It also seemed to think I'd asked 'tell me everything you know about X' when I'd actually asked it to answer one simple question about one particular aspect of the field.

SoftPlaySaturdays · 25/11/2024 06:46

@Diomi Honestly, it does not summarise long things well. It will hallucinate things that aren't there, and miss important elements. It produces something that looks like a summary.

We have this issue with our students summarising articles and the summaries are awful.

GrammarTeacher · 25/11/2024 06:47

pinksquash13 · 25/11/2024 05:41

@VashtaNerada I don't see the problem. It's great for writing models. Are you too good for a time saving resource?

I do occasionally use it to write emails and although I can see where you're coming from, I don't think I care because it saves me time. I would definitely edit though.

It is not great for writing models. It's really bad at creative writing as it's not truly generative in the way humans are. It's not imaginative. It doesn't think. It scrapes previous work and uses the most likely next word. It is therefore cliched.
When it comes to analytical work it has the appearance of analysis but doesn't actually say very much.

56Chandeliers · 25/11/2024 06:47

The only use I’ve found for it so far is when I’m looking at a blank page for a presentation or document and have no idea where to start.

Emails, essays, articles where I know what I want to say are different. Prompting, fact-checking and editing are all tedious and I’d rather just do the whole thing myself. This is before you get onto the use it or lose it aspect of researching and writing. The thought of becoming skilled at bloody prompting but useless at actually conceiving and executing something is depressing.

And I can’t see how you’d save much, if any, time in the Mount Vesuvius example. I certainly wouldn’t rely on it for facts. I used it for a similar task relating to Roman emperors and asked a question based on a very specific period - it gave a bland, unoriginal answer based on emperors outside that period.

No doubt it will get better and can then be used for dull tasks, but right now the main effects seem to be unleashing a tidal wave of dross and deskilling.

12BottlesOfVintageChampagne · 25/11/2024 06:48

A colleague was recently on an online interview panel for an academic role, and it quickly became apparent that the interviewee was using AI to generate their answers. They claimed technical difficulties, said "Sorry, can you repeat that?" each time, and then gave oddly formal replies that were very obviously being read from a screen. They didn't get the job.

Bigredcombine · 25/11/2024 06:49

I have to deal with complaints that come in to my organisation. Sometimes they are fair enough and I write an email apologising and give them their money back. Sometimes (more and more often nowadays!) people are just chancing their arm and seeing what they can get away with. On these occasions ChatGPT is my friend. I don't want to waste my time carefully crafting an email when ChatGPT will tell them, in too many words, why they won't be getting a refund. It's also good for 'sorry you didn't get the job' emails.
But yes - emails to colleagues, nah, don't do that.

MollyButton · 25/11/2024 06:50

We're not allowed ChatGPT at work but can use Microsoft Copilot.
Last week I asked it to help with an email to someone senior - really as a test, and to see if my tone was correct.
The biggest problem was it insisted on starting "I hope this finds you well...", even though this is someone I communicate with several times a day; I was only sending an email because it was something we needed a written record of. The rest of the tone was "off" for communication with this person too. So my original was better.
But ChatGPT has been useful in condensing writing down to a required word count, and I find Grammarly useful too.
I also like Copilot for summarising recorded meetings and extracting action points.

Delorian · 25/11/2024 06:52

I don't think it's going away, and it's great for people with dyslexia.

The key is getting the prompts right. You can't just say "respond to this email" you have to say "respond to this email in the style of a British middle manager who is mildly sarcastic and doesn't really like Steve but needs to pretend they do"

DanielaDressen · 25/11/2024 06:52

It definitely makes stuff up, according to DD, who asked it for some articles on a niche subject she’s writing a research proposal on. It gave her a list of journal article titles and authors which looked great. But when she copied and pasted those article titles into Google Scholar and her university library website, they didn’t exist. 🤷‍♀️

HeadinSand81 · 25/11/2024 06:52

I received a CV for a role and it was oddly wordy. You could cut half of it out. It was clearly written by AI. But the applicant was in the sector and of the right level of experience (years wise). So we decided to first interview him.

He was terrible, arrogant, no substance to his answers. I did consider ending the interview as it was embarrassing. However I thought I'd push it a bit more to see if he had any decent examples. His examples were even more arrogant. In reality I think he had a big case of imposter syndrome. He was a clear 'No'.

igiveuptrying · 25/11/2024 06:53

I am applying for jobs, so I use ChatGPT with the job description and my CV to write a cover letter. I then go through and change it to make it sound less generic. I would never send it as is, but it’s easier to work with the generated letter than to start from scratch.

EmpressaurusKitty · 25/11/2024 06:54

Bjorkdidit · 25/11/2024 05:55

But it produces absolute garbage that is oddly worded plus you obviously need to check it for accuracy. Does it really save time compared with writing it yourself?

Plus in a work environment it's likely breaching confidentiality, as you're giving that information to ChatGPT.

People use it to write posts on here and it's really easy to spot. There was one a few days ago where the OP came back to ask why they'd not got many useful replies and we all said it was because we didn't want to have a conversation with AI.

Which thread was that?

DanielaDressen · 25/11/2024 06:57

I was on the interview panel for a post recently where applicants had been given a topic to do a presentation on. One of the applicants started off by saying they’d asked ChatGPT what it thought and summarised the ChatGPT findings but then did go on to analyse and discuss what ChatGPT had said.

I actually liked it, she showed she wasn’t afraid to use new technology and I’m pretty sure the discussion points were her own. But it was a bold thing to do I guess, the potential for someone on the panel to be outraged is quite high. 🤷‍♀️. We’re not in a technology type sector.

Sofa1000 · 25/11/2024 06:59

I help with recruitment for large civil service campaigns. I certainly don’t bin applications where I suspect AI has been used. It’s a tool and why not use it? Shows awareness of how to use technology.
I do hate having to wade through a statement where it’s all words with little evidence or content. Those get marked down.
E.g. in one large campaign there were so many AI users who submitted a phrase something like ‘my skills and qualifications align perfectly with the requirements of the role’. Fine to use that to segue into your evidence, but not just to chuck it into a massive pointless word salad.

ArabellaScott · 25/11/2024 07:01

Garbage in, garbage out.

YANBU, OP.

Using AI may work for some things. For other things, it's fraudulent.

NINP · 25/11/2024 07:01

TwoLeftSocksWithHoles · 25/11/2024 06:23

I understand that you might have some reservations about ChatGPT, and I appreciate your feedback! While it's true that AI has its limitations, it’s also designed to continuously improve. ChatGPT can handle a wide range of tasks, from answering questions and explaining complex topics to assisting with creative writing and brainstorming ideas. However, it might not always be perfect in terms of nuance or context, as it learns from patterns in data rather than human experience.
If you have specific concerns or examples where it didn't meet your expectations, feel free to share! I’d love to help address them and clarify how it works. AI like ChatGPT is constantly evolving, and feedback like yours is important for making it better.

Thanks ChatGPT

InfoSecInTheCity · 25/11/2024 07:03

I had to tell our sales team recently that it was really obvious the bid response had been written by ChatGPT, and then gave them a demo: no matter what question I posed, it produced a similarly formatted and structured response, which made it obvious how the bid had been written.

NoNoNona · 25/11/2024 07:03

I had an argument with Chat GPT recently and it got quite annoyed with me.
When I opened it, it stated that its knowledge only went up to January 2022. Fair enough.
I asked it a simple question (on something that happened after January 2022) and it came back with the wrong answer. When I told it the right one, it started to argue. It was very funny, but it was "gaslighting" gone mad.
The correct response to my original question should have been "to my knowledge this is right, but my knowledge ends in January 2022."
What really amuses me, though, is the work environment. We are absolutely forbidden to use AI at work (AFC for large bank) but the intranet is jam-packed with articles on which departments are using AI and all the latest on exploiting AI at work.

BarbaraHoward · 25/11/2024 07:09

I find it really useful. I used it to edit a piece of writing recently - I'm very waffly and was over the word count, so I asked ChatGPT to edit it and kept some suggestions and ignored others.

I've used it to plan a project for my students. It helped with both idea generation and structure. I've used copilot to generate multiple choice questions for a given set of lecture slides.

It's a great support tool.

ShinyPebble32 · 25/11/2024 07:09

It’s a very useful tool for people who are time poor, or not confident writers. You might find it handy for elements of your own job, so why not try it!
Your boss could probably use it better, e.g. ask it to use a warmer tone for the people who need it (including you, by the sound of it), and a factual tone for people who just want the info. In itself, though, it’s not a bad thing to use.
My role involves writing short-form text descriptions and I use it as a tool occasionally, when I have writer’s block - either to create the skeleton text that I then make my own by adding suitable adjectives, or to do the descriptive bit, which I can then tweak and add the facts to. We don’t need to worry about it taking our jobs just yet - it sounds very Americanised and stiff, and uses some archaic words and phrases - but there’s no reason we can’t use it to help us.

Diomi · 25/11/2024 07:12

SoftPlaySaturdays · 25/11/2024 06:46

@Diomi Honestly, it does not summarise long things well. It will hallucinate things that aren't there, and miss important elements. It produces something that looks like a summary.

We have this issue with our students summarising articles and the summaries are awful.

Humans don’t summarise long things well either. I blame the long things.