As many of you know, each month I put together a list of professional learning opportunities for fellow school library staff. They might be links to video recordings, podcasts, webinars or articles and blog posts. I collect them over the month from my own professional learning network and from my favourite go-to sources of information and learning.
Last month I came across a post on LinkedIn that at first glance looked great. The topic was interesting, all about reading and media literacy in an age of AI. But it continued to hold my attention, for all the wrong reasons. It looked like it was entirely written by AI, specifically ChatGPT.
The rare emojis in the heading were my first clue, followed by the bold text scattered throughout the piece. Then there was the formatting: more emojis in headings, the overall structure, bullet points and the neat little recommendations at the end. The dead giveaway? The final parting line, an “Inspiration for this post taken from …” with a reference in neat APA style. On a whim I decided to look up the reference, and sure enough, that book doesn’t actually exist. A website by the same name exists but isn’t written by the author listed in the post’s reference.
This LinkedIn post has been “written” and shared by someone who calls themselves an expert – and they may be – and has received many positive comments and likes from other such professionals. It had me wondering, what’s the point of content in an age of AI?
What’s the point of content in an age of AI?
Now, I don’t have a problem with people using AI to write their newsletters, blog posts and articles. It’s a tool, and a good one, if it’s used as a tool: for brainstorming topics, outlining or editing. But when a post is simply generated, copied, pasted and shared as if it’s the author’s own work, I’m confused. What’s the point? Likes and hits? The same content can be recreated by anyone with their own AI tool, and it certainly didn’t bring any personal experience to the table. Maybe some editing happened, but when poor practices like mis-crediting a source slip through, I have to wonder how much editing really took place.
I use AI. A lot. For my workplace and professional work. I have it help brainstorm, draft proposals, list ideas, create lesson outlines, scaffold report comments. But as I teach my students, it’s a tool, not something that should do the whole process for me. So I take pieces and ideas and work with them. Rewrite them. Use ideas and expand on them, constantly reflecting, editing, checking. And when it comes to my writing, all the posts you see here and on my socials and in my newsletters, they are written by hand on my devices, fingers tapping at keys, brain ticking over faster than my fingers can type, written by me. Not because AI can’t do it, but because there is purpose in the process. Writing is a reflective act for me. It helps me grow, to change, to improve. Sharing my work is a process of being honest and collaborative, gaining far more than I give, as we share ideas together and record feedback, not as a means of collecting likes or views.
But is there much point in creating content in an age of AI? I actually asked AI this once, and it assured me that, yes, there is a purpose, so long as you make your work your own, share your lived experiences and make it personal, because that can’t be replicated by a chatbot.
Seeing this post also (ironically, given it was about media literacy) highlighted for me the importance of information, digital and AI literacy, something I am constantly working on with my students and whose importance I am constantly highlighting to my leadership and fellow teachers. I’m curious: did the people who commented on this post realise it was written by AI? Did they care? Did they not have the skills to spot this blatantly obvious AI-created post?
And you, my reader, do you care if the posts you read are written by AI? Does it matter to you to know that a human has penned these words?
I don’t usually comment on your blog, but on this I have strong feelings!
I do care if the content is written by AI, especially if it provides the wrong content – how can you really be sure the author knows what it is they’ve published?
I’ve recently come across a few AI websites through a search engine and it makes me distrust the whole process. It goes like this: okay, this website I thought was written by a human with fact-based content turns out to be AI. I can’t trust this content, so I have to scrape my brain of any nuggets that might’ve stuck around and make sure to double-check these things. And the whole thing makes me distrust the search engine.
On a different note, I found a writer on Substack writing about AI with the help of AI. And while the content itself was interesting, it came out so frequently that I quickly lost interest.
Did you make the LinkedIn author aware that the article included a fake reference?
Thank you so much for commenting and sharing your thoughts. I totally agree: AI-generated content just makes me switch off, especially when you can tell it’s being pumped out for the sake of content. And like you said, you have to be so careful not to trust and take on board things that might be incorrect. In this case, I didn’t comment on the article or contact the author; I just reflected and wrote this post. I also chose not to name the author here. It makes me wonder, is this something we should be calling out and making people aware of? I didn’t feel comfortable doing that in this case; in others I might. It’s certainly an excellent example I’m looking forward to sharing with my students this year when we talk about information literacy. Thank you again for reading and joining the conversation.