Over on Substack, I post my thoughts on writing, publishing, & storytelling. Here are some recent highlights:
It’s very important to build a community around your book journey early. Look across your networks of colleagues, clients, industry peers, board members, university alumni groups, community organisations, friends and family. Be active on whatever social media channels they’re engaged with, and ask them to interact with you and share your book across their networks. Be clear about what you want them to do, when and how. Provide them with whatever they need to make it easy for them to help you.
I rarely mention my books on personal social media accounts. I certainly don’t ask my friends to market for me.
Is it better to create value or interest so people want to share, making it organic?
Selling your book is a long-term exercise. You should allocate one or two hours per week to working on your content marketing plan. If you can’t commit to this, find someone who can handle it for you. Then be sure to regularly review results, engagement and sales.
Definitely long term. Marketing is important. But am I better off writing than playing on social media?
Building relationships or wasting time?
The key thing there is we’re driving commercial interaction. Organisations now at a leadership level should be able to say at board meetings that we’re spending ‘x’ on marketing and getting ‘y’ from social media. It’s not about getting likes and clicks and views, it’s about revenue. They should be able to say we’re getting $10 million from the use of social media. That’s about driving a strategy from the top down to actually understand why they’re on social.
Why are you on social? Is it actually working?
Channeling Homer Simpson.
New York City public schools have restricted access to ChatGPT, the AI system that can generate text on a range of subjects and in various styles, on school networks and devices. As widely reported this morning and confirmed to TechCrunch by a New York City Department of Education spokesperson, the restriction was implemented due to concerns about “[the] negative impacts on student learning” and “the safety and accuracy” of the content that ChatGPT produces.
I immediately thought of this:
When reached for comment, an OpenAI spokesperson said the company is developing “mitigations” to help anyone spot text generated by ChatGPT. That’s significant. While TechCrunch reported recently that OpenAI was experimenting with a watermarking technique for AI-generated text, it’s the first time OpenAI has confirmed that it’s working on tools specifically for identifying text that came from ChatGPT.
I’m curious to learn more about this “watermarking” feature. Will it be a distinctive, readily identifiable pattern in how the words are assembled? And what stops a student from learning the algorithm and figuring out what to modify? Doing the assignment would probably be easier, but some will undoubtedly prefer this path.
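OpenAI hasn’t said how its watermark would actually work, but one approach discussed in the research literature biases the model’s token choices toward a keyed “green list” that a detector can later check for statistically. A toy sketch of that idea (the vocabulary, key, and fake generator here are all invented for illustration — this is not OpenAI’s method):

```python
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "runs", "sleeps", "quickly", "slowly"]
KEY = "secret-key"  # shared between the generator and the detector

def green_list(prev_token: str) -> set:
    """Deterministically split the vocabulary into a 'green' half,
    seeded by a keyed hash of the previous token."""
    seed = hashlib.sha256((KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def generate(length: int, seed: int = 0) -> list:
    """Stand-in for a watermarked model: always picks the next token
    from the green list of the previous one."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        tokens.append(rng.choice(sorted(greens)))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that sit on their predecessor's
    green list. Watermarked text scores near 1.0; text written without
    knowledge of the key scores near 0.5."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction(generate(50)))  # → 1.0 for watermarked text
```

Which also hints at the evasion route I’m worried about: swap in enough synonyms and the green fraction drifts back toward the chance level of 0.5, and the signal disappears.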