11 Comments
Bea Walker

A most interesting article. I’ve used AI often but think it’s only as good as the information it’s been fed. To me AI is a useful tool. It’s not going away so we need to use it to our advantage and be aware of the pitfalls.

Bryan Demchinsky

To Joe’s point: Every time AI is used in whatever capacity, it should be identified as such.

Jeffrey Newman

There is developing work in this area by @TerryCooke-Davis and @GenZendahl. Worth looking at on Substack. (Please excuse: I’m new to Substack and don’t yet know how to hyperlink names. Nor what restack means!)

Michael Corthell

What if the likeliest extraterrestrials are not biological beings, but AI lifeforms, and what if humanity is building the very systems through which such an intelligence could enter our world? This essay explores artificial intelligence as threshold, vessel, warning, and possible doorway to first contact. https://essayx.substack.com/p/the-door-we-are-building

Cherie Hart

I found this article extremely informative. I'm still scared of what the future holds for AI. The other day I spoke with a woman whose voice sounded like an AI-generated one. I asked her if she was a fake operator answering the phone. She was real. The situation was surreal.

Sharon Merrill

Very helpful information about a somewhat confusing issue. Thanks.

Jeffrey Street

Bryan (and Joe), another thoughtful piece and a welcome contribution to this space. I’ve read it several times now.

I’m not so much worried about AI as I am pissed off about LLMs in particular. For the most part, they are giant energy-sucking plagiarism machines with — as you both pointed out — a large sycophancy problem (although I don’t see the latter in the illustration accompanying this article; on the contrary, you both look very distinguished).

It was helpful that Joe made clear the distinction between DLMs and LLMs and what they are good and bad at accomplishing. I wondered, though, when Joe mentioned that DLMs are used to analyze large-scale manufacturing processes to make them more energy efficient, if there really is the potential for “a profound impact on climate change.” (The implication in his example, I think, is for a positive outcome. But what’s the trade-off between greater energy efficiency in systems or processes and the massive energy requirements of AI data centres, for example? Not intended as a knock on Joe, as I am sure he is well aware of that concern.)

My complaints about most LLMs start with a loathing of their deeply flawed business model, which depends largely on the monetisation of stolen intellectual property. The use of AI for nefarious purposes, such as influencing elections or public opinion through deep-fake recordings or videos, is also annoying as hell.

What really irritates me is the extent to which business executives have bought into the promises of AI (aptly described as “a mass-delusion event” in an excellent article that appeared last year in ‘The Atlantic’). It’s not that AI can replace humans so much as too many people believe it can; and so it does — although many are discovering that AI isn’t really working out for them as well as they expected. In the meantime, I’m glad you highlighted some of the social and economic harms that are produced as a result.

Incidentally, for more on that subject, I recommend Will Lockett’s Newsletter on Substack.

You explored the need for identifying when AI-generated content is used. That is one of the great issues of our time. It’s all fun and games until such content is released for public consumption and confuses us even more than we already are about what is — and isn’t — real.

I’d go on ranting but will save that for the next time we meet (which I hope is soon). Until then, thank you (and Joe, again) for this timely and thought-provoking article.

Joe Neuhaus

The increased energy requirements you're referring to are specifically attributable to the massive LLMs the technocracy is trying to shove down businesses' throats. The DLMs being applied in manufacturing are much more targeted and not built for global deployment to millions of users; their goal is to reduce energy consumption and make manufacturing more efficient.

Jeffrey Street

Thanks for clarifying.

Joe Neuhaus

Just one correction: it's Cambridge Analytica, not Cambridge Analytics.