Wow, that's a long article! I couldn't get through it all, so I can't really comment on it in full. I do agree with the part of your quote reminding us that "work" in our society is dehumanizing, and that replacing humans in the workflow is just a logical extension of the dystopia we're already in.
I do think Tim O'Reilly was wrong and out of touch for letting DARPA try to join polite society through Maker Faire, and for personally cheering DARPA on. While it's not surprising that someone in his position would act that way, I find it objectionable, and it reflects a serious lack of awareness of the danger of global militarism.
I think it's important for people to realize that just because AI has useful side effects, like whatever cool things they've been able to do with it, that doesn't change the fact that right now its primary task and reason for existing (the reason the very worst entities in our society are making enormous investments in it) is the most evil things you can imagine.
Go ahead and think of the most evil things you can, and then look up Lavender AI and "Where's Daddy?", which were used by Israel to target civilians in Gaza.
There are plenty of examples closer to home: this tech is being used by Musk to target democracy and by Stephen Miller to target undocumented Americans, and they are RAPIDLY moving toward using it to target dissidents of any kind.
Talking about AI without acknowledging the existential threat we're facing right now (slowed only by the incompetence of our attackers) is whistling past the graveyard.
-jake