There are things like the ideological significance of toenails, which few people talk about and few should.
There are things like half-exponential functions, which few people talk about but many should.
There are things like Deflategate or manspreading or the dresses worn at the Oscars, which many people talk about but few should.
And then there are things like World War II, global warming, black holes, or machine learning, which many people talk about and probably many should.
It's always hard to judge whether a thing in the fourth category is "overhyped" or not: presumably that would require totting up all the hype about the topic, and also everything that's genuinely interesting or important about it, to see whether the first outweighs the second (but even if so, how much does that matter?).
Right now, machine learning offers deep scientific questions (e.g., why does deep learning work as well as it does?), and also an obvious and increasing impact on civilization (from the impending shift to self-driving cars alone, even if we ignored all the other stuff). I'd be tempted to work in machine learning myself, if I weren't doing quantum computing. (Indeed, I started out in AI and machine learning, as an undergrad at Cornell with Bart Selman and then as a grad student at Berkeley with Mike Jordan, before shifting into quantum computing, where I felt like my "comparative advantage" was greater.)
The progress in ML in the past decade---the progress that led to stuff like IBM Watson, AlphaGo, etc.---strikes me as genuine and astounding. On the other hand, at least according to the ML researchers I know, the recent progress has not involved any major new conceptual breakthroughs: it's been more about further refinement of algorithms that already existed in the 1970s and 1980s, and of course, implementing those algorithms on orders-of-magnitude faster computers and training them with orders-of-magnitude more data.
On the one hand, the fact that decades-old ideas (e.g., backpropagation and its variants) have shown such power when scaled up is of course cause for optimism that we could soon achieve even more amazing feats of AI, by trying the same techniques with yet faster computers and yet larger datasets!
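For readers who haven't seen it, the core idea of backpropagation is just the chain rule applied systematically to a network's weights. Here's a minimal sketch in Python, for a toy one-input, one-hidden-unit network (the network and all names are my own illustration, not any particular historical implementation):

```python
import math

def forward(x, w1, w2):
    """Tiny network: hidden h = tanh(w1 * x), output y = w2 * h."""
    h = math.tanh(w1 * x)
    y = w2 * h
    return h, y

def loss(y, target):
    """Squared-error loss on a single example."""
    return 0.5 * (y - target) ** 2

def backprop(x, target, w1, w2):
    """Compute dL/dw1 and dL/dw2 by the chain rule (backpropagation)."""
    h, y = forward(x, w1, w2)
    dL_dy = y - target                # d(0.5*(y-t)^2)/dy
    dL_dw2 = dL_dy * h                # y = w2 * h
    dL_dh = dL_dy * w2
    dL_dw1 = dL_dh * (1 - h ** 2) * x # tanh'(z) = 1 - tanh(z)^2, z = w1 * x
    return dL_dw1, dL_dw2
```

The same pattern---propagate an error signal backward, layer by layer, multiplying by local derivatives---is what modern frameworks automate at the scale of millions of weights; the gradients above can be checked against finite differences.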
But on the other hand, it's also a reminder that, if we want to know what's going to matter decades from now, we might need to look around for the analogues today of backpropagation in the 1970s and 1980s---i.e., for the ideas being pursued by a few academic oddballs, which are too new and weird and unproven to have attracted much VC funding or glossy magazine articles, and which many people dismiss for what might be merely contingent reasons of technology rather than anything fundamental.
In the end, I suppose it's less interesting to me to look at the sheer amount of machine learning hype than at its content. Like, almost everyone in the 1950s knew that computers were going to be important, and of course they were right, but they were often wildly wrong about the reasons (e.g., dramatically underestimating the difficulty of humanoid robots, while failing to foresee PCs or the Internet). There's no doubt in my mind that people 30 years from now will agree with us about the central importance of ML, but which aspects of ML will they rage at us for ignoring, or laugh at us for obsessing about when we shouldn't have? I don't know the answers to those questions, but I know that those are the things I'd like to know.