This essay originally appeared as an answer to a question posed on Quora, posted October 15, 2016. Though I am a co-founder of the educational technology company BrainQuake, mentioned in my reply, I wrote this article as a mathematician at Stanford University.
According to this academic research paper from the Netherlands, published in 2015, an extensive search of the research literature found essentially no peer-reviewed, published scientific studies of game-based learning that both demonstrated efficacy and met the standards of objective research generally accepted by the scientific community.
That same year, 2015, two studies of BrainQuake's launch math learning app Wuzzit Trouble were published, one by researchers at Stanford University, the other by researchers at Tampere University of Technology in Finland. Both found evidence of significant learning gains. (Full disclosure: Wearing my Stanford researcher hat, I collaborated in the Finnish study, but to conform to Stanford's very strict rules about potential conflicts of interest, as a developer of Wuzzit Trouble I was not involved in the gathering or analysis of the data.) Because those studies (you can access the peer-reviewed published reports here) did meet the requirements of peer-reviewed, objective, comparison-group research, I believe Wuzzit Trouble was, at least at the time, the only math learning game with independent evidence of efficacy meeting the normal requirements of objective scientific research. (Had the Dutch survey been carried out a year later, I believe those two studies would have been included as its only successes.)
Those two research results almost certainly played a major role in BrainQuake being awarded a (highly competitive) $1M, 2.5 year contract by the US Department of Education to develop two further math learning games and build out our overall product line.
Other math learning games have been found effective by studies that, while perhaps not meeting the exacting standards of the two Wuzzit Trouble studies (at least if the Dutch paper I cited above is correct in its assessment), are sufficient to convince me that those games are genuinely effective: the ST Math Jiji games by Mind Research, Motion Math, DragonBox, Ko's Journey, and Dreambox.
I doubt BrainQuake is the only learning games developer whose products could be independently, and rigorously, demonstrated to yield positive learning outcomes. Rather, I believe we are simply the first developer that has arranged for our product to be subjected to such stringent testing. In particular, having played them all, I expect the games I mentioned in the last paragraph would all pass such a test. (Remember, I am basing my remarks on that Dutch study. It is possible they missed some good games.)
I am equally sure that the vast majority of "learning games", in contrast, would not pass that kind of test.
While there is at present a lot of uncertainty about the efficacy of "learning games" -- and a ton of unsubstantiated hype -- I confidently look forward to growing market acceptance of the necessity of rigorous scientific studies, carried out at universities, independent of the game companies, for any game that claims to improve learning.
We will know that requirement has been accepted when the Apple App Store and Google Play take account of peer-reviewed, published, independent, comparison-group, scientific research on efficacy in deciding which learning apps to feature and promote. Until then, learning-app evaluation for marketing is (I suspect) largely a matter of download figures, and hence to some extent a measure of a company's marketing budget. :(
NOTE: My answer, like that Dutch study, excludes the myriad "animated flash card" and "fast action shooter" games that provide repetitive practice of basic skills. Rather, I focus on learning mathematics, as articulated in the US National Research Council's 2001 volume Adding It Up.