I've generally been impressed with the abilities of virtual assistants in recent years - Siri, Alexa, Cortana, "Okay Google" - to come up with the right answer to real, factual questions asked aloud in natural language by flesh-and-blood humans. However, it turns out we shouldn't trust those answers quite yet. Tom Scocca writes an account of his misadventures with an article he wrote correcting a widespread falsehood about the time required to caramelize onions. He found that Google was extracting a quote his article specifically identified as false, presenting it as the "correct" answer, and crediting him for it. That's a convoluted path, and it would take impressive artificial intelligence to parse the correct context - but that's exactly what's needed if an AI assistant is to be truly trusted. The result has since been corrected - possibly manually? - and the right answer is shown in the image above. But be warned: AI just isn't quite there yet.