Walmart’s test store for new technology, Sam’s Club Now, opens next week in Dallas

Walmart’s warehouse club, Sam’s Club, is preparing to open the doors at a new Dallas-area store that will serve as a testbed for the latest in retail technology. Specifically, the retailer will test out new concepts like mobile checkout, an Amazon Go-like camera system for inventory management, electronic shelf labels, wayfinding technology for in-store navigation, augmented reality, and artificial intelligence-infused shopping, among other things.

The retailer first announced its plans to launch a concept store in Dallas back in June, describing it then as a real-world test lab for technology-driven shopping experiences. Today, the company is taking the wraps off the project and detailing what it has planned for the new location, which goes by the name “Sam’s Club Now.” As at other Sam’s Club stores, consumers will need a membership to shop at Sam’s Club Now. But how they shop will be remarkably different. Instead
Continue reading "Walmart’s test store for new technology, Sam’s Club Now, opens next week in Dallas"

NYU and Facebook team up to supercharge MRI scans with AI

Magnetic resonance imaging is an invaluable tool in the medical field, but it’s also a slow and cumbersome process. A scan may take anywhere from fifteen minutes to an hour to complete, during which time the patient, perhaps a child or someone in serious pain, must sit perfectly still. NYU has been working on a way to accelerate this process, and is now collaborating with Facebook with the goal of cutting down MRI durations by 90 percent by applying AI-based imaging tools. It’s important at the outset to distinguish this effort from other common uses of AI in the medical imaging field. An X-ray, or indeed an MRI scan, once completed, could be inspected by an object recognition system watching for abnormalities, saving time for doctors and maybe even catching something they might have missed. This project isn’t about analyzing imagery that’s already been created, but rather expediting its creation
Continue reading "NYU and Facebook team up to supercharge MRI scans with AI"

Body scanning app 3DLOOK raises $1 million to measure your corpus

3D body scanning systems have hit the big time after years of stops and starts. Hot on the heels of Original Stitch’s Bodygram, another 3D scanner, 3DLOOK, has entered the fray with a $1 million investment to measure bodies around the world. The founders, Vadim Rogovskiy, Ivan Makeev, and Alex Arapov, created 3DLOOK when they found they could measure a human body using just a smartphone. The team found that other solutions couldn’t measure fit with any precision and depended on expensive hardware. “After more than six years of building companies in the ad tech industry I wanted to build something new which was not a commodity,” said Rogovskiy. “I wanted to overcome growth obstacles and I learned that the apparel industry had mounting return problems in e-commerce.” 3DLOOK’s co-founders spent over a year on pure R&D and testing new approaches and combinations of
Continue reading "Body scanning app 3DLOOK raises $1 million to measure your corpus"

Computer vision researchers build an AI benchmark app for Android phones

A group of computer vision researchers from ETH Zurich want to do their bit to enhance AI development on smartphones. To wit: they’ve created a benchmark system for assessing the performance of several major neural network architectures used for common AI tasks. They’re hoping it will be useful not only to other AI researchers but also to chipmakers (by helping them get competitive insights); Android developers (to see how fast their AI models will run on different devices); and, well, to phone nerds — such as by showing whether or not a particular device contains the necessary drivers for AI accelerators. (And, therefore, whether or not they should believe a company’s marketing messages.) The app, called AI Benchmark, is available for download on Google Play and can run on any device with Android 4.1 or higher — generating a score the researchers describe as a “final verdict” of the device’s
Continue reading "Computer vision researchers build an AI benchmark app for Android phones"

Industrial robots startup Gideon Brothers raises $765K led by TransferWise co-founder

Gideon Brothers, an ambitious startup out of Croatia that is building autonomous robots to put to work in warehouses and other industrial logistics operations, has quietly raised $765,000 in funding. The round is led by TransferWise co-founder Taavet Hinrikus, who has become an increasingly active investor, recently backing fintech Cleo, legal tech startup Juro, and satellite company Open Cosmos. Ex-Wired U.K. editor David Rowan and a number of unnamed Croatian angels also participated in Gideon Brothers’ seed round. Founded in early 2017 and comprising a 40-plus team of deep learning and robotics experts — including 5 PhDs and 27 with master’s degrees in hardware and software engineering and related disciplines — the company is developing an AI-powered robot for various industrial applications. Dubbed “The Brain,” the technology combines 3D computer vision and deep learning to enable Gideon Brothers’ robots to be aware of their environment and
Continue reading "Industrial robots startup Gideon Brothers raises $765K led by TransferWise co-founder"

Football matches land on your table thanks to augmented reality

It’s World Cup season, so that means even articles about machine learning have to have a football angle. Today’s concession to the beautiful game is a system that takes 2D videos of matches and recreates them in 3D so you can watch them on your coffee table (assuming you have some kind of augmented reality setup, which you almost certainly don’t). It’s not as good as being there, but it might be better than watching it on TV. The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically, they cross FIFA 18 with real life and produce a sort of miniature hybrid. Considering the source data —
Continue reading "Football matches land on your table thanks to augmented reality"

What’s under those clothes? This system tracks body shapes in real time

With augmented reality coming in hot and depth-tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses. The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer. Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum
Continue reading "What’s under those clothes? This system tracks body shapes in real time"

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner. It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “context-aware fill,” allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t. But some features are beyond the tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system
Continue reading "Facebook’s new AI research is a real eye-opener"

Teaching computers to plan for the future

As humans, we’ve gotten pretty good at shaping the world around us. We can choose the molecular design of our fruits and vegetables, travel faster and farther, and stave off life-threatening diseases with personalized medical care. However, what continues to elude our molding grasp is the airy notion of “time” — how to see further than our present moment, and ultimately how to make the most of it. As it turns out, robots might be the ones that can answer this question. Computer scientists from the University of Bonn in Germany wrote this week that they were able to design software that could predict a sequence of events up to five minutes in the future with accuracy between 15 and 40 percent. These values might not seem like much on paper, but researcher Dr. Juergen Gall says the work represents a step toward a new area of machine learning
Continue reading "Teaching computers to plan for the future"

AI edges closer to understanding 3D space the way we do

If I show you a single picture of a room, you can tell me right away that there’s a table with a chair in front of it, they’re probably about the same size, about this far from each other, with the walls this far away — enough to draw a rough map of the room. Computer vision systems don’t have this intuitive understanding of space, but the latest research from DeepMind brings them closer than ever before. The new paper from the Google-owned research outfit was published today in the journal Science (complete with news item). It details a system whereby a neural network, knowing practically nothing, can look at one or two static 2D images of a scene and reconstruct a reasonably accurate 3D representation of it. We’re not talking about going from snapshots to full 3D images (Facebook’s working on that) but rather replicating the intuitive and
Continue reading "AI edges closer to understanding 3D space the way we do"

How Facebook’s new 3D photos work

In May, Facebook teased a new feature called 3D photos, and it’s just what it sounds like. However, beyond a short video and the name, little was said about it. But the company’s computational photography team has just published the research behind how the feature works and, having tried it myself, I can attest that the results are really quite compelling. In case you missed the teaser, 3D photos will live in your news feed just like any other photos, except when you scroll by them, touch or click them, or tilt your phone, they respond as if the photo is actually a window into a tiny diorama, with corresponding changes in perspective. It works not just for ordinary pictures of people and dogs, but also for landscapes and panoramas. It sounds a little hokey, and I’m about as skeptical as they come, but the effect won me over quite
Continue reading "How Facebook’s new 3D photos work"

‘SmartLens’ app created by a high schooler is a step towards all-purpose visual search

A couple of years ago I was eagerly awaiting an app that would identify anything you pointed it at. Turns out the problem was much harder than anyone expected — but that didn’t stop high school senior Michael Royzen from trying. His app, SmartLens, attempts to solve the problem of seeing something and wanting to identify and learn more about it — with mixed success, to be sure, but it’s something I don’t mind having in my pocket.

Royzen reached out to me a while back and I was curious — as well as skeptical — about the idea that where the likes of Google and Apple have so far failed (or at least failed to release anything good), a high schooler working in his spare time would succeed. I met him at a coffee shop to see the app in action and was pleasantly surprised, but a little

Continue reading “‘SmartLens’ app created by a high schooler is a step towards all-purpose visual search”

Want to fool a computer vision system? Just tweak some colors

Research into machine learning and the interesting AI models created as a consequence are popular topics these days. But there’s a sort of shadow world of scientists working to undermine these systems — not to show they’re worthless but to shore up their weaknesses. A new paper demonstrates this by showing how vulnerable image recognition models are to the simplest color manipulations of the pictures they’re meant to identify. It’s not some deep indictment of computer vision — techniques to “beat” image recognition systems might just as easily be characterized as situations in which they perform particularly poorly. Sometimes this is something surprisingly simple: rotating an image, for example, or adding a crazy sticker. Unless a system has been trained specifically on a given manipulation or has orders to check common variations like that, it’s pretty much just going to fail. In this case it’s research from the University of
Continue reading "Want to fool a computer vision system? Just tweak some colors"

Who’s a good AI? Dog-based data creates a canine machine learning system

We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as they may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog. It’s a collaboration between the University of Washington and the Allen Institute for AI, and the resulting paper will be presented at CVPR in June. Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In
Continue reading "Who’s a good AI? Dog-based data creates a canine machine learning system"

[Podcast] Eran Shir, Founder of DashCam Maker Nexar

A conversation with Eran Shir, founder and CTO of Nexar, about his company’s connected dash cam, how it works to improve safe driving, and what a world full of cameras + intelligence means for our future. Eran explains why computer vision matters and why it can also go wrong pretty fast. Disclosure: Nexar is part of the True Ventures portfolio of startups.

Here’s how Uber’s self-driving cars are supposed to detect pedestrians

A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle interaction in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring. Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything. Uber’s vehicles are equipped with several different imaging systems which handle both ordinary duty (monitoring nearby cars, signs, and lane markings) and
Continue reading "Here’s how Uber’s self-driving cars are supposed to detect pedestrians"

GrokStyle’s visual search tech makes it into IKEA’s Place AR app

GrokStyle’s simple concept of “point your camera at a chair (or lamp, or table…) and find others like it for sale” attracted $2 million in funding last year, and the company has been putting that cash to work. And remarkably for a company trying to break into the home furnishing market, it landed furniture goliath IKEA as its first real customer; GrokStyle’s point-and-search functionality is being added to the IKEA Place AR app. What GrokStyle does, in case you don’t remember, is identify any piece of furniture your camera can see — in your house, at a store, in a catalog — and immediately return similar pieces or even the exact one, with links to buy them. I remember being skeptical last year that the product could possibly work as well as they said it did. But a demo shut my mouth real quick. The growing team is led
Continue reading "GrokStyle’s visual search tech makes it into IKEA’s Place AR app"

SignAll is slowly but surely building a sign language translation platform

Translating is difficult work, the more so the further two languages are from one another. French-Spanish? Not a problem. Ancient Greek-Esperanto? Hard. But sign language is uniquely difficult because it is fundamentally different from spoken and written languages. All the same, companies like SignAll are working hard to make accurate, real-time machine translation of ASL a reality. Read More