As humans, we understand a bodily position or a pose effortlessly. For machines, not so much. But thanks to machine learning, Google has managed to make computers mirror your every move and match it with an image of a similar pose. It's easy for a machine to detect objects in a stationary state; recognising a moving body is a whole different ball game. Google's TensorFlow team has come out with an AI experiment called Move Mirror that uses Google's PoseNet neural network to detect poses and stances and match them with images of people striking similar ones.
PoseNet uses a machine learning model that recognises the position of a human body by analysing and deducing where the different parts and joints of a body are in a photo or video. The model then matches the detected pose against a set of more than 80,000 photos, writes Irene Alvarado, a creative technologist at Google Creative Lab, in a blog post. The machine learning model runs entirely on-device, in your browser, and Google says it doesn't store images or send them to a server.
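To make that matching concrete, here is a minimal sketch of how pose comparison could work. It assumes each pose is a list of (x, y) keypoints (PoseNet itself predicts 17 keypoints per person); the normalisation step and the use of cosine similarity mirror the general approach Google describes, but the exact weighting Move Mirror applies is not reproduced here.

```python
import math

def normalize(pose):
    """Scale and translate a list of (x, y) keypoints into a unit
    bounding box, so poses compare independently of where the person
    stands in the frame and how large they appear."""
    xs = [x for x, _ in pose]
    ys = [y for _, y in pose]
    w = max(xs) - min(xs) or 1.0
    h = max(ys) - min(ys) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in pose]

def cosine_distance(a, b):
    """Cosine distance between two flattened, normalised pose vectors:
    0 means the poses point the same way, larger means less similar."""
    va = [c for pt in normalize(a) for c in pt]
    vb = [c for pt in normalize(b) for c in pt]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return 1.0 - dot / (na * nb)

def best_match(query, catalog):
    """Return the index of the catalog pose closest to the query pose."""
    return min(range(len(catalog)),
               key=lambda i: cosine_distance(query, catalog[i]))
```

A pose that is merely shifted or scaled relative to the query normalises to the same vector and scores a distance of zero, which is why this kind of matching can pair your webcam pose with a photo of someone much larger or smaller in the frame.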
Move in front of your webcam or phone's front camera, and Move Mirror will try to match your every move in real time to images of people making similar poses. Almost like a mirror that matches your move with other people from the world of sports, martial arts, acting and more. The result can be captured in a GIF and shared with friends.
Move Mirror is an example of a computer vision technique called pose estimation, and the underlying PoseNet technology is available for developers to experiment with. It is one of many such Google AI experiments that the company releases from time to time to show off the progress made in the field of artificial intelligence.
Play around with Move Mirror here.