Stewart McGown over on GitHub has published code that breaks files up, encodes the pieces as Base64 text and stores them across multiple Google Docs, which aren't counted towards your Google Drive storage quota.
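The core trick is simple enough to sketch. Here's a minimal Python illustration of the encode-and-chunk step, not McGown's actual code: the chunk size and function names are assumptions, and the real project also handles uploading the chunks via the Google Docs API.

```python
import base64
from pathlib import Path

# Hypothetical chunk size: Google Docs caps a document at roughly a
# million characters, so each Base64 chunk needs to fit under that.
CHUNK_SIZE = 1_000_000

def chunk_file(path):
    """Encode a file as Base64 text and split it into doc-sized chunks."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return [encoded[i:i + CHUNK_SIZE] for i in range(0, len(encoded), CHUNK_SIZE)]

def reassemble(chunks, out_path):
    """Join the chunks back together and decode to the original bytes."""
    Path(out_path).write_bytes(base64.b64decode("".join(chunks)))
```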
Researchers at Ben-Gurion University in Israel have created a drone that can seamlessly transition from flying to driving. The FStar drone uses the same four motors to power both the flight propellers and the wheels, and it can also adjust the height at which it drives.
Wolfram have launched the Wolfram Engine as a free package for developers. Described by Stephen Wolfram himself on his blog, the Wolfram Engine implements the Wolfram Language, which offers a huge range of computational intelligence and algorithmic processing: access to the Wolfram Knowledgebase and over 5,000 built-in functions covering areas such as machine learning, visualisation and image computation.
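You can also drive the engine from code via Wolfram's Python client. Here's a rough sketch, assuming a locally installed and activated Wolfram Engine kernel plus the wolframclient package; exact calls may vary by version.

```python
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr

# Connects to a locally installed, activated Wolfram Engine kernel.
session = WolframLanguageSession()
try:
    # Any Wolfram Language expression can be evaluated from a string.
    print(session.evaluate(wlexpr("Prime[1000]")))                  # the 1000th prime
    print(session.evaluate(wlexpr("Solve[x^2 + 2 x - 7 == 0, x]")))  # symbolic algebra
finally:
    session.terminate()
```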
Forbes have published a piece about how Ford Motor Company have been using AI in their manufacturing and quality assurance processes for 15 years. More recently, Ford have used AI to manage inventory, to simulate their motorsport vehicles before track testing and, of course, in their cars themselves, where among other things it manages the all-wheel drive system. They expect to launch their self-driving car fleet in 2021.
Scientists have published the results of the study of the NASA twin astronauts Scott and Mark Kelly in Science Magazine. The experiment studied the physiological, molecular and cognitive changes that can happen to a human during a prolonged space mission. Scott Kelly spent one year on the International Space Station while his twin Mark stayed on Earth, allowing scientists to compare the effects of the mission between the two identical twins.
The New York Times ran an experiment using a public video stream from Bryant Park in Midtown Manhattan. They collected publicly available photos of people who worked near the park and then ran one day's worth of footage from the public video stream through Amazon's commercial facial recognition service, at a cost of roughly $60. Over a nine-hour period it detected 2,750 faces, including an 89% match of a college professor against a headshot from his employer's website.
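To get a sense of how accessible this kind of matching is, here's a minimal sketch using Amazon Rekognition's compare_faces API via boto3. The Times hasn't published their exact pipeline, so the file names are hypothetical and in practice you'd loop this over video frames and a set of collected photos.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Compare one known headshot against one frame pulled from the stream.
with open("headshot.jpg", "rb") as source, open("park_frame.jpg", "rb") as target:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # only return matches above 80% similarity
    )

for match in response["FaceMatches"]:
    print(f"Match with {match['Similarity']:.0f}% similarity")
```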
Scientists at the Event Horizon Telescope project released the first ever picture of a black hole this week! The team combined massive amounts of data from an array of telescopes around the globe to create the image of the black hole at the centre of the galaxy M87. Dr. Katie Bouman created the machine learning interferometry algorithm used to combine the data and produce the image of the event horizon.
Pete Warden from the TensorFlow Lite team has published a post on his blog about using TensorFlow Lite on a microcontroller board to do voice recognition. The machine learning model in this case uses 20KB of flash storage, with TensorFlow Lite itself using 25KB and needing 30KB of RAM to run.
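The microcontroller deployment runs on the C++ TFLite Micro runtime, but the same kind of model can be exercised from desktop Python with the standard TFLite interpreter. A sketch under the assumption of a small speech-commands model (the model path here is hypothetical):

```python
import numpy as np
import tensorflow as tf

# Load a tiny speech-commands model; on-device this would be the
# C++ TFLite Micro runtime reading the model from flash instead.
interpreter = tf.lite.Interpreter(model_path="micro_speech.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one audio spectrogram frame shaped to the model's input tensor.
spectrogram = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], spectrogram)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)  # e.g. silence / unknown / "yes" / "no"
```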
Boston Dynamics released footage of its Handle robot moving boxes around on a factory floor. The robot uses a counterweight to balance boxes of up to 15kg while in motion, similar to how a T-Rex used its tail. I also think it does a mean ostrich impression, though it's no Colin Stiles!
Alex Bainter has a good explainer post on Medium about how he used Markov chains to generate a much longer version of Aphex Twin's aisatsana. Using a JSON representation of a MIDI file of the track, he built a Markov chain system that can generate over four million unique phrases, enough to play for over 451 days without repeating a phrase. The result is available to listen to on generative.fm.
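The Markov chain itself is a simple structure: record which phrase follows which in the original piece, then take a random walk over those transitions. Here's a toy Python sketch of the idea, not Bainter's implementation; the phrase IDs are placeholders for the note sequences he derived from the MIDI JSON.

```python
import random
from collections import defaultdict

# Placeholder phrase IDs standing in for the piece's musical phrases.
phrases = ["A", "B", "A", "C", "B", "A", "B", "C"]

# Record every observed "this phrase was followed by that phrase" pair.
transitions = defaultdict(list)
for current, nxt in zip(phrases, phrases[1:]):
    transitions[current].append(nxt)

def generate(start, length):
    """Random-walk the chain to produce a new phrase sequence."""
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(transitions[out[-1]]))
    return out

print(generate("A", 16))
```

Because duplicate pairs are kept in the lists, random.choice naturally weights the walk towards the transitions that occur most often in the source piece.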