Last week, our Data Science Specialist Max Pagels and Data-driven Design Specialist Lassi A. Liikkanen shared their experiences with machine learning at the Alma Talent Machine Learning event in Helsinki. For those of you who couldn’t make it, here’s what they discussed.
Machine Learning using cloud services
Leveraging the AI/machine learning services provided by Amazon Web Services and Google Cloud was the focus of Max’s talk and demo. AWS and Google Cloud provide a host of fully managed, ready-made AI services, as well as hybrid approaches where one can train and serve custom machine learning models. ML models require some pretty hefty hardware to train in a reasonable amount of time — cloud compute helps you iterate faster, and more cost-efficiently, than on-premise solutions. In his talk, Max demonstrated how easy it is to get up and running by building the same multi-class classification model twice: once with AWS Machine Learning and once with the Google Cloud Machine Learning Engine (TensorFlow).
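To give a flavour of what such a model does under the hood: multi-class classification boils down to turning per-class scores into probabilities with a softmax and picking the most likely class. The following NumPy sketch on synthetic data is illustrative only — it is not the code from Max’s demo, which used the managed AWS and Google Cloud services.

```python
import numpy as np

# Illustrative multi-class (softmax) classifier on synthetic data.
# Not the demo code from the talk -- just the core idea, stripped down.
rng = np.random.default_rng(0)
n, d, k = 300, 4, 3                      # samples, features, classes
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, k))
y = (X @ true_W).argmax(axis=1)          # synthetic class labels

def softmax(z):
    # Subtract the row max for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, k))
onehot = np.eye(k)[y]
for _ in range(500):                     # plain gradient descent
    P = softmax(X @ W)                   # predicted class probabilities
    W -= 0.1 * X.T @ (P - onehot) / n    # cross-entropy gradient step

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
```

Managed services like AWS Machine Learning hide this loop entirely; frameworks like TensorFlow let you define the same model explicitly and scale the training out on cloud hardware.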
Cloud vendors are betting heavily on AI, and their offerings — although overlapping — have their pros and cons. The slides from Max’s talk, including a comparison matrix, can be found on SlideShare:
The impact of ML on interaction design and user interfaces
The impact of AI and ML on digital service user experience and on interaction designers’ work was the focus of Lassi’s talk. The talk presented three themes Lassi has explored before around the applications of machine learning and the design patterns required to support a new kind of digital service. In addition, Lassi discussed how natural interfaces, whether speech-, gesture- or gaze-driven, pave the way towards collaboration between humans and intelligent systems.
The new development here is that machine learning allows the creation of interfaces that can understand the user more quickly, or that totally transform how certain rather complicated tasks are carried out. The example Lassi highlighted was from collaborative robotics, a new domain pioneered by Rethink Robotics, an American maker of intelligent industrial robots founded by former MIT robotics professor Rodney Brooks.
The two revolutionary ideas for interaction design embedded in Rethink Robotics products relate to human-robot collaboration and to how the robots are configured. These features are a response to a problem created by the traditional black-box design of industrial robot intelligence. The old-fashioned approach was to program robots with great effort and in great detail. Robot programming was an expertise of its own, and factory-floor workers knew little about how the robot was programmed or what it was doing.
The first idea of collaborative robots is to teach them in a “natural” way, by manually guiding their “hands” and “fingers” through three-dimensional space, without writing a line of code. The second is to reveal the thoughts, or intentions, of the robot with a display mimicking a human face. The gaze of the robot, functionally useless to the machine itself, allows human workers to get a sense of its meaning and predict its actions. That, too, is a natural way to do things.
This is why collaborative robots make such a great example of the future of designer-AI collaboration. We will be working with artificial narrow intelligence applications to get our stuff done more quickly.
Want to learn more? SC5 offers on-premise training in machine learning & data-driven design — just reach out and ask!