Machine Learning as Enabler of Design-to-Robotic-Operation

Abstract

This essay promotes Artificial Intelligence (AI) via Machine Learning (ML) as a fundamental enabler of technically intelligent built-environments. It does so by detailing ML’s successful application within three deployment domains: (1) Human Activity Recognition; (2) Object, Facial-Identity, and Facial-Expression Recognition; and (3) Speech and Voice-Command Recognition. With respect to the first, the essay details previously developed ML mechanisms, implemented via Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) classifiers, that recognize a variety of physical human activities and thereby enable the built-environment to engage with its occupant(s) in a highly informed manner. With respect to the second, it details three previously developed ML mechanisms implemented individually via (i) BerryNet, for Object Recognition; (ii) TensorFlow, for Facial-Identity Recognition; and (iii) Cloud Vision API, for Facial-Expression Recognition. Together, these enable the built-environment to identify and differentiate between non-human and human objects, and to ascertain the latter’s identities and possible mood-states. Finally, with respect to the third, it details a presently developed ML mechanism, implemented via Cloud Speech-to-Text, that transcribes speech in several languages into string text used to trigger pertinent events within the built-environment. The sophistication of these ML mechanisms collectively imbues the intelligent built-environment with a continuously and dynamically adaptive character central to Design-to-Robotic-Operation (D2RO), the Architecture-informed, Information and Communication Technologies (ICTs)-based component of the Design-to-Robotic-Production & -Operation (D2RP&O) framework, which represents an alternative to existing intelligent built-environment paradigms.
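The first deployment domain can be sketched in miniature. The following is a hypothetical illustration, not the essay's actual implementation: it trains the two classifier families named above (SVM and k-NN, here via scikit-learn) on synthetic stand-ins for sensor-derived feature vectors; in a real D2RO deployment the features would be extracted from wearable or ambient sensor streams and labelled with activities such as sitting, walking, or running.

```python
# Hedged sketch: activity classification with SVM and k-NN classifiers.
# The data set is synthetic; feature count and class count are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for sensor-derived feature vectors (e.g. accelerometer
# statistics) labelled with three illustrative activity classes.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train both classifier families discussed in the essay.
svm = SVC(kernel="rbf").fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print(f"SVM accuracy:  {svm.score(X_test, y_test):.2f}")
print(f"k-NN accuracy: {knn.score(X_test, y_test):.2f}")
```

In practice, the recognized activity label would be forwarded to the built-environment's control layer, which selects an appropriate response for the occupant(s).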
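The final step of the third domain, mapping a transcript to an event, can likewise be sketched. This is a hypothetical illustration of the dispatch stage only: it assumes the transcription (e.g. from Cloud Speech-to-Text) has already produced a text string, and the command phrases and event names below are invented for the example rather than taken from the essay.

```python
# Hedged sketch: triggering built-environment events from transcribed speech.
# The command table and event names are illustrative assumptions.

def dispatch_command(transcript: str, commands: dict) -> str:
    """Match a transcript against known command phrases, fire the associated
    handler, and return the matched phrase (or 'ignored' on no match)."""
    text = transcript.lower().strip()
    for phrase, handler in commands.items():
        if phrase in text:
            handler()
            return phrase
    return "ignored"

# A minimal event log standing in for the built-environment's control layer.
events = []
commands = {
    "lights on":  lambda: events.append("lighting:on"),
    "lights off": lambda: events.append("lighting:off"),
}

print(dispatch_command("Please turn the lights on", commands))  # lights on
print(dispatch_command("What time is it?", commands))           # ignored
```

A multilingual deployment, as described above, would hold one such command table per supported language, keyed by the language code reported alongside the transcript.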