The Human Interface
In the quest to enhance the symbiotic relationship between humans and machines, the two have become increasingly entangled. When technology develops faster than human needs, we start adapting to the tools we created to help us. The availability of our technologies has made us more impatient, more dependent on convenience, and, in many ways, more machine-like.
Trends come and go, but our yearning for technology that sounds, behaves, and looks like us has never changed. We, as human beings, welcome human experiences and human connections; we want to feel understood, and we want an emotional connection.
Transcending Natural Limitations
NEON was built on the premise that our interactions with the technology around us can be more human. Our venture started with a question: can the interface with technology go beyond buttons, controls, and voice commands? Can we push the boundaries of what is currently available? To do this, we needed to address the elaborate chain of logical reasoning that we use to decide whether an interaction is really human:
- Do they look real? (Reality)
- Is there lag in their movements? (Realtime)
- Do they react to what I say? (Responsive)
Together, these 3 Rs (Reality, Realtime, and Responsive) form the pillars of CORE R3. We turned to research as a rich source of inspiration to develop our algorithms, and incorporated some canonical principles: Behavioral Neural Networks, Evolutionary Generative Intelligence, and Computational Reality.
Let’s dive in and disambiguate.
Breaking Down the Code
CORE R3 begins with Neural Networks: computational systems that adaptively “learn” to perform specific tasks such as recognizing patterns, classifying objects, and making predictions. Information flows through such a network when patterns of data are fed in through its input units. Our Behavioral Neural Network was trained extensively on human data, learning human gestures, movements, and facial expressions. We determined the set of weights that maximizes the model’s accuracy in generating different human representations. This sets the foundation for the hyper-realistic features and movements of our NEONs and lays the groundwork for further abstraction.
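NEON has not published the internals of its Behavioral Neural Network, but the pattern-recognition idea described above — feeding data through input units and adjusting weights to maximize accuracy — can be sketched with a tiny network trained on the classic XOR pattern. Everything here (layer sizes, learning rate, epoch count) is illustrative, not NEON’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern-recognition task: learn XOR from four input patterns.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; sizes chosen arbitrarily for this sketch.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, out0 = forward(X)
loss_before = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(10_000):
    h, out = forward(X)
    # Backpropagation: gradient of mean-squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(0, keepdims=True)

_, out1 = forward(X)
loss_after = float(np.mean((out1 - y) ** 2))
preds = (out1 > 0.5).astype(int).ravel()
```

The same loop — forward pass, compare with the target, nudge the weights — scales up to the far larger networks that learn gestures and expressions; only the data and the architecture change.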
While neural network architectures can become extremely proficient at classifying human features and behaviors, they are far less adept at creating them. Evolutionary Generative Intelligence addresses this lack of imagination in traditional neural networks within our CORE R3 engine. Without any expressive or behavioral inputs, our algorithm learned to generate new, synthetic data through a recursive process. It began producing novel, original content that had never existed before. We started creating new realities.
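The internals of Evolutionary Generative Intelligence are not described here, but the core generative idea — learn a model of the data, then sample synthetic examples that never appeared in the training set — can be shown in its simplest possible form. The single-Gaussian model and all numbers below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are 500 observed measurements of some human trait.
observed = rng.normal(loc=170.0, scale=8.0, size=500)

# "Learn" a density model of the data: here, maximum-likelihood
# estimates of a single Gaussian's mean and standard deviation.
mu, sigma = observed.mean(), observed.std()

# Generate brand-new synthetic samples from the fitted model.
# None of these values appeared in the training data.
synthetic = rng.normal(mu, sigma, size=5)
```

Real generative systems replace the single Gaussian with a deep network and refine the model iteratively or adversarially, but the learn-then-sample loop is the same: the output is plausible under the training data without being a copy of it.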
The third model we applied in CORE R3 was Computational Reality, an already mature technology combining computer graphics, computer vision, and imaging. When we combined this traditional methodology with our two original machine-learning-driven paradigms, the result was an engine that gives rise to individual NEONs.
Artificial Intelligence (AI) has become one of the most pervasive technologies, permeating nearly every aspect of life; you would be hard-pressed to escape AI in today’s world. But technology should erase boundaries between people, not create them. Many of today’s digital technologies cherry-pick a narrow sub-circuit function to model, compose it in simulation, and gradually improve at specific goal-oriented tasks (think NLP and voice assistants). But their outputs remain robotic, leading us further away from natural human behavior. While algorithms can find the optimal solution to a highly constrained problem, these sets of instructions have yet to capture and resonate with our feelings. Lines of code still do not equate to human thinking, much less to human intelligence or emotion.
But here at NEON, we are harnessing the elusive elements that make up the very fabric of who we are. Our next AI engine, SPECTRA, complemented by CORE R3, will enable interactions with technology in a way that has never been done before. The framework will allow for true conversations and connections, and the platform will learn and build a working memory of your sentiment, allowing us to interface with technology in the most natural way: the human way.