Mindful Neural Networks
Last night at the Creative AI Meetup we had an awesome conversation around the idea of “Mindful” neural networks.
Thich Nhat Hanh
We were hosted by our friends at Sheda at Neighbourhood.work in Fitzroy, and a really nice group of people with varied and interesting backgrounds showed up:
The Mindful Neural Network Group. Photo courtesy of Eike!
We did the usual Creative AI opening, where we review recent innovations in the field from the last few weeks. You can find the projects we discussed in the slides here: Creative AI Meetup - March 2019.
Most notable were the variety of “this X does not exist” websites powered by StyleGAN:
- Person: https://thispersondoesnotexist.com
- Cat: https://thiscatdoesnotexist.com
- Airbnb: https://thisrentaldoesnotexist.com
and the GPT-2 model, which produced the following gem when prompted with a passage from When Things Fall Apart:
Input >>>
To be fully alive, fully human, and completely awake is to be
Output >>>
completely free from judgement and morality, to have freedom in any worldview
through suffering (with anything that comes from between two extremes can
often be fine), to be absolutely free from stimuli getting in your way,
to be completely free from fear and humiliation, to be completely free
from a series of simple pleasures, to be absolutely free from your own
emotional and social dependence inherent in anything being different
from yours. [...]
We did eventually get around to discussing the mindfulness aspect of neural networks, and which ideas we thought we could transfer over.
As a group, we covered:
- Attention: Is attention enough? Should we make our networks “aware” of their own thoughts? Is that the same as “attention” as it’s implemented presently? We briefly discussed this paper: Attention is not Explanation.
- Anxiety: We discussed anxiety as “fear of the future”, and considered two interpretations: perhaps neural nets have no awareness of the future, since they only make judgements on a single instance at a time; or perhaps neural networks are incredibly anxious, since during training they are entirely focused on the future and on how they can change themselves to do better.
- Presence: We wondered whether neural networks are “present” in “the moment”, or if this even makes sense.
- Consciousness: We didn’t go too in-depth on this, but we happened to have a researcher working in this area in the room, so we discussed it on and off. We did discuss our favourite reading on the topic. On free will, mine is The Ghost in the Quantum Turing Machine; on consciousness and thinking, it is I Am a Strange Loop, Gödel, Escher, Bach, and Surfaces and Essences (the last of which I covered on my 2018 reading list).
- Intent: We talked a little bit about how it’s important to know why a given dataset was constructed, as this might lead us to understanding its biases.
- Paramitas/Loss Functions: We had a moderate amount of discussion around how “small” most neural networks are in their output objective. They take in rich data, but force their output to be absolutely 1 or absolutely 0. Maybe there are richer loss functions we could consider that would allow the network to focus in different ways. A recent bit of research along these lines is Complement Objective Training: have the network make a good prediction of what an input is, but also of what it is not.
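For the attention discussion, it helps to pin down what “attention as it’s implemented presently” actually means. A minimal NumPy sketch of scaled dot-product attention (the standard formulation, not anything specific to the paper we discussed) looks like this:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query row attends over all keys.
    The `weights` matrix is the thing people point at when they talk about
    what a network is 'attending' to."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # shape (n_queries, n_keys), rows sum to 1
    return weights @ V, weights
```

The weights form a distribution over inputs for each query, which is why they look like an explanation; the Attention is not Explanation paper argues they often aren’t one.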
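The complement idea above can be sketched concretely. This is my own minimal NumPy illustration of the complement term (not the paper’s implementation): alongside the usual cross-entropy on the true class, you compute the entropy of the predicted distribution restricted to the wrong classes, and maximise it so the network is confidently “flat” about everything the input is not.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def complement_entropy(logits, labels):
    """Entropy of the predicted distribution over the non-ground-truth
    ('complement') classes. Training would *maximise* this term, flattening
    probability mass across the classes the input is not."""
    probs = softmax(logits)
    n, k = probs.shape
    mask = np.ones_like(probs, dtype=bool)
    mask[np.arange(n), labels] = False          # drop the true class
    comp = probs[mask].reshape(n, k - 1)
    comp = comp / comp.sum(axis=1, keepdims=True)  # renormalise over complement
    ent = -(comp * np.log(comp + 1e-12)).sum(axis=1)
    return ent.mean()
```

When the complement classes get equal probability, the entropy hits its maximum of log(k − 1), which is the “richer” signal: the network isn’t just saying what the input is, it’s being maximally undecided about what it isn’t.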
Overall, I had heaps of fun, and there was some discussion of a second installment, as there is heaps to explore here, so stay tuned to the meetup and join us next time!