Technology, Human Experience, and Love: “Mom, what is that?”
As a child who greeted every mundane noun in life with a
similar question, I asked my birth mother, "What is that?" as I pointed to the
blinding light in the sky. She smiled down upon me and connected her own body
of knowledge and wisdom with her internal sense of my own context. "That
is called the Sun. Spelled S. U. N. It is a star and it gives light and energy
to our planet. Our eyes will hurt if we look directly at it, so I will show you
special pictures taken of the sun so you can look at it without getting hurt.
It is far bigger than we are."
While my mother lacked mathematically precise data, her answer,
and many answers like it in this precious dialogue from my childhood allowed me
to create my own working knowledge set about this world. This data store would
help me gain my bearings in the world. Ultimately, with persistence, her
answers would lead me toward some semblance of being an information-rich human
being.
A few short years after learning about the Sun, I had the
great fortune of getting a home computer. I found that when I typed in a
question, the answer immediately returned on the display was this:
"?SYNTAX ERROR" The computer did not comprehend my request. It failed
to understand me on any level. To get anything useful, I came to realize that I
had to learn how to program and code software. Through library books and
computer magazines, I learned what was required and ended up constructing my
own software. I eventually created text-based adventure games similar to
classics like Infocom's Zork. Type in "LOOK" and the computer would
display text such as "You are standing in a lush green forest. To the
north you see a giant shiny stone castle." While this output was
imaginative, the computer was still more or less not telling me anything that I
didn't expect it to tell me. This was a disappointment, because the first “search
engine” in my life, my mother, had become quite ill with cancer and often
was in the hospital. I needed more answers to more questions. As it happened,
it wasn't long before I found that I had to find those answers completely on my
own.
As a young adult I made computer games and software that
pretended to be smart and responsive. As a budding professional game developer
I had entered into a career making software that faked having conversations
with the human user. This “new media interactive entertainment experience” was
on some level just smoke and mirrors. Nothing yet compared with the memories of
my mother who understood not only what I was asking but also inherently
understood what I needed to know. Since those days, software and hardware have
continued to follow the path well outlined by futurist Ray Kurzweil, who
described desktop computers that would be able to express human-level
intelligence. We are on that road, but I think it's one that sits on the side
of a giant cliff.
As Google and other companies make and spend billions to
reach this level of human-computer interaction, we inch closer to the kind of
exchange so naturally found between a child and parent. Computers will become
teachers that will understand not simply what is being asked but also what the
user really needs. Some primitive contextual guesswork like this is already a
part of the experience now. Type in the latest blockbuster movie title and
you'll be able to buy a ticket to your local theater in just one click. Ask
Siri, Alexa, Cortana, Google Home, any number of super smart devices with your
voice, and you'll get a reply that often is helpful or useful. But once the
news, weather, traffic, and kitchen math conversions are done, despite clever
engineering, it still doesn't really understand you or comprehend what you
NEED to know. For example, ask your favorite map program how long it takes to
drive to St. Louis, and it will gladly tell you all the turns and
timings without first blasting a dire warning that a massive storm is hitting
the middle of the state in a few hours. It will be treacherous. That's what you
really needed to know much more than just the sequence of turns.
As with my map-and-directions illustration, everything
presented in your social media feed is guided by algorithms. Without any effort
on your part, it has been customized and filtered for maximum impact. This is
prime real estate for tech companies and content creators alike. Compare that
to an exchange you might have at a house party. If you are simply interacting
with people, fringe nonsense doesn’t really come into play. There are pop-up
filters built into most human beings. Nobody repeatedly invites the paranoid
conspiracy freak to be front and center at the regular gatherings. They are
left off of the invite list. There were no such rules governing our favorite
screen spaces this past year prior to the election.
My mother, or at least her spirit, in replying to questions
about the election, never would have offered a fake news story as part of her
answer. Even if she was aware of these urban myths, she never would have
thrown bad information into the mix without calling it out for what it was.
Having been tricked by a number of fake news links myself, I see that the
failure to properly curate the information presented is perhaps the ultimate
failure of consumer-facing artificial intelligence systems (if it in fact, in
even minor ways, altered the outcome of the election and, by extension, the
course of American history). It's a particularly significant problem because, in practice,
prior to the modern era, identifying truth and fact has been core to human
intelligence as a matter of natural selection and social behavior. So far this
intrinsic aspect of human exchange is not a requirement for screen space feeds.
In fact, as we all have seen, screen-space companies all benefited from more
crazy rather than less crazy. If you throw crap at a fan and somebody
investigates a splatter, job well done! Cha-ching. Link clicked. Advertiser
charged.
It is possible, looking to the future, to put more emphasis
on highlighting what is identifiable as fake information. Advanced artificial
intelligence systems can make a greater effort to validate the truthiness of any
given tidbit of proposed screen space filler. Post-election, some tech
companies have already stated that they were going to roll out solutions to
this problem. But here’s a catch: some argue that humans are inclined to
believe things based on “feeling” more than actual evidence; no such technology
will diminish, for example, the appeal of the propaganda that at times became
part of the new POTUS Twitter feed. I think there is a way forward even with
that being a persistent issue.
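In very rough terms, the kind of down-ranking of flagged content described above might be sketched like this. The item structure, credibility scores, and penalty values here are invented purely for illustration; a real system would draw its credibility signals from fact-checking models and far richer data:

```python
# Hypothetical sketch of re-ranking a feed so that items flagged as
# low-credibility are pushed down before they reach the screen.
# The "credibility" scores below are invented placeholders; a real
# system would obtain them from fact-checking models or databases.

def rank_feed(items, penalty=6.0, threshold=0.5):
    """Sort items by engagement, penalizing low-credibility content."""
    def score(item):
        flagged = item["credibility"] < threshold
        return item["engagement"] - (penalty if flagged else 0.0)
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Local weather alert",  "engagement": 3.0, "credibility": 0.9},
    {"title": "Shocking conspiracy!", "engagement": 8.0, "credibility": 0.1},
    {"title": "Election results",     "engagement": 6.0, "credibility": 0.95},
]

for item in rank_feed(feed):
    print(item["title"])
```

Even in this toy version, the most sensational item no longer wins simply by being the most clicked; the penalty term is where a value judgment, rather than raw engagement, enters the ranking.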
A vision I personally advocate is this. In the future,
another child losing a mother to cancer will turn to a computer to get answers
to life’s questions. The computer will know all of the values and context, and
understand that a strong sense of fact-based learning and information is to be
presented rather than plastic window dressing fantasies popular with conspiracy
gurus. If computers become our teachers and mentors, and seeing how much we
rely on them as intermediaries in our relationships and daily living, why not
choose now to aim with gusto toward the goal of spin-free search results and
feeds? Look more closely at the replies my mother gave me as a child. The
intention of her responses was guided by something that is the most difficult
thing to try to encapsulate in some programmatic soup of code and engineered
smarts. What she gave me in her replies was something we would immediately
embrace I think in looking at this symbiosis we’ve adopted. The answer is this:
Love.
Search results, and indeed future artificial intelligence
systems, must learn how to consider the human heart. They must mimic it, and
strive to reach its potential. This means that future systems must engage our
screen space not only with answers but with questions of their own that
discourage the notion of an idle and undisciplined mind. They must guide their
responses to encourage our creativity, strengthen our identity as humans in a
global society, and also encourage all of the positive aspects of good human
communication and community. We must challenge ourselves to fold these
strategies into the algorithms that evaluate results before they are presented
to the user. We must add Love.
Engineering directors in the technology field must reflect
on the vital exchange of information they have relied on throughout their
lives. At the top of what makes them information-rich and well-rounded human
beings were the values and examples set forth by mentors – parents, teachers,
friends, pastors, and so on. Grab those values, and adopt them, and wire them
into the code. Tech developers building a toolset which people make integral to
who they are as human beings must give users something that demonstrates by
example the qualities inherent in Love: kindness, compassion, curiosity,
empathy, gratitude, charity, generosity, and so many other things.
There is profit in such work, and it’s the right thing to do.
We’ve jumped hurdles to make the system function as it does now, but it
doesn’t work for our good.
As our computers go beyond Kurzweil’s Singularity, if
computers have seen that this was important to us, perhaps they will continue
teaching us more about ourselves rather than the far less rewarding business of
maximizing profit and emblazoning brand names on the sides of buildings
throughout the world. Now, or in the future, when I ask my computer for
information, or for help on any kind of problem, trivial or great, is it wrong
to expect to also hear the spirit of my mom, who always replied with an
expression of Love?
"That is called the Sun. Spelled S. U. N. It is a star
and it gives light and energy to our planet.”