December 3, 2020

Cats, Algorithms & Expectations

One day my colleague – a man of great technical knowledge and a cat person – installed smart cameras at home for security. Powered by AI, these cameras would sound an alarm when any movement was detected in his house. Even though it sounded great in principle and my colleague could see what was going on in the house remotely, he ended up turning the more advanced AI functionality off. After trying to configure the system’s motion detectors to ignore the cat, he decided that sleeping well was more important than being constantly alerted to movement.

The solution offered basic movement-detection functionality which fell short of real-life expectations. And while potentially amazing, AI can sometimes disappoint if it's not approached with some flexibility.

We’re living in a changing world that’s confined neither by our own expectations nor by AI’s predefined models. Events that confuse us can also confuse machines if uncertainty is introduced to a degree that the AI doesn’t expect, or more specifically, hasn’t been trained to expect.

Be it a cat or a black swan event, all sorts of factors become inputs to an artificial brain that must react to unpredictable stimuli. In its current form, AI can be “fragile”: unexpected data can make AI systems struggle, requiring interpretation from a human operator. As people who deal with data centers tend to say, the system has hiccups exactly when there’s nobody around…

With our increasing reliance on technology, a helpless AI waiting for manual guidance does not add to the reliability of the system. We often don’t know how a model will react. A neural network, for example, is essentially a black box. There’s no way to predict or understand its behavior; one can only test it. Some events happen extremely rarely, say once in a hundred years. Even though such conditions can be tested during a simulation process, in real life bugs are still likely to happen.
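To make that concrete, here’s a minimal sketch of what simulating a rare, out-of-range event might look like: feed the system an input far outside anything it has seen, and check whether it at least recognizes the input as unfamiliar. The data, the threshold, and the is_familiar check are all invented for illustration; a real deployment would rely on the model’s own uncertainty estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data: the range of conditions the system has actually seen.
train = rng.normal(0.0, 1.0, size=(5000, 8))
mean, std = train.mean(axis=0), train.std(axis=0)

def is_familiar(x, z_limit=4.0):
    # A crude out-of-distribution check: if any feature sits more than
    # z_limit standard deviations from the training mean, the model has
    # no basis for a reliable prediction on this input.
    z = np.abs((x - mean) / std)
    return bool((z < z_limit).all())

everyday = rng.normal(0.0, 1.0, size=8)         # routine input
black_swan = rng.normal(0.0, 1.0, size=8) * 30  # simulated rare event

print(is_familiar(everyday))    # True  -> let the model act
print(is_familiar(black_swan))  # False -> escalate instead
```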

In the case of an event or emergency that throws all parameters out of normal range, AI models may need a period of time to learn how to act in such a situation. But this is the crux of the matter: an emergency is a situation that’s unexpected, yet requires immediate action. Where a human being naturally has a survival instinct, a machine doesn’t have one; it only does the jobs that it’s trained to do.

We’re now able to create machines of incredible computing power, but any successful AI implementation still relies heavily on people. And currently there don’t seem to be enough people to go round.

Read more: Why We Must Close the Digital Skills Gap

So, what are the potential strategies for dealing with AI’s apparent fragility?

Redundancy

To solve the problem, some level of redundancy should be introduced into the system, either human or AI, to take over in an emergency scenario.

Human: the system could stop and fall back to a manual operator every time the situation is uncertain. In this case, the cat owner would receive a notification, not an alarm, to check the situation, rather than the AI implementing an incorrect course of action. This adds a second layer of defense against unexpected situations, while still automating responses most of the time. The disadvantage of such an approach is that the response time in an emergency significantly increases, making it unsuitable for critical systems such as those in healthcare, industrial automation, or driverless vehicles.
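As a rough sketch of what that hand-off might look like, consider the cat-camera example. The labels, the confidence threshold, and the actions below are all hypothetical, stand-ins for whatever a real system would produce:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tuned per deployment

@dataclass
class Detection:
    label: str         # e.g. "cat", "person", "unknown"
    confidence: float  # the model's own certainty, 0.0 to 1.0

def handle(detection: Detection) -> str:
    # Route a detection: act automatically when confident, hand off
    # to a human when the situation is uncertain.
    if detection.confidence >= CONFIDENCE_THRESHOLD:
        if detection.label == "person":
            return "sound alarm"
        return "ignore"  # confident it's just the cat: stay quiet
    # Uncertain: notify rather than alarm; a human makes the call.
    return "notify owner for review"

print(handle(Detection("person", 0.97)))   # sound alarm
print(handle(Detection("cat", 0.93)))      # ignore
print(handle(Detection("unknown", 0.44)))  # notify owner for review
```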

AI: we could deploy several models with border conditions that trigger each algorithm when appropriate. In effect, this works as a built-in AI redundancy that selects the “best algorithm”, reflecting the fact that each one has areas it’s optimized for and others where it isn’t. The solution might have several AI algorithms working in parallel, where some work better during normal operations and others are optimized for emergencies. Such an approach could potentially allow for a seamless transition from one algorithm to another, so that the best-suited model is working at any given time. The drawback is that it adds complexity to the solution, which may lead to increased costs and maintenance requirements.
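A minimal sketch of such border conditions might look like the following, where both models, the operating range, and the actions are invented purely for illustration:

```python
def normal_model(reading: float) -> str:
    # Stand-in for a model tuned for everyday operating conditions.
    return "adjust setpoint slightly"

def emergency_model(reading: float) -> str:
    # Stand-in for a model trained on out-of-range scenarios.
    return "initiate safe shutdown"

# Border conditions: the range the normal model is optimized for.
# Outside it, control passes seamlessly to the emergency model.
NORMAL_RANGE = (10.0, 90.0)

def dispatch(reading: float) -> str:
    low, high = NORMAL_RANGE
    model = normal_model if low <= reading <= high else emergency_model
    return model(reading)

print(dispatch(42.0))   # normal operations
print(dispatch(250.0))  # parameters out of range -> emergency model
```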

Collaborative AI

The system may work at its best when communicating with other similar systems. This is a collaborative approach similar to the one we see in cybersecurity: teams feed insights back into the system so that all network elements can stay up to date as new threats emerge. There are clear benefits to such a deployment, as each installation can potentially tap a far greater pool of wisdom. That said, privacy concerns and a lack of frameworks for sharing such information may hinder its adoption.
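In its simplest form, the idea is just a shared pool that every installation both contributes to and pulls from. The sketch below invents all of its names and structure; a real system would need the privacy and governance frameworks mentioned above:

```python
# Stands in for a shared service that all installations can reach.
shared_signatures: set[str] = set()

class Installation:
    def __init__(self, name: str):
        self.name = name
        self.known: set[str] = set()

    def learn_locally(self, signature: str) -> None:
        # Record a newly observed threat and publish it to peers.
        self.known.add(signature)
        shared_signatures.add(signature)

    def sync(self) -> None:
        # Pull insights contributed by the other installations.
        self.known |= shared_signatures

site_a, site_b = Installation("A"), Installation("B")
site_a.learn_locally("unusual-login-pattern-17")
site_b.sync()
print("unusual-login-pattern-17" in site_b.known)  # True
```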

Our ultimate goal, of course, would be to try to create a ubiquitous model that includes all possible conditions, but until then, like AI, we can only keep learning.

