Dunvegan Thought Spot


In the research The Dunvegan Group conducts to support our CCR™ (Customer Care & Retention™) programs, we discover articles, blog posts and videos that, although not directly related to our work, are thought-provoking or concern matters you may want to think about.  ‘Thought Spot’ covers a broad range of subjects.

The posts in ‘Thought Spot’ are selected by Olev Wain, Ph.D., VP of Research at The Dunvegan Group. 

We welcome your feedback!


Solving The Enigma of Artificial Intelligence (AI)

As defined by techopedia.com (edited):

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

  • Speech and image recognition
  • Learning
  • Planning
  • Problem solving

Whatis.com defines machine learning as (edited):

A type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.

Machine learning focuses on the development of computer programs that can change when exposed to new data. 

The process of machine learning is similar to that of data mining: both search through data to look for patterns.

However, instead of extracting data for human comprehension, as in data mining applications, machine learning uses that data to detect patterns and adjust program actions accordingly.
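The idea of a program that "changes when exposed to new data" can be sketched in a few lines. This toy classifier is purely illustrative (it is not from any of the quoted sources): it counts which words it has seen with which category, and its predictions improve as it is shown more examples, without the rule ever being programmed by hand.

```python
# A minimal sketch of learning from data: word counts per category are
# built up from examples, and prediction picks the best-matching category.
from collections import defaultdict

class TinyLearner:
    def __init__(self):
        # word counts per category, accumulated from training examples
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, words, category):
        # "changing when exposed to new data": each example updates the counts
        for word in words:
            self.counts[category][word] += 1

    def predict(self, words):
        # score each category by how often its learned words appear here
        scores = {c: sum(wc.get(w, 0) for w in words)
                  for c, wc in self.counts.items()}
        return max(scores, key=scores.get)

learner = TinyLearner()
learner.learn(["free", "prize", "winner"], "spam")
learner.learn(["meeting", "agenda", "notes"], "work")
print(learner.predict(["free", "winner"]))   # prints: spam
```

Nothing about "spam" was hard-coded; the behavior came entirely from the examples, which is the distinction the definition above draws between machine learning and explicit programming.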

Writing on theverge.com on October 10, 2016, James Vincent observed that (edited):

While companies like Google are confidently pronouncing that we live in an AI age with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done.

Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true artificial intelligence.

One problem is the lack of insight we have into how these systems work in the first place and how they reach their conclusions.

A good demonstration of this problem comes from an experiment at Virginia Tech. Researchers created what is essentially an eye-tracking system that records which pixels of an image an artificial intelligence agent looks at first.

The researchers showed the artificial intelligence (AI) agent pictures of a bedroom and asked it: "What is covering the windows?"

They found that instead of looking at the windows, the AI agent looked at the floor. Then, if it found a bed, it gave the answer "there are curtains covering the windows."

This happened to be right, but only because of the limited data the network had been trained on.

Based on the pictures it had been shown, the AI agent had concluded that if it was in a bedroom there would be curtains on the windows.

So when it saw a bed, it stopped looking — it had, in its eyes, seen curtains. Logical, of course, but also daft. A lot of bedrooms don’t have curtains!
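The shortcut the agent took can be made concrete with a toy sketch. This is an illustration of the failure mode described above, not the Virginia Tech code: when every training bedroom happens to contain curtains, the cheapest rule that fits the data is "see a bed, answer curtains", so the agent never needs to look at the windows at all.

```python
# Toy illustration of a spurious shortcut learned from biased training data.
def train_shortcut_agent(training_scenes):
    # If every training bedroom (scene with a bed) also had curtains,
    # the simplest rule consistent with the data ignores the windows:
    if all("curtains" in scene["objects"] for scene in training_scenes
           if "bed" in scene["objects"]):
        return lambda scene: "curtains" if "bed" in scene["objects"] else "unknown"
    return lambda scene: "unknown"

# Biased training data: every bedroom shown happens to have curtains.
training = [{"objects": {"bed", "curtains", "lamp"}},
            {"objects": {"bed", "curtains", "rug"}}]
agent = train_shortcut_agent(training)

# A bedroom with blinds instead: the shortcut still answers "curtains".
print(agent({"objects": {"bed", "blinds"}}))   # prints: curtains
```

The answer is sometimes right for the wrong reason, which is exactly why the attention-tracking experiment mattered: only by watching where the agent looked could the researchers see that it had stopped at the bed.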

Understanding how these AI agents work is critical; otherwise, systems may make decisions for which no one understands the reasons.

Writing on technologyreview.com on March 14, 2017, Will Knight concludes:

Explainability isn’t just important for justifying decisions. It can help prevent things from going wrong.

An image classification system that has learned to focus purely on texture for cat classification might be fooled by a furry rug. 

So offering an explanation could help researchers make their systems more robust, and help prevent those who rely on them from making mistakes.  
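Knight's furry-rug example follows the same pattern as the curtains experiment, and a toy sketch (again illustrative, not a real image classifier) shows why an explanation would expose the flaw: the model keys on a single learned feature, texture, and ignores shape entirely.

```python
# Toy sketch of a classifier that learned only texture for "cat".
def texture_only_classifier(image):
    # Shape is ignored; "furry" is the one feature the model learned to use.
    return "cat" if image["texture"] == "furry" else "not-cat"

print(texture_only_classifier({"texture": "furry", "shape": "cat"}))    # prints: cat
print(texture_only_classifier({"texture": "furry", "shape": "rug"}))    # prints: cat (wrong!)
print(texture_only_classifier({"texture": "smooth", "shape": "vase"}))  # prints: not-cat
```

An explanation such as "labeled cat because the texture is furry" would immediately tell researchers the model is brittle, which is Knight's point about explainability helping prevent things from going wrong.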

Your thoughts?

Image courtesy of agsandrew at FreeDigitalPhotos.net

