Dunvegan Thought Spot

In the research The Dunvegan Group conducts to support our CCR (Customer Care & Retention) programs, we discover articles, blog posts and videos which, although not directly related to our work, are thought-provoking or concern matters you may want to think about. ‘Thought Spot’ covers a broad range of subjects.

The posts in ‘Thought Spot’ are selected by Olev Wain, Ph.D., VP of The Dunvegan Group. 

We welcome your feedback!

Our Hackable World: At What Cost?

Hardly a week goes by without our reading about a serious data breach at a large corporation or government agency.

Financial institutions are also experiencing theft of funds through hacking. For example, Wikipedia reports that in February 2016 the central bank of Bangladesh had $101 million withdrawn from its account at the Federal Reserve Bank of New York and transferred to fictitious accounts around the world.

Although most of this money was not recovered, the situation could have been worse.

A $20 million transfer to Sri Lanka was blocked only because someone in one of the routing banks in the global SWIFT network for transferring funds saw a spelling error in the documentation and sounded the alarm. Otherwise this transfer would have gone through to the fictitious recipient.

You have to wonder about the vulnerability of systems for handling data and the transfer of money. Part of the explanation of how systems can be hacked is in how they are built.

The Economist explained in its April 8, 2017 issue:

Modern computer chips are typically designed by one company, manufactured by another and then mounted on circuit boards built by third parties next to other chips from yet more firms.

A further firm writes the lowest-level software necessary for the computer to function at all. The operating system that lets the machine run particular programs comes from someone else.

The programs themselves come from someone else again.

A mistake at any stage, or in the links between any two stages, can leave the entire system faulty—or vulnerable to attack.

Errors are also made in writing source code, the human-readable instructions that are compiled before a computer can execute a program. Even at a low error rate of one line in 1,000, a program with 1 billion lines of source code can initially contain 1 million flawed lines.
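The arithmetic above scales linearly and is easy to check; a quick sketch (illustrative only, not a model of real-world defect rates):

```python
# Illustrative arithmetic only, using the figures in the text:
# at an error rate of one line in 1,000, how many of 1 billion
# lines of source code can be expected to contain an error?
lines_of_code = 1_000_000_000   # 1 billion lines
lines_per_error = 1_000         # one error per 1,000 lines

expected_flawed = lines_of_code // lines_per_error
print(f"{expected_flawed:,} lines expected to contain an error")
# prints "1,000,000 lines expected to contain an error"
```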

Getting each of those lines to interact properly with the rest of the program they are in, and with whatever other pieces of software and hardware that program might need to talk to, is a task that no one can get right first time.

Any of these errors, if detected, could potentially be exploited by a hacker.

According to the Cybersecurity Business Report on August 22 2016, the global cost of cybercrime is expected to reach $6 trillion annually by 2021.

The cybercrime cost prediction includes damage and destruction of data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud, post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm.

There are other costs not factored into this estimate: the costs associated with the impact that fear, stress and anxiety have on those directly and indirectly affected by the crime.

As an example, the Office of Personnel Management (OPM) for the US government, which manages information files for the civil service, was hacked some time before 2015.

This security breach involved over 21 million victims who had applied for government security clearances and undergone extensive background investigations … files that included the names of family members, spouses and friends. All of this data was accessed by hackers.

In addition, the fingerprint files of 5.6 million federal employees were hacked … many of these employees have access to classified material and facilities and use their fingerprints as identification.

What price do we put on the fear, stress and anxiety these people experience in not knowing if or when or how this data will be used to exploit any vulnerabilities they have?

Your thoughts?

Image courtesy of matejm at FreeDigitalPhotos.net

Relevant links:

https://en.wikipedia.org/wiki/2016_Bangladesh_Bank_heist

http://www.economist.com/news/science-and-technology/21720268-consequences-pile-up-things-are-starting-improve-computer-security

https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach

http://www.csoonline.com/article/3110467/security/cybercrime-damages-expected-to-cost-the-world-6-trillion-by-2021.html

http://www.cnbc.com/2016/02/05/an-inside-look-at-whats-driving-the-hacking-economy.html

http://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/

Read More

Re-invent Your Business – Just Like The French Foreign Legion Did, Twice In Its History

As the circumstances or the environment your business operates in change, you must think about repositioning or re-inventing your company so it remains relevant to your customers’ changing needs.

This is precisely what the French Foreign Legion (FFL) did after the First World War and again in the 1960s after participating in an attempted coup d’état against French President Charles de Gaulle.

But first some background . . .

Founded in 1831, the French Foreign Legion was created for foreign nationals who were willing to undertake military service on behalf of France.

Paradoxically, the Legionnaires’ loyalty has always been to the Legion and NOT to France.

The original purpose of the FFL was to militarily protect and expand the French colonial empire in the 19th century.

The popular and lingering view of the FFL is the one depicted in the 1939 movie ‘Beau Geste’, starring Gary Cooper, in which men in blue uniforms march through the North African desert when not fighting the enemy.

Many of these men had apparently joined the Legion to escape the long arm of the law in their native country or because of various personal problems in their private lives.

During the First World War, the FFL fought on many fronts, but by the end of the war serious consideration was being given to disbanding the Legion because of the high casualty rate it had suffered . . . there were not many legionnaires left.

For the Legion to survive, a way had to be found to encourage enlistment.

As Robert Twigger described the situation on aeon.com (edited):

Colonel Paul-Frédéric Rollet came to the Legion’s rescue. He understood that, instead of offering a sanctuary for runaway convicts, the Legion needed to give its men a new myth of belonging and self-sacrifice.

Rollet was a military genius who understood the inner symbolism of such things as heroic defeats, odd uniforms and lost limbs. For example, Sir Adrian Carton de Wiart, one of Britain’s most decorated officers, and Admiral Horatio Nelson were missing hands or arms.

Suggestively, Paul Rollet went into battle with just a rolled umbrella. He believed that a commander showed lack of faith in his men if he needed to be armed, and besides, it distracted from his real task of inspiring his soldiers to fight.

That Rollet seized on the Legion’s heroic defeat at Camarón, fighting the Mexican Army in 1863, is no accident: men brought up to accept death and mutilation as the price of never being forgotten by their uber-family (the Legion) are stronger than those bribed with the comforting notions of victory and glory.

Rollet knew that an army doesn’t march on its feet, or even its stomach. It marches on the stories it tells itself. So he made sure that the Legion was full of traditions and stories and rituals.

He also turned a few marching songs into full-blown anthems. However tough, legionnaires must learn to sing with gusto the songs of former warriors.

Other armies don’t really do this, nor do the officers bring the men breakfast once a year (on Camarón Day, of course).

This action alone mimics a family in its concern. Every Legion memoir (and they are legion), however much it complains of bullying or incompetence, mentions with heartfelt gratitude the songs and traditions imbibed alongside the forced marches.

Fast forward to 1961 . . . when the FFL’s First Paratroop Regiment participated in the failed coup d’état to overthrow French President Charles de Gaulle. The First Paras were disbanded in the following months.

Robert Twigger continues:

The coup attempt brought to the surface the troubled relationship between France and its Foreign Legion. The French admire it and yet don’t quite trust it.

Another re-invention was required.

This time, the solution was truly bold: to turn the Legion into an elite force, a strike force, the kind that could easily put down a coup, or stage one in another country.

The un-disgraced 2nd Parachute Regiment [which did not participate in the attempted coup in 1961] became the ‘Young Lions’ of this newly created force.

Given our rapidly evolving business climate, is it time to re-invent your company to better serve your customers’ needs?

Your thoughts?

Image courtesy of tpsdave at All-free-download.com

Article Link:

https://aeon.co/essays/why-young-men-queue-up-to-die-in-the-french-foreign-legion#

Read More

Solving The Enigma of Artificial Intelligence (AI)

As defined by techopedia.com (edited):

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

  • Speech and image recognition
  • Learning
  • Planning
  • Problem solving

Whatis.com defines machine learning as (edited):

A type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.

Machine learning focuses on the development of computer programs that can change when exposed to new data. 

The process of machine learning is similar to that of data mining. Both systems search through data to look for patterns.

However, instead of extracting data for human comprehension -- as is the case in data mining applications -- machine learning uses the data to detect patterns and adjust program actions accordingly.
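The distinction the definition draws, a program whose behavior changes as new data arrives rather than being explicitly programmed with a fixed answer, can be shown with a deliberately tiny sketch (the class and names here are invented for illustration, not taken from any library):

```python
# A minimal, illustrative sketch of "learning from new data":
# the program's output adjusts as each observation arrives,
# instead of being hard-coded in advance.
class RunningAverageModel:
    """Predicts the mean of everything it has seen so far."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def learn(self, value):
        """Update internal state with one new observation."""
        self.count += 1
        self.total += value

    def predict(self):
        """Return the current best guess (0.0 before any data)."""
        return self.total / self.count if self.count else 0.0

model = RunningAverageModel()
for observation in [10, 12, 14]:   # new data changes the behavior
    model.learn(observation)

print(model.predict())  # 12.0
```

Real machine-learning systems fit far richer models, but the feedback loop is the same shape: observe, update state, predict.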

Writing on theverge.com on October 10 2016, James Vincent observed that (edited):

While companies like Google are confidently pronouncing that we live in an AI age with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done.

Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true artificial intelligence.

One problem is the lack of insight we have into how these systems work in the first place and how they reach their conclusions.

A good demonstration of this problem comes from an experiment at Virginia Tech. Researchers created what is essentially an eye tracking system which records which pixels of an image an artificial intelligence agent looks at first.

The researchers showed the artificial intelligence (AI) agent pictures of a bedroom and asked it: "What is covering the windows?"

They found that instead of looking at the windows, the AI agent looked at the floor. Then, if it found a bed, it gave the answer "there are curtains covering the windows."

This happened to be right, but only because of the limited data the network had been trained on.

Based on the pictures it had been shown, the AI agent had concluded that if it was in a bedroom there would be curtains on the windows.

So when it saw a bed, it stopped looking — it had, in its eyes, seen curtains. Logical, of course, but also daft. A lot of bedrooms don’t have curtains!

Understanding how these AI agents work is critical because otherwise decisions can be made for which no one understands the reasons.

Writing on technologyreview.com on March 14 2017, Will Knight concludes:

Explainability isn’t just important for justifying decisions. It can help prevent things from going wrong.

An image classification system that has learned to focus purely on texture for cat classification might be fooled by a furry rug. 

So offering an explanation could help researchers make their systems more robust, and help prevent those who rely on them from making mistakes.  

Your thoughts?

Image courtesy of agsandrew at FreeDigitalPhotos.net

Relevant links:

https://www.techopedia.com/definition/190/artificial-intelligence-ai

http://whatis.techtarget.com/definition/machine-learning

http://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks

https://www.technologyreview.com/s/603795/the-us-military-wants-its-autonomous-machines-to-explain-themselves/?set=603859

Read More

Reversal Thinking & Innovation - Circular Airport Runways

Process reversal can lead to innovation.

Reversal thinking essentially involves reframing a process by thinking about it ‘backwards’.

A good example is how we print documents . . . the paper moves through the stationary printer.

Alternatively, the printer could move across a stationary sheet of paper while it prints. This is exactly what ZUtA Labs did when they developed their first mini-robotic pocket printer which is about the size of a hockey puck and twice as thick.

A second example involves what is commonly known as 3-D printing or additive manufacturing, which is the reverse of subtractive manufacturing.

An example of subtractive manufacturing is when a piece of steel has portions of it removed to create a blade for a gas turbine engine. This blade can also be created through an additive process where material is added layer-by-layer (i.e., 3-D printed).

A third example involves airport design and runway layout. When planes land, they do so on runways that have been laid out to take into account the usual direction of the winds to maximize the probability of an airplane landing into headwinds.

From time to time there are crosswinds, which if severe, can cause the airport to cease operations or require airplanes to make their approaches flying almost sideways or at an angle to the runway.

In a make-believe-world it would be ideal to make movable runways so that pilots can always make their landing approaches and take-offs directly into headwinds.

One way of accomplishing this would be to build a circular runway which is 2.2 miles in diameter. Work on this concept has been in progress for years.

Katharine Schwab wrote on fastcodesign.com on March 27 2017 (edited):

Since 2012, Henk Hesselink and his team at the National Aerospace Laboratory in the Netherlands have been working on a runway design that’s circular instead of straight.

Their so-called Endless Runway Project, funded by the European Commission’s Seventh Framework Programme, proposes a circular design that would enable planes to take off in whatever direction is most advantageous for them, namely the direction without crosswinds.

As Hesselink tells Co.Design, crosswinds are exactly what they sound like: winds that buffet an airplane from the side as it lands. He was inspired to create a new kind of runway while watching “scary” landing videos online, which show crosswinds in action.

When crosswinds are light, they have no impact on taking off or landing, but when they’re too strong, runways facing perpendicular to the crosswinds have to be shut down entirely—which can seriously impact not just one airport, but the entire network. It’s something that happens frequently near the ocean.

For instance, Hesselink says that the Amsterdam airport often has to switch between runways during periods of bad conditions, and in smaller cities with fewer runways, crosswinds can grind all flights to a complete halt.

But the circular runway system that Hesselink designed, with a diameter of about 2.2 miles and circumference of about 6.9 miles, can accommodate two planes landing simultaneously even when there are bad crosswinds.

That’s because there are always two areas on the ring where the crosswinds will be aligned with the direction of takeoff. In good conditions, three planes can land and take off simultaneously.

The circular runway works almost like a high-speed racetrack or roulette wheel, Hesselink says. If the circular runway were completely flat on the ground, the centrifugal forces would be too great and push the plane off the runway.

But his design is banked, meaning it’s slightly raised on its outer edges to keep the plane on the runway as it gains speed.
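The banking Hesselink describes follows from standard circular-motion physics: the bank angle θ at which the runway surface alone supplies the required centripetal force satisfies tan θ = v² / (g·r). A back-of-the-envelope sketch using the 2.2-mile diameter from the article; the touchdown speed is my assumption, not a figure from the source:

```python
import math

# Rough sketch (not from the article): bank angle for a plane
# circling at speed v on a ring of radius r, tan(theta) = v^2 / (g * r).
g = 9.81                   # gravitational acceleration, m/s^2
r = 2.2 * 1609.34 / 2      # radius in metres (~1,770 m; diameter from text)
v = 70.0                   # assumed touchdown speed, m/s (~250 km/h)

bank_angle = math.degrees(math.atan(v ** 2 / (g * r)))
print(f"bank angle ~ {bank_angle:.1f} degrees")
```

Under these assumed numbers the angle comes out in the mid-teens of degrees; a real design would vary the bank continuously and account for much more than this single formula.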

For now, the Endless Runway remains a concept where the only testing has been within the safe confines of computer simulation. But Hesselink hopes to test the idea in real life on a racetrack with a drone.

Your thoughts?

Image: Netherlands Aerospace Centre

Relevant links:

http://www.zutalabs.com/

https://www.creativemechanisms.com/blog/additive-manufacturing-vs-subtractive-manufacturing

https://www.fastcodesign.com/90107235/why-airport-runways-should-actually-be-circular

Read More

Should You Bring Artificial Intelligence Into Your Business?

Artificial Intelligence (AI) holds great potential for most businesses since it can be used to automate many mental tasks that take less than one second of thought. Image recognition is a good example of such a task.

Such automation can be done either today or in the very near future, according to Andrew Ng, who heads global Artificial Intelligence strategy at the Chinese search company Baidu.

Ng draws an analogy between the rise of Artificial Intelligence and the introduction of electricity. Writing in Harvard Business Review in November 2016 he observed:

A hundred years ago electricity transformed countless industries; 20 years ago the internet did, too. Artificial intelligence is about to do the same.

To take advantage, companies need to understand what artificial intelligence can do and how it relates to their strategies. But how should you organize your leadership team to best prepare for this coming disruption?

A hundred years ago, electricity was really complicated. You had to choose between AC and DC power, different voltages, different levels of reliability, pricing, and so on.

And it was hard to figure out how to use electricity: Should you focus on building electric lights? Or replace your gas turbine with an electric motor?

Thus many companies hired a VP of Electricity to help them organize their efforts and make sure each function within the company was considering electricity for its own purposes or its products. As electricity matured, the role went away.

Recently, with the evolution of IT and the internet, we saw the rise of CIOs to help companies organize their information. As IT matures, it is increasingly becoming the CEO’s role to develop their companies’ internet strategy.

Indeed, many S&P 500 companies wish they had developed their internet strategy earlier. Those that did now have an advantage. Five years from now, we will be saying the same about AI strategy.

Ng recommends hiring a Chief AI Officer (CAIO) so that Artificial Intelligence gets applied across all divisions of your company. A CAIO should have the following skills:

Good technical understanding of AI and data infrastructure. In the AI era, data infrastructure — how you organize your company’s databases and make sure all the relevant data is stored securely and accessibly — is important.

Ability to work cross-functionally. AI itself is not a product or a business. Rather, it is a foundational technology that can help existing lines of business and create new products or lines of business.

Strong intrapreneurial skills. AI creates opportunities to build new products, from self-driving cars to speakers you can talk to, that just a few years ago would not have been economical.

A leader who can manage intrapreneural initiatives will increase your odds of successfully creating such innovations for your industry.

Ability to attract and retain AI talent. This talent is highly sought after. Among new college graduates, I see a clear difference in the salaries of students who specialized in AI.

A good Chief AI Officer needs to know how to retain talent, for instance by emphasizing interesting projects and offering team members the chance to continue to build their skill set.

Your thoughts?

Image courtesy of NicoEINino at FreeDigitalPhotos.net

Relevant links:

https://hbr.org/2016/11/hiring-your-first-chief-ai-officer

https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now

Read More

Exponential Technologies – What Are They?

Peter Diamandis is an American engineer, physician and entrepreneur who co-founded Singularity University, a Silicon Valley think tank that provides educational programs as well as running a business incubator.

The university focuses on scientific progress and the development of ‘exponential’ technologies such as artificial intelligence, robotics and virtual reality.

The incubator encourages application of these technologies in various fields such as data science, digital biology, medicine and self-driving vehicles.

In his primer on exponential technologies, Peter Diamandis writes (edited):

For a technology to be ‘exponential’, its power and/or speed doubles each year, and/or its cost drops by half.

They are technologies which are rapidly accelerating and shaping major industries and all aspects of our lives.

Diamandis constructed a framework for summarizing the characteristics of exponential technologies. These characteristics are interrelated.

He calls these characteristics the 6 D’s. Here is a summary (edited) and explanation as presented by Vanessa Bates Ramirez writing on SingularityHub.com on November 22 2016:

1. Digitized – it can be programmed

“Anything digitized enters the same exponential growth we see in computing.

Digital information is easy to access, share and distribute. It can be spread at the speed of the internet.

Once something can be represented in ones and zeros – from music to biotechnology – it becomes an information based technology and enters exponential growth.”

2. Deceptive – it is initially slow in developing

“When something starts being digitized, its initial period of growth is deceptive because exponential trends do not seem to grow very fast.

Doubling .01 only gets you .02, then .04, and so on. Exponential growth really takes off after it breaks the whole number barrier.

Then 2 quickly becomes 32, which becomes 32,000 before you know it.”
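The “deceptive” phase in the quote is easy to see numerically; a quick sketch of repeated doubling starting from 0.01 (the numbers are purely illustrative):

```python
# Repeated doubling starting from 0.01: the early steps look flat,
# then growth explodes once the whole-number barrier is broken.
value = 0.01
history = [value]
for _ in range(30):
    value *= 2
    history.append(value)

print(history[1:4])        # deceptively small: [0.02, 0.04, 0.08]
print(round(history[-1]))  # after 30 doublings: over 10 million
```

Thirty doublings take 0.01 past ten million, which is why exponential trends are routinely underestimated in their early years.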

As an example, artificial intelligence had its origins in research conducted during the Second World War (1939 to 1945) but did not demonstrate its true potential until more than 50 years later in 1997 when IBM’s supercomputer ‘Deep Blue’ defeated world-champion chess player Garry Kasparov.

3. Disruptive – it is more effective and cheaper than what it replaces

“The existing product for a market or service is disrupted by the new market the exponential technology creates because digital technologies outperform in effectiveness and cost.

Once you can stream music on your phone, why buy CDs?

If you can also snap, store and share photographs, why buy a camera and film?”

4. Dematerialized – take something that is physical and re-create it digitally.

“Separate physical products are removed from the equation.

Technologies that were once bulky or expensive – radio, camera, GPS, video, phones, maps – are all now in a smart phone that fits in your pocket.”

As an example, the Sony Walkman, a portable cassette tape player introduced in 1979, allowed people to carry their music with them. Now the same end is accomplished via the iPhone and digitized music.

5. Demonetized – becoming cheaper

“Money is increasingly removed from the equation as the technology becomes cheaper, often to the point of being free.

Software is less expensive to produce than hardware and copies are virtually free.

You can now download any number of apps on your phone to access terabytes of information and enjoy a multitude of services at costs approaching zero.”

6. Democratized – available to everyone, not just the wealthy

“Once something is digitized, more people have access to it. Powerful technologies are no longer only for governments, large organizations or the wealthy.”

If you can buy a cheap phone with an internet connection, you have the same communications capabilities and access to the same platforms as a billionaire.

I think the 6 characteristic D’s of exponential technologies can be summarized even further as:

1. A digitized form of a previous technology
2. Accelerating development of improvements following a slow start
3. More effective and cheaper than what it is displacing and therefore available to everyone

Your thoughts?

Image courtesy of akindo at FreeDigitalPhotos.net

Relevant links:

https://su.org/concepts/

https://singularityhub.com/wp-content/uploads/2016/11/6Ds-Infographic-v2-2.jpg

http://www.bbc.co.uk/timelines/zq376fr#zw376fr

Read More

Meet ‘Flippy’ – CaliBurger’s Robot Hamburger Cook

CaliBurger is a California-based hamburger restaurant chain similar to Five Guys, In-N-Out and Shake Shack. It positions itself as a tech company that also sells hamburgers.

While in the restaurant, customers can play games such as GemJump and Minecraft and see the results of interactive in-house gaming amongst its customers displayed on a huge video wall.

Currently, CaliBurger has restaurants in 13 countries including China, Saudi Arabia, Taiwan and Sweden.

Automation of some jobs is the next step for CaliBurger. Line cooks are the target.

Writing on singularityhub.com, Vanessa Bates Ramirez provides details (edited version):

CaliBurger has partnered with a company called Miso Robotics and developed ‘Flippy’, a robotic kitchen assistant, and recently installed one in their Pasadena, California location.

Flippy is more than just an assembly-line robot that requires an organized workspace with ingredients precisely positioned before it can cook hamburgers.

Flippy incorporates the latest machine learning and artificial intelligence software to locate and identify all things that are in its workspace and to learn from its experience through a constant feedback loop.

The bot consists of a cart on wheels with a single six-axis arm that provides a full range of motion, allowing it to perform multiple functions.

It has an assortment of tools such as spatulas, scrapers and tongs which it can change by itself, depending on the task.

Some of the bot’s key tasks include pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.

Sensors on the grill-facing side of the bot take in thermal and 3D data, and multiple cameras help Flippy ‘see’ its surroundings. The bot knows how many burgers it should be cooking at any given time through a system that digitally sends tickets back to the kitchen from the restaurant’s counter.

Nevertheless, a human is required to finish the burger. Flippy alerts human cooks when it’s time to put cheese on a grilling patty. A human is also needed to add sauce and toppings once the patty is cooked, as well as wrap the burgers that are ready to eat.

Two of the bot’s most appealing features for restaurateurs are its compactness and adaptability—it can be installed in front of or next to any standard grill or fryer, which means restaurants can start using Flippy without having to expand or reconfigure their kitchens.

Because this bot ‘machine learns’, it can also learn to prepare other foods on the menu.

According to the Bureau of Labor Statistics, there were 2.3 million cooks in the United States in 2014; line cooks are included in this figure.

Flippy takes care of jobs around the grill that are repetitive and dangerous due to the possibility of cuts or burns.

I believe many line cooks operating in a repetitive-task environment can and will be replaced by automation. Bots like Flippy are more reliable than humans, can work longer shifts, provide a uniform product and never call in sick. Nor are there any personnel issues.

The argument has been made that destruction of one job will lead to the creation of another job; in the case of robots like Flippy, new tech jobs will certainly be created to manufacture and maintain these devices.

These new jobs require higher levels of technical expertise, which line cooks cannot easily be re-trained to acquire.

The prospects for people losing jobs through automation are not good, particularly for those whose entire skill set has been replaced by an ‘intelligent’ bot.

Your thoughts?

Image courtesy of chiarito at FreeDigitalPhotos.net

Relevant links:

https://singularityhub.com/2017/03/08/new-burger-robot-will-take-command-of-the-grill-in-50-fast-food-restaurants/?utm_source=Singularity+Hub+Newsletter&utm_campaign=8aef49b2f5-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-8aef49b2f5-58188205

http://canadianrestaurantnews.com/canada/latest-news/caliburger-combines-burgers-with-interactive-gaming

https://www.bls.gov/ooh/food-preparation-and-serving/cooks.htm

http://www.geekwire.com/2015/inside-caliburger-new-in-n-out-like-burger-shop-lets-people-play-minecraft-against-each-other/

https://caliburger.com/

Read More

The Importance of Recess

The Centers for Disease Control and Prevention defines recess as “regularly scheduled periods within the elementary school day for unstructured physical activity and play.”

In elementary school, my academic day started at 9:00 AM and ended at either 3:30 or 4:00 PM, depending on whether we had misbehaved and had to stay until 4:00 as punishment.

There was a 15 minute recess in the morning as well as in the afternoon; lunch time was 90 minutes long from noon until 1:30 PM with almost all students walking or bicycling home for lunch. Most of us were back in the school yard by 1:00 PM, playing whatever games we wanted.

I liked school and was interested in all subjects; however, towards the end of each classroom study segment I looked forward to either recess or going home for lunch and then doing something involving physical activity.

It seems that these days elementary schools are allocating less and less of the day to unstructured free play and more to academic pursuits.

My personal experience suggests that this might not be the best way to run an elementary school to achieve optimal learning conditions.

Writing in theatlantic.com in December 2016, Alia Wong explains (edited version):

In Florida, a coalition of parents known as “the recess moms” has been fighting to pass legislation guaranteeing the state’s elementary-school students at least 20 minutes of daily free play. Similar legislation recently passed in New Jersey, only to be vetoed by the governor, who deemed it “stupid.”

When, you might ask, did recess become such a radical proposal? In a survey of school-district administrators, roughly a third said their districts had reduced outdoor play in the early 2000s.

Likely culprits include concerns about bullying and the No Child Left Behind Act, whose time-consuming requirements resulted in cuts to play.

The benefits of recess might seem obvious—time to run around helps kids stay fit. But a large body of research suggests that it also boosts cognition.

Many studies have found that regular exercise improves mental function and academic performance.

And an analysis of studies that focused specifically on recess found positive associations between physical activity and the ability to concentrate in class.

Preliminary results from an ongoing study in Texas suggest that elementary-school children who are given four 15-minute recesses a day are significantly more empathetic toward their peers than are kids who don’t get recess.

Perhaps most important, recess allows children to design their own games, to test their abilities, to role-play, and to mediate their own conflicts—activities that are key to developing social skills and navigating complicated situations.

I agree, especially with Alia Wong’s last comment.

As an elementary school pupil I remember playing pick-up soccer and baseball during recess and lunch time.

Without any adult supervision, we settled disputes amongst ourselves and renegotiated rules as required.

We were masters of both our physical space and our relationships with one another, if only for a short time.

Fast forward to today . . .

Some elementary schools have set up stationary bicycles with desks attached to the handle bars. Students may use them to do their work when they feel they are not able to sit still and concentrate on their academic studies. It seems that simultaneous physical activity helps them focus on their work.

The results, so far, appear promising. However, as with most innovations, only time will tell whether studying while pedaling a stationary bicycle consistently aids learning.

Or are stationary bicycles just a passing fad that students and teachers want to believe helps with cognitive functioning?

Your thoughts?

Image courtesy of dolgachov at FreeDigitalPhotos.net

Relevant links:

http://pediatrics.aappublications.org/content/131/1/183

https://www.theatlantic.com/magazine/archive/2016/12/why-kids-need-recess/505850/

http://globalnews.ca/news/3187541/saskatoon-teachers-use-stationary-bikes-to-help-students-concentrate/

Read More

The Appeal of The Physical in Our Digital Age

The digital age we live in has changed the way we listen to music, capture images, and read. As new digital technologies developed, old technologies were swept aside, and many believed it was only a matter of time before they ceased to be used, remembered only as museum curiosities.

This has not happened.

In fact, vinyl records and film photography are experiencing a renaissance, and paper books are still being sold and read.

Christian Jarrett in his article on “The psychology of stuff and things” in The Psychologist magazine explains:

More than mere tools, luxuries or junk, our possessions become extensions of the self. We use them to signal to ourselves, and others, who we want to be and where we want to belong. And long after we’re gone, they become our legacy. Some might even say our essence lives on in what once we made or owned.

I doubt if many people, upon inheriting old digital files, would view them as a legacy item by which to remember someone.

Digital images, words, and sounds, which have no physical manifestation and can be instantly uploaded or deleted, may not be considered “real”.

Perhaps people are now seeking “real” things in their physical world to complement their digital world . . . something they can touch, see and smell.

Think about vinyl records.

In 2006, only 900,000 new vinyl records were sold in the United States; by 2015, new record sales had increased to 12 million, an increase of more than 30% per year. And sales are not just to older people who used vinyl records in their youth and might now be buying for nostalgic reasons; young digital natives who never experienced vinyl records are also buying them.
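A quick sanity check of that growth-rate claim (my own arithmetic, not from the article) using the two sales figures quoted:

```python
# Verify the "more than 30% per year" claim from the quoted figures:
# 900,000 new vinyl records sold in 2006, 12 million in 2015.
units_2006 = 900_000
units_2015 = 12_000_000
years = 2015 - 2006  # 9 years

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (units_2015 / units_2006) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # a bit over 33% per year, consistent with the claim
```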

As David Sax explains in his new book “The Revenge of Analog”:

Records are large and heavy; require money, effort, and taste to create and buy and play; and cry out to be thumbed over and examined. Because consumers spend money to acquire them, they gain a genuine sense of ownership over the music, which translates into pride.

Film photography is also on the upswing across all age groups, producing a physical record (i.e., a negative) that can be printed on paper or scanned into a digital file . . . the point being that you have a physical manifestation of the image you captured, which, if stored properly, will be usable for a very long time . . . perhaps more than a hundred years under ideal conditions.

What happens to JPEG files as storage technology evolves and the ability to open these files is no longer available? Should you have any, how many of you can still retrieve data from your old 8” floppy disks? (Hint: you can still find these drives on eBay starting at about $250 . . . but what about the software that was used to write the files originally? Not so easy, is it?)

And finally, about reading and the paper book. Pew Research reports that two-thirds of Americans read a printed book in 2016 . . . about the same figure as in each of the preceding four years. Only about a quarter read an e-book in the same period.

I believe we are not ready to entirely abandon our physical media and totally embrace a digital world. While the digital world is here to stay, there will still be a market for outdated technologies.

Your thoughts?

Image courtesy of dan at FreeDigitalPhotos.net

Relevant links:

https://thepsychologist.bps.org.uk/volume-26/edition-8/psychology-stuff-and-things

https://www.amazon.ca/Revenge-Analog-Real-Things-Matter/dp/1610395719

http://www.pewinternet.org/2016/09/01/book-reading-2016/

Read More

Kodak Invented The Technology That Destroyed It

Many people continue to believe that Kodak sat by idly as the digital camera destroyed its film business. This was not the case.

Kodak was very active in the research and development of digital imaging technology.

Writing in the July 2016 issue of Harvard Business Review, Scott Anthony points out that:

“The first prototype of a digital camera was created in 1975 by Steve Sasson, an engineer working for … Kodak. The camera was as big as a toaster, took 20 seconds to take an image, had low quality, and required complicated connections to a television to view, but it clearly had massive disruptive potential.”

David Usborne writing on independent.co.uk observed that:

“A vice-president left [Kodak] in 1993 because even then he couldn't persuade it to manufacture and market a digital camera. ‘We developed the world's first consumer digital camera but we could not get approval to launch or sell it because of fear of the effects on the film market.’”

To a degree this was understandable since the profit on film was 70 cents on the dollar; such margins could not be achieved with digital cameras.

Then, in 1994, Apple launched the QuickTake, one of the first consumer digital cameras. Apple did not manufacture it . . . Kodak did!

Meanwhile, Kodak continued to design and manufacture high end digital cameras and other imaging equipment, not realizing the mass market potential for consumer digital cameras.

According to Wikipedia (edited):

In 1999 Kodak had a 27% market-leading share in digital camera sales.

In 2001 Kodak held the No. 2 spot in U.S. digital camera sales (behind Sony) but it lost $60 on every camera sold.

By 2010 it held 7% share, in seventh place behind Canon, Sony, Nikon and others.

Despite this rapid growth, Kodak failed to anticipate how quickly digital cameras would become low-margin commodities as more companies entered the market in the mid-2000s.

Kodak’s digital cameras were soon undercut by Asian competitors that could produce their offerings more cheaply.

Now an ever-smaller percentage of digital pictures is taken on dedicated digital cameras, which began to be displaced in the late 2000s by cameras on cellphones, smartphones and tablets.

So you see, Kodak was not blind to the digital revolution; it actually participated in it. Trying to protect its film business, however, prevented the company from moving more aggressively into the consumer digital camera arena.

Your thoughts?

Image courtesy of bpablo at FreeDigitalPhotos.net

Relevant links:

https://hbr.org/2016/07/kodaks-downfall-wasnt-about-technology

http://www.independent.co.uk/news/business/analysis-and-features/the-moment-it-all-went-wrong-for-kodak-6292212.html

https://en.wikipedia.org/wiki/Kodak#Shift_to_digital

http://sloanreview.mit.edu/article/the-real-lessons-from-kodaks-decline/

Read More

Smart Phone Addiction – Going “Cold Turkey”

Writing on theguardian.com on February 11 2016, Jenna Woginrich describes life after getting rid of her mobile communication device 18 months ago. Here is an edited excerpt:

The phone rings: it’s my friend checking to see if I can pick her up on the way to a dinner party. I ask her where she is and as she explains, I reach as far as I can across the countertop for a pen.

I scribble the address in my trusty notebook I keep in my back pocket. I tell her I’ll be at her place in about 20 minutes. Then I hang up. Literally.

I take the handset receiver away from my ear and hang it on the weight-triggered click switch that cuts off my landline’s dial tone.

I take my laptop, Google the address, add better directions to my notes and head outside and drive over. If I get lost on the way, I’ll need to ask someone for directions. If she changes her plans, she won’t be able to tell me or cancel at a moment’s notice. If I crash on the way, I won’t be calling 911.

I’m fine with all of this. As you guessed by now, I haven’t had a cellphone for more than 18 months.

I didn’t just cancel cellular service and keep the smartphone for Wi-Fi fun, nor did I downgrade to a flip phone to “simplify”; I opted out entirely. There is no mobile phone in my life, in any form, at all.

Arguably, there should be. I’m a freelance writer and graphic designer with many reasons to have a little computer in my holster, but I don’t miss it. There are a dozen ways to contact me between email and social media. When I check in, it’s on my terms.

“My phone” has become “the phone”. It’s no longer my personal assistant; it has reverted to being a piece of furniture – like “the fridge” or “the couch”, two other items you wouldn’t carry around with you.

I didn’t get rid of it for some hipster-inspired luddite ideal or because I couldn’t afford it. I cut myself off because my life is better without a cellphone.

I’m less distracted and less accessible, two things I didn’t realize were far more important than instantly knowing how many movies Kevin Kline’s been in since 2010 at a moment’s notice. I can’t be bothered unless I choose to be. It makes a woman feel rich.

When friends found out, I was told it was as insane a decision as leaving a rent-controlled apartment.

But I was tired of my world existing through a black screen and even more tired of being contacted whenever anyone (or any bot) felt like it.

I was constantly checking emails and social media, or playing games. When I found out I could download audiobooks, the earbuds never left my lobes. I was a hard user. I loved every second of it.

I even slept with my phone by my side. It was what I fell asleep watching, and it was the alarm that woke me up. It was never turned off.

It got so bad that I grew uncomfortable with any 30-second span of hands-free idleness. I felt obligated to reply to every Facebook comment, text, tweet, and game request.

As an author I wrote it all off as reader interaction, free publicity and important grassroots marketing. These were the justifications of a junkie; I was an addict at risk of losing myself completely.

I made the decision to break up with my device and I did it “cold turkey”.

I’ve been clean a year and a half now, and I’m doing fine. I get plenty of work, I don’t miss invitations, and I’m no longer scared of my own thoughts.

I got a landline and I got more sleep. I look people in the eye. I eat food instead of photographing it. My business, social life, and personal safety have not evaporated overnight either.

Turns out a basic internet connection and laptop is plenty of connectivity to keep friends informed, weekends fun and trains running on time. And while I might be missing out on being able to call 911 at any moment, it’s worth the sacrifice to me.

I’m glad to be back in the world again. It beats waiting for the notification alert telling me that I exist.

Your thoughts?

Image courtesy of Georgijevic at FreeDigitalPhotos.net

Relevant links:  

https://www.theguardian.com/technology/2016/feb/11/smartphone-technology-addiction-facebook-twitter

https://well.blogs.nytimes.com/2015/07/06/screen-addiction-is-taking-a-toll-on-children/?_r=0

Read More

Reasons for Sending Handwritten Notes and Letters

With the Internet and social media dominating communication, the art of the handwritten letter or note delivered by snail mail seems to have taken a back seat to instantaneous electronic communications.

Whenever I open my mailbox and I see an envelope that has been addressed by hand, I am more likely to open it first. Usually it contains a personal communication from a friend or relative.

Can this approach be taken for more effective business communications?

The answer is “yes”.

Writing on americanexpress.com, Carla Turchetti made the following points in support of the handwritten note or card:

Use handwritten notes to reach out to prospective clients and to say thank you to vendors and clients. Email is too easy to ignore. Phone calls can be invasive and are more challenging to schedule. Letters are hard to ignore and not invasive.

Taking the time to write something by hand makes the recipient feel special.

Handwritten notes can be more convincing and powerful than the actual message.

Handwritten notes remind us to slow down and take note . . . of our surroundings, our customers, our community and our clients.

On a personal level as well there are good reasons for sending handwritten notes and letters.

Writing on huffingtonpost.com on May 15 2015, Traci Bild provided several reasons.

“1. A Lifetime Keepsake: Personal handwritten notes grow rarer by the day. According to the U.S. Postal Service’s annual survey, the average home only received a personal letter once every seven weeks in 2010, down from once every two weeks in 1987. In a world where people seem to have everything, words on paper, sealed with a stamp, can be far more valuable than any material item purchased.

2. Your Heart on Paper: In a wired world — where emails, tweets and text messages are more accessible than handwritten notes — there is something magical about reading words written in longhand.

3. The Ultimate Surprise: Let’s be honest: How do you feel when someone handwrites you a note? Imagine the person you write walking to their mailbox, opening it and finding a letter inscribed to them from you. It will be the best part of their day!

4. A Feeling of Importance: What people want more than anything is to feel validated and to know they matter. Your handwritten letter will send a clear message: You are important and you do matter to me.

5. It’s Fun! Purchase beautiful stationery that reflects your personality, buy interesting stamps and try out a sealing wax stamp to secure the envelope. I have a butterfly and a heart and it’s like putting a cherry on top!

6. No Regrets: How many times have you missed the opportunity to say what needed to be said, only to find it was too late? Make a point of letting the people you care about, who have influenced and shaped your life, know how you feel.”

And one final point. You should keep the handwritten cards and letters you receive.

I have letters my parents wrote to each other during World War II when they were separated for over four years. Reading these letters today allows me to reconnect with them . . . they passed away over 20 years ago.

Your thoughts?

Image courtesy of Eerik at FreeDigitalPhotos.net

Relevant links:

https://www.americanexpress.com/us/small-business/openforum/articles/the-forgotten-power-of-handwritten-notes/

http://www.huffingtonpost.com/traci-bild/5-reasons-to-write-a-handwritten-letter-now-not-later_b_7284236.html

Read More

Office Lunch Indian Style – A Study in Supply Chain Management

I have a lingering visual memory of my visit to Bombay (now Mumbai) India.

Just after 11:00 am on a weekday morning, hundreds of bicycles and carts emerged from the commuter railway station carrying wooden trays filled with canisters and quickly disappeared into the downtown business district.

Each canister (also known as a tiffin or dabba) contained a hot meal prepared by an office worker’s wife or mother, at home that morning, to be delivered to his place of work. The logistics of this delivery system are simply mindboggling . . . and it operates with neither an electronic trail nor paper trail!

The delivery person is known as a dabbawalla – someone who delivers dabbas – loosely translated as “lunch box man”.

The dabbawallas are a cooperative network of more than 5,000, largely illiterate, rural workers who use the metropolitan train system to bring dabbas (the food canisters) from home to the office in time for lunch and then return the empty dabbas home at the end of the day through the same network.

Dabbawallas use nothing more than 3-4 symbols painted on the dabbas to create an unparalleled food supply chain that’s famous for its incredible punctuality and reliability.

They don’t use any technical devices to support their service. Bicycles, carts, and trains are used to transport and deliver the collected dabbas.

Writing on popupcity.net in 2010, Joop de Boer explained how (edited version) the system works:

The first dabbawalla picks up the dabba from a home and takes it to the nearest metropolitan commuter railway station.

The second dabbawalla sorts out the dabbas at the railway station according to destination and puts them in the luggage carriage.

The third one travels with the dabbas to the railway stations nearest to the destinations.

The fourth one picks up dabbas from the railway station and delivers them to each individual’s office.

The process is reversed in the evenings with each dabba completing a distance of 60-70 kilometers and changing hands eight times.

Customers pay $5 to $9 a month for this service, which also explains why such services are virtually unknown in Western cities . . . they would be too expensive.

The system is a cooperative, which means that all the workers collectively own the business, are paid equally and share equally in the profit.

Every work day the dabbawallas pick up and deliver 200,000 lunch boxes within only a couple of hours, in a traffic-congested city whose population is more than 20 million.

It has been estimated that the dabbawallas’ on-time delivery rate is 99.99998% . . . which exceeds Six Sigma standards! That is roughly one late or missed delivery for every 6 million deliveries!
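To see how remarkable that is, the figure can be restated in the defects-per-million terms Six Sigma practitioners use (a rough illustration based on the "one in 6 million" figure above, not a formal Six Sigma calculation):

```python
# Compare the dabbawallas' reported error rate with the conventional
# Six Sigma benchmark of 3.4 defects per million opportunities (DPMO).
deliveries_per_error = 6_000_000  # roughly one late/missed delivery in 6 million

dabbawalla_dpmo = 1_000_000 / deliveries_per_error  # about 0.17 defects per million
SIX_SIGMA_DPMO = 3.4

print(f"Dabbawalla error rate: {dabbawalla_dpmo:.2f} DPMO")
print(f"Better than Six Sigma: {dabbawalla_dpmo < SIX_SIGMA_DPMO}")  # True
```

In other words, the dabbawallas' error rate is roughly twenty times lower than the Six Sigma threshold.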

It has been said that the dabbawallas are the envy of FedEx.

The dabbawallas have started using internet technology to build their customer base. They are now carrying mobile phones. Their monthly delivery fees have increased and business is growing.

They have a long tradition in India, but how long will the dabbawallas last as an occupation?

There is a trend to eat out more often or to use take-away food vendors at lunch time. And women of younger generations are more likely to be working themselves, so they are not at home to make mid-day meals for their husbands.

But the strict dietary requirements of India's various religious groups make homemade meals a necessity for many workers.

As in all countries, dining out is expensive, and the trains will continue to be overcrowded making it difficult for workers to carry their lunch dabbas in the passenger cabin.

I think it will be some time before the dabbawalla disappears.

Your thoughts?

Photo Courtesy of Wikipedia Creative Commons

Relevant links.

http://popupcity.net/dabbawalla-hot-lunch-delivery-by-mumbais-fastest/

https://hbr.org/2012/11/mumbais-models-of-service-excellence

https://www.youtube.com/watch?v=fTkGDXRnR9I

https://www.pri.org/stories/2014-07-15/indian-meal-service-so-efficient-it-s-envy-fedex

https://www.ft.com/content/f3b3cbca-362c-11e5-b05b-b01debd57852

https://phys.org/news/2006-06-bombay-dabbawalas-high-tech.html

Read More

Will Robots Take Your Job? Not Entirely . . . For The Time Being!

Much has been written recently about the impact of automation, robotics and artificial intelligence on the workplace and various occupations.

Robots can perform repetitive tasks more quickly and more accurately than humans and remain with a task for long periods of time. And they don’t call in sick!

Although you may not realize it, automation has been a reality since our parents’ and grandparents’ time. Just think of labour-saving devices such as circular saws and power drills, which came into use with the introduction of electricity in our homes. The electric circular saw allows a person to make 10 cuts in the time it would take to make one with a manual saw . . . the carpenter simply became more efficient, yet still needed all his other skills to complete the project he was working on.

Automation is not an all-or-nothing proposition.

In a study published in January 2017 by McKinsey & Company titled “Harnessing automation for a future that works”, several points were made about the potential impact of automation on you and your job.

The study’s key point however, is that:

“Automation . . . won’t arrive overnight [and that its] full potential requires people and technology to work hand in hand.”

Here are some other observations:

”The right level of detail at which to analyze the potential impact of automation is that of individual activities rather than entire occupations.

Every occupation includes multiple types of activity, each of which has different requirements for automation.

Given currently demonstrated technologies, very few occupations—less than 5 percent—are candidates for full automation. However, almost every occupation has partial automation potential, as a proportion of its activities could be automated.

We estimate that about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies. That amounts to almost $16 trillion in wages.

The activities most susceptible to automation are physical ones in highly structured and predictable environments, as well as data collection and processing.

In the United States, these activities make up 51 percent of activities in the economy, accounting for almost $2.7 trillion in wages. They are most prevalent in manufacturing, accommodation and food service, and retail trade.

And it’s not just low-skill, low-wage work that could be automated; middle-skill and high-paying, high-skill occupations, too, have a degree of automation potential.

As processes are transformed by the automation of individual activities, people will perform activities that complement the work that machines do, and vice versa.”

It certainly appears that automation will permit increased production with the same or fewer people; however, we also expect that new types of jobs requiring new skills will emerge.  

What new types of jobs will materialize as automation progresses?

And if automated jobs are not replaced with new types of jobs or occupations, will a Universal Basic Income become a reality?

Your thoughts?

Image courtesy of fatihhoca at FreeDigitalPhotos.net

Relevant links:

http://www.mckinsey.com/global-themes/digital-disruption/harnessing-automation-for-a-future-that-works

Read More

The Death of The Department Store and The Shopping Mall – Really?

In the first week of January 2017, Sears announced the closing of 150 stores by spring. According to Hayley Peterson writing on businessinsider.com on January 4 2017: “That means [Sears] will have fewer than 1,500 stores left . . . that's down nearly 60% from 2011, when Sears had more than 3,500 stores.”

In the same week Macy’s announced the closing of 68 stores. These closures are significant since many of these large stores are shopping mall anchor tenants.

Forbes.com reports that:

“[In the 20 years] since 1995, the number of shopping centers in the U.S. has grown by more than 23% . . . while the population has grown by less than 14%. Currently [in 2015] there is close to 25 square feet of retail space per capita (roughly [double that], if small shopping centers and independent retailers are added). In contrast, Europe has about 2.5 square feet per capita.”

Clearly, retail space in the US has been overdeveloped and will shrink further with the onslaught of online shopping.

However, I think there is more to the story than the convenience of online shopping and the emergence of discount merchants and big-box stores as reasons for the decline of traditional retailers and malls.

The rise of social media has also had an impact. Whereas people, particularly teens, used to go to malls to meet their friends, they can now connect with others instantaneously, via Facebook for example, without having to travel to a physical location for this social interaction.

With 1.18 billion daily Facebook users (about one out of every six people on the face of the Earth) the number of people you can connect with has increased exponentially.

So . . . why bother going to the mall if you can connect with all your friends without having to physically transport yourself . . . particularly if you can do most of your shopping online?

What’s the poor mall owner to do?

The answer, according to Kate Taylor writing on businessinsider.com on January 23 2016 is this:

“To compete with online shopping, malls need to match e-commerce in convenience and create experiential reasons to visit the mall that you cannot find online.”

Some malls are already re-inventing themselves.

“Touch-screen platforms that provide customer information are becoming an increasingly common and interactive feature.

For example, YunTouch uses face recognition technology to collect and analyze customers’ past purchases when they stop by a digital display terminal.

In the US, Ralph Lauren is testing interactive mirrors in fitting rooms that allow shoppers to change the lighting, request different sizes, browse through other items, or interact with a sales associate.

The other major shift in the mall of the future is in customers’ own hands — their smartphones.

With the chance to connect, some malls are now texting shoppers. Shanghai’s Cloud Nine and Shenzhen’s SEG Plaza are utilizing social-messaging app WeChat for their news and loyalty programs, connecting with customers even when they aren’t at the mall.

In the US . . . Macerich [who owns and operates regional shopping malls in the US has some] locations [that] allow shoppers to text questions to the mall’s information desk to get speedy and convenient answers.”

In closing . . .

“New technology helped contribute to the decline of malls in America, as shoppers turned to e-commerce. However, today the tides are turning. Now, with new experiential and smartphone tech, retailers have the chance to use technology to reverse the downfall of the mall.”

Your thoughts?

Image courtesy of Versionphotography at  FreeDigitalPhotos.net

Relevant links:

http://www.businessinsider.com/list-of-sears-and-kmart-stores-closing-2017-1

http://money.cnn.com/2017/01/04/news/companies/macys-job-cuts-stock/

http://www.forbes.com/sites/robinlewis/2015/03/17/retail-in-2015-a-reality-check/#3d942b0873b0

http://www.businessinsider.com/what-the-mall-of-the-future-looks-like-2016-1

http://fortune.com/simon-mall-landlord-real-estate/

Read More

Does Practice Make Perfect? Sometimes!

In her article titled “Debunking the Myth of the 10,000-Hours Rule: What It Actually Takes to Reach Genius-Level Excellence”, Maria Popova discredits the pop-psychology rule that “10,000 hours” of practice is all that is required to perform exceptionally well in any field of endeavour … be it music, sports, acting or becoming an excellent chef.

The “10,000 hour” rule is only half true.

What is required, over and above repetition, is the means to monitor and adjust your execution so that you move towards your goal of total excellence.

The main predictor of excellence is what can be described as ‘deliberate practice’ … a mindset requiring your concentration in addition to your time.

This is accomplished through some sort of feedback or system for correcting yourself … a coach, skilled expert or mentor can be used. In the case of musicians, this can also be accomplished by listening critically to recordings of their own performances and comparing them to those of a master performer.

People who become expert in their field do so by concentrating in each practice session on improving a single aspect of their performance that an expert has identified as needing further development.

The feedback loop is important in spotting errors and correcting them as they occur. Practice without such feedback will not contribute to success in any field.

Paying full attention to what you are doing, and correcting faults along the way, is essential for getting to the point where you feel that executing some specific aspect of your performance is no longer work and that ‘it comes naturally’.

But here is where even more self-discipline is required …

You must switch from continuing on autopilot repeating what you already know or are good at, to monitoring and correcting another aspect of your performance.

If at any point you stop your disciplined practice, and simply continue reinforcing what you have already mastered, your skill level will plateau and you will cease making progress.

And as a final note, world-class performers in any field often limit their disciplined, focussed practice to about four hours a day. This appears to be the longest period over which they can maintain the concentration needed to improve in a specific area.

Sounds like a plan … doesn’t it?

For the original article click:

https://www.brainpickings.org/2014/01/22/daniel-goleman-focus-10000-hours-myth/

 

Read More

What Happens to Merchandise Returned After Christmas?

Writing in Quartz (qz.com) on December 29 2016, Marc Bain, Sarah Slobin and Michael Treb explain:

The season of giving is over. It’s followed by the season of returns.

During the first week of January, US consumers will send nearly $30 billion in products back to where they came from, returning 9% of all e-commerce purchases.

[For UPS] that adds up to 5.8 million packages in transit [in the first week in January], peaking January 5th, a day that UPS has nicknamed National Returns Day.

But here’s something you probably didn’t know: Many of those returns aren’t going to make it back into store inventory and onto shelves.

Instead, they will rack up a giant carbon footprint as they wind their way through a network of middlemen and resellers and, at each step, a share of those goods will be discarded in landfills.

“It’s a huge environmental impact,” says Tobin Moore, cofounder and CEO of Optoro, a technology company focused on improving the “reverse logistics” of consumer returns. “It’s over 4 billion pounds of [landfill] waste generated a year in the US from reverse logistics.”

Optoro, which UPS recently bought a stake in . . . estimates that just 50% of returns go back into store inventory. Because of their condition, due to use, damage, or even just opened boxes, the rest have a different fate. Stores may be able to return some to their manufacturer, or resell them through their own outlets.

But often they sell them at a fraction of their original cost to discounters or massive, centralized liquidators, who buy truckloads of inventory that they sort and resell to other middlemen before they land at secondhand shops.

At each step, if it’s more economical to throw an item away rather than ship it, off to the dumpster it goes. Big retailers toss out huge quantities of inventory each year.

Moore says that many people mistakenly think their return will simply be resold, so they use free shipping/easy returns to effectively rent products, like TVs for Superbowl parties or power drills for moving into a new home.

Unfortunately, not everything you send back will have a second life. That’s something to think about next time you hit send from that virtual shopping cart.
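A back-of-the-envelope implication of the figures quoted above (my own arithmetic, purely illustrative): if $30 billion in returns represents 9% of e-commerce purchases, the underlying purchase volume is enormous.

```python
# Infer total e-commerce purchase volume from the quoted returns figures:
# nearly $30 billion returned, representing 9% of all e-commerce purchases.
returns_billions = 30
return_rate = 0.09

implied_purchases = returns_billions / return_rate
print(f"Implied e-commerce purchases: ${implied_purchases:.0f} billion")  # about $333 billion
```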

Your thoughts?

Relevant links:

http://qz.com/873556/returned-gifts-are-creating-an-environmental-disaster/

https://www.pressroom.ups.com/pressroom/ContentDetailsViewer.page?ConceptType=PressReleases&id=1482804623017-283

http://www.wsj.com/articles/ups-takes-a-stake-in-retail-returns-specialist-optoro-1482278400

https://www.wired.com/2015/02/high-end-dumpster-diving-matt-malone/


Reports of The Death of Film Photography Are Greatly Exaggerated!

Digital cameras have not killed film . . . well, not entirely. Some people still prefer film for image capture . . . and for good reasons!

In an article in the Los Angeles Times on August 4 2014, Martin Scorsese, director of films such as “New York, New York”, “Raging Bull” and “Gangs of New York”, was quoted as saying:

“Everything we do in [digital high definition when making a movie] is an effort to recreate the look of film. Film, even now, offers a richer visual palette than [high definition]. We have to remember that film is still the best and only time-proven way to preserve movies.”

This is why he and other directors continue to use traditional film when making a movie.

Before you throw out your old 35 millimeter film camera, consider three things.

First, what happens to the pictures you take? 

How many of you have your parents’ albums containing pictures shot on film? I do . . . and I think you would agree there are some good memories in those photo albums.

Today, given the proliferation of cameras on our portable devices, we are taking more photos than ever . . . but how many of these images will still be available for viewing in 30 years? Most, I believe, will have been erased, forgotten, or corrupted. Film negatives and prints are tangible and durable, whereas digital images can easily be lost or destroyed.

Second, shooting with film encourages you to be patient.

Writing on fstoppers.com, David Geffin explains:

“[I]t’s far more worthwhile to wait, watch, direct a little and have a clear vision in your head AHEAD of what you shoot, rather than shooting and looking at images, trying to work out what you were trying to say. Shooting film is a cure for the over-shoot-because-we-can digital sickness.”

And third, a completely manual film camera forces you to understand what each part does and what you must do to get the picture you want.

With today’s digital cameras it is all too easy to put the camera into a pre-set or automatic mode and start shooting, letting the camera make all the decisions. You get the right exposure, but not necessarily the image you want.

And with a manual film camera you will have to make decisions about shutter speed and aperture (or how ‘wide’ the lens opens) to get both the correct exposure and the depth of field you want for your picture.

All the learning from taking pictures with a manual film camera can be transferred to your digital camera, allowing you to take exactly the picture you want, instead of leaving it up to the camera.
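The exposure decisions described above follow simple reciprocity arithmetic: opening the aperture one full stop doubles the light, so halving the shutter time restores the same overall exposure. Here is a minimal Python sketch using the standard exposure-value formula at ISO 100 (the particular f-numbers and shutter speeds are just illustrative):

```python
import math

def exposure_value(aperture: float, shutter_seconds: float) -> float:
    """Exposure value (EV) at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(aperture ** 2 / shutter_seconds)

# Opening the aperture one stop (f/8 -> f/5.6) doubles the light,
# so halving the shutter time (1/125 s -> 1/250 s) keeps the
# exposure essentially the same. (Nominal f-numbers are rounded,
# so the two EVs agree only approximately.)
ev_a = exposure_value(8.0, 1 / 125)
ev_b = exposure_value(5.6, 1 / 250)
print(round(ev_a, 1), round(ev_b, 1))
```

Both settings land at about EV 13, which is why a photographer can trade a wider aperture (shallower depth of field) against a faster shutter without changing the exposure.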

Your thoughts?

Relevant articles:

http://www.latimes.com/entertainment/envelope/cotown/la-et-ct-martin-scorsese-voices-support-for-kodaks-continued-film-production-20140804-story.html

https://fstoppers.com/education/why-ive-gone-back-shooting-filmand-why-you-should-too-30630

http://www.latimes.com/entertainment/envelope/cotown/la-et-ct-kodak-hollywood-studios-20140731-story.html

 


Artificial Intelligence & The Destruction of Art And Music As We Once Knew It

Famous paintings and classical compositions have brought joy to countless millions of people over the ages. Rembrandt and Johann Sebastian Bach are only two examples of past masters.

Imagine that modern technology, driven by artificial intelligence, could produce new works in the style of these same masters . . . works so convincing that experts could not tell them apart from the originals.

Well, that day has arrived.

Here are some recent developments you should be aware of.

Writing about Rembrandt, in theverge.com on April 5 2016, Alessandra Potenza said:

“A new Rembrandt painting was unveiled today in Amsterdam. But the portrait wasn’t exactly made by the 17th century Dutch master; it was created with 3D printers by a team of data analysts, developers, and art historians.

The painting, called “The Next Rembrandt”, was developed by the Amsterdam-based advertising agency J. Walter Thompson for its client ING Bank, and took 18 months to create. To reproduce Rembrandt’s painting style and brushstrokes, a unique software and facial recognition algorithm were used to analyze digital representations of all of his 346 known paintings. The data was then fed to a 3D printer, which released 13 layers of paint-based UV ink onto a canvas to recreate the painting texture similar to a real Rembrandt. The final artwork, which was realized also with help from Microsoft, is made of more than 148 million pixels.”

Even experts find it very difficult to identify this computer-generated “painting” as anything other than a genuine Rembrandt.

And as for the famous composer Bach, consider the following, from technologyreview.com’s Emerging Technology coverage on December 14 2016:

“Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in Paris . . . have developed a neural network that has learned to produce choral cantatas in the style of Bach. They call their machine DeepBach.

[They said:] After being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach . . . about half the time, these compositions fool human experts into thinking they were actually written by Bach.”
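DeepBach itself is a deep neural network trained on hundreds of Bach chorales, and Sony has not published a simple recipe for it. But the core idea — learn statistical patterns from a corpus, then sample new material from those patterns — can be illustrated with a toy first-order Markov chain over note names (the melody fragment below is made up for the example):

```python
import random
from collections import defaultdict

# Toy "training data": a made-up melody fragment, as note names.
# (A real system like DeepBach learns from hundreds of full chorales.)
melody = ["C", "D", "E", "F", "E", "D", "C", "E", "G", "F", "E", "D", "C"]

# Learn transition counts: which note tends to follow which.
transitions = defaultdict(list)
for a, b in zip(melody, melody[1:]):
    transitions[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> list:
    """Generate a new melody by sampling the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

print(generate("C", 8))
```

The generated line stays "in the style" of the training melody because every step follows a transition that actually occurred in it — the same principle, at vastly greater scale and depth, behind convincing machine-made chorales.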

Here are three questions we have to ask ourselves about the future of art and music:

  1. When new works of art and music can be produced in the style of past masters, will the value of their original works be diminished?
  2. Will artificial intelligence allow us to improve or enhance a composer’s original works to be even more pleasing to the ear?
  3. Will new forms of music, yet unknown to us, evolve through advances in artificial intelligence?

Your thoughts?

Relevant articles:

http://www.theverge.com/2016/4/5/11371566/the-next-rembrandt-3d-printed-painting-ing-microsoft

https://www.technologyreview.com/s/603137/deep-learning-machine-listens-to-bach-then-writes-its-own-music-in-the-same-style/


Job Destruction and Universal Basic Income (UBI)

At the beginning of December 2016 Amazon announced a new technology it has been testing at its prototype Amazon Go store in Seattle . . . a small retail store offering ready-to-go food items and basic groceries. There is no check-out.

Upon entry to the store you scan an app on your phone, just like you would scan a digital boarding pass when checking in for a flight. Then take whatever you need from the shelves and simply walk out.

While you are in the store, on-shelf sensing and computer vision technology tracks your every move and records items you pick up and put in your bag.

And, if you change your mind just put the item back on the shelf. The system dutifully records that you returned it.

On your way out, the system double checks the contents in your bag. Your items are then charged to your Amazon account.
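Amazon has not published how Amazon Go works internally, but the bookkeeping the article describes — pick-ups add to a virtual cart, put-backs remove, and the account is charged on exit — can be sketched in a few lines of Python (the item names and prices are made up):

```python
from collections import Counter

class VirtualCart:
    """Minimal sketch of checkout-free cart bookkeeping.
    (Amazon Go's real implementation is unpublished; this only
    models the behaviour described in the article.)"""

    def __init__(self, prices: dict):
        self.prices = prices
        self.items = Counter()

    def pick_up(self, item: str):
        # Shelf sensor reports an item was taken.
        self.items[item] += 1

    def put_back(self, item: str):
        # Shelf sensor reports an item was returned.
        if self.items[item] > 0:
            self.items[item] -= 1

    def charge_on_exit(self) -> float:
        # Bill the shopper's account at the door.
        return sum(self.prices[i] * n for i, n in self.items.items())

cart = VirtualCart({"sandwich": 5.00, "juice": 3.00})
cart.pick_up("sandwich")
cart.pick_up("juice")
cart.put_back("juice")           # changed your mind
print(cart.charge_on_exit())     # 5.0
```

The hard part in the real store is, of course, the computer vision that turns shelf activity into reliable `pick_up`/`put_back` events; once you have those events, the billing itself is straightforward.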

Amazon Go stores will start opening to the U.S. public in early 2017.

According to the Bureau of Labor Statistics, about 3.5 million people in the US work as cashiers. Are these jobs going to be lost permanently once Amazon’s new technology is widely adopted in the retail sector? Only time will tell.

Automation of many processes and jobs is a given in our society. Permanent job losses, without enough new jobs being created to replace them, are a very real possibility. So how will people earn the money they need to survive?

In his article on theguardian.com on December 9 2016, Tim Dunlop makes the following observations:

This sort of economy is also a recipe for massive inequality and insecurity. Platforms like Uber or Amazon Go, because they need so few workers, tend to funnel the wealth they generate to owners and investors rather than distribute it broadly via wages.

The role of government therefore becomes one of equalisation, of finding ways to see that the wealth generated in the new economy doesn’t simply flow to a tiny number of people at the top of the new corporations.

The most efficient way for governments to do this is by the mechanism of a universal basic income, a guaranteed wage for everyone, that not only provides a financial floor below which no one can fall, but allows us to redefine the sort of work we do and find meaningful.

That is to say, by breaking the link between survival and work, [Universal Basic Income] allows us all to not only benefit from the technology, but to reinvent what we even mean by the concept of work.

What are your thoughts about a Universal Basic Income?

Relevant links:

https://www.theguardian.com/sustainable-business/2016/dec/09/amazon-go-means-more-than-just-job-losses-it-will-restructure-the-economy
