Dunvegan Thought Spot

In the research The Dunvegan Group conducts to support our CCR (Customer Care & Retention) programs, we discover articles, blog posts and videos which, although not directly related to our work, are thought-provoking or concern matters you may want to think about. ‘Thought Spot’ covers a broad range of subjects.

The posts in ‘Thought Spot’ are selected by Olev Wain, Ph.D., VP of The Dunvegan Group. 

We welcome your feedback!


DNA Hacking – It’s Here!

Andy Greenberg made the following observations in his August 10, 2017 article on wired.com titled “Biohackers Encoded Malware In A Strand of DNA” (excerpt):

When biologists synthesize DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease.

But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans nor animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer.

While that attack is far from practical for any real spy or criminal, it's one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems.

And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment.

“That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists.

But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic.

Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing.

Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest.

"There are a lot of interesting—or threatening may be a better word—applications of this coming in the future," says Peter Ney, a researcher on the project.

Your thoughts?

Image courtesy of monsitj at FreeDigitalPhotos.net

Relevant link:

https://www.wired.com/story/malware-dna-hack/

Millennials and Entrepreneurs – A Common Thread

In a previous post, I wrote that Millennials had more in common with previous generations than not.

A couple of weeks later, I ran across an article by Sue Hawkes on eosworldwide.com, dated August 7, 2017, titled “Top 5 Traits Millennials Share With Entrepreneurs”, in which she outlined the common elements (edited version):

When I stopped to think about the common millennial characteristics we hear about so often, I realized how many of those same traits are also prevalent among entrepreneurs.

How we outwardly demonstrate these traits may look different, but at the core our values are shared. I believe this is an opportunity for tremendous results if managed from a place of shared values and effective communication.

Too often, we get bogged down by the way we’ve categorized others – in this case an entire generation.

As a result, the differences become all we see.

Let's begin instead from the common ground we share, while still acknowledging and appreciating the differences. Only then will we begin to gracefully communicate through the tough stuff and fully realize the value we all bring to the table.

Common Characteristics

These are the top five attributes shared by millennials and entrepreneurs. Imagine beginning your exploration from one of these vantage points:

1. They desire to change the world

Both millennials and entrepreneurs are driven by a higher purpose. They want to change the world and are concerned about issues facing our communities and the planet as a whole.

Because they have been globally connected their entire lives, millennials are aware of the challenges that need solutions.

Like entrepreneurs, they are civic-oriented. They want to make a big dent in the universe and believe they have the ability to do so.

2. They want to design life on their terms

Millennials often get labeled as entitled for desiring to set their own hours, to have flexibility to work from home and to be judged on results, not on the time spent doing their work.

They do not believe in a clear distinction between work and home life, choosing instead to integrate the two. They bring their lives to work and their work back home to their lives. 

Entrepreneurs also function this way, wanting the freedom to make their own rules, set their own schedule, and create a career that fits the way they want to live their life.

What entrepreneurs know that millennials may not yet realize is that this lifestyle is not easy. People who design their own life will often work longer and harder than people who choose a more traditional 8:00 am - 4:00 pm corporate career. Both groups do so willingly because they desire freedom – or the illusion of it!

3. They value relationships

The best business is done with those we know well, enjoy spending time with, like and trust. Millennials and entrepreneurs realize this, and often do business with friends or become friends with their clients.

Again, there is no separation between “work colleagues” and “friends.” Because the lines are blurred, doing business is easier.

They also value the importance of relationships and enjoy investing in them. Most millennials and entrepreneurs place relationships above money or being right. Millennials are team-oriented and build relationships with colleagues; entrepreneurs are expert networkers and connectors of people and opportunities.

4. They don’t accept the status quo

Neither millennials nor entrepreneurs enjoy following rules just for the sake of following rules, especially if they themselves didn’t make the rules.

They will ask why things are done the way they are and how they could be improved.

They often challenge the existing structure if it doesn’t make sense to them or move to innovate a new, better solution.

Neither are satisfied with the status quo, and both dislike rules if they don’t see the meaning behind them.

Both groups are progressive thinkers, work to make things better, and see opportunities where others see obstacles.

5. They have an insatiable hunger to learn

Millennials are the most educated generation in history, and entrepreneurs are life-long learners.

Both groups are eager to learn new things and master the skills that will help them improve and succeed.

Appreciation for non-traditional learning is also common in both groups, whether it be through travel, self-teaching methods or apprenticeships.

By continuously expanding their knowledge and skill set, millennials and entrepreneurs use new learning to innovate, create new opportunities, and grow.

Find Common Ground

Millennials and entrepreneurs share many traits.

What’s different, however, is the communication around them and how they are expressed in action.

To work better together and capitalize on this common ground, it’s important that we focus on our shared values, moving beyond generalizations and negative connotations of any group of people while remaining open and curious. We have more to gain by working together than we do by gathering frustration with how different we are.

How can we move from seeing the barriers between us to a place of common ground and opportunity?

Begin with conversations; courageous, open-minded, open-ended conversations that expand what each person brings and maximizes that in concert with the others present. 

Your thoughts?

Image courtesy of StockPhotoAstur at FreeDigitalPhotos.net

Relevant link:

https://blog.eosworldwide.com/blog/millennials-entrepreneurs-shared-values?utm_campaign=EOS%20Blogs&utm_source=hs_email&utm_medium=email&utm_content=55054640&_hsenc=p2ANqtz-9irzxFkBIoif5PQVr8oMXhZ-KbFKiBv7N4SsDcfEpiPR4rbl6zJAgWdDLS0ZNbTtvO44FcJqT4CI_-PtMsSoX_c9jixOKLDse5Tk417xHkYnpmzsE&_hsmi=55054640

Are Millennials Getting A ‘Bum Rap’? Consider These Facts!

In an interview in December 2016, Simon Sinek said the following about millennials:

Apparently, millennials as a group of people, which are those born approximately 1984 and after, are tough to manage. They are accused of being entitled and narcissistic, self-interested, unfocused and lazy - but entitled is the big one.

We all know some millennials who fit this profile, but is it true that most of them are like this? And what about previous generations – weren’t there some people who also had this profile?

Writing on bbc.com on 16 July 2017, in an article titled “Why the millennial stereotype is wrong”, Jessica Holland brought some interesting data to the table:

Type “millennials are” into a Google search bar, and you’ll find that “lazy” comes up as one of the top three autocompletes.

The common perception is that members of the generation born between the early 1980s and late 1990s are easily bored, crave instant gratification and would rather hop from gig to gig than stay with one company throughout their working lives. Not exactly dream employees, in other words.

But comprehensive studies in both the US and UK this year have shown the opposite is the case. It turns out, millennials are just as committed as their elders were at the same age, if not more so. What’s more, they’re not being rewarded for that loyalty.

British think tank the Resolution Foundation reported in February that only one in 25 UK millennials was switching jobs each year during their mid-twenties.

Members of the preceding generation, known as Generation X, were found to be twice as likely to keep switching employers at the same age – a good thing for them, financially speaking. Job-hopping tends to come with a pay rise of about 15% with each move, as well as the opportunity for workers to learn new skills and determine which kinds of employers are a good fit for them.

Meanwhile, pay rises for those who stay with one company for the long term have dwindled to almost nothing, according to the Resolution Foundation report.

The trend is evident, not just in the UK, but elsewhere.

In April, the Pew Research Center, a non-partisan “fact tank” based in Washington, DC, published similar findings, drawn from US Department of Labor data. The report found that American workers aged 18 to 35 were just as likely to stick with their employers as their older counterparts in Generation X were when they were young adults. And among those with college degrees, millennials were found to have longer track records with their employers than Generation X workers did when they were the same age.

“The economic evidence is pretty clear,” says Laura Gardiner, senior policy analyst at the Resolution Foundation and one of the authors of the report on UK millennials’ decreasing job mobility. “Young people have always changed jobs more than older people, but it’s definitely the case that the rate of mobility has fallen – for young people particularly quickly, although it’s fallen for everyone.”

The fact that young people are job-hopping less, she adds, is “a big determinant of why, for the first time in living memory, young people are earning no more now than previous generations were at the same age 15 years before.”

Changing times

Neither report gives concrete answers about why job mobility has decreased among young people. Richard Fry, a senior researcher at Pew, wrote in his summary of the data on Pew’s Fact Tank blog, that it may be “due to a dearth of opportunities to get a better job with a different employer.”

Gardiner, meanwhile, points out that young people may be less willing to take risks having come of age during the financial crisis. There’s also the rise of zero-hours contracts and agency work, and the fact that a shift is happening in Britain “to a service economy from a manufacturing economy.” All this, she says, “may have reduced people’s confidence or bargaining power.”

According to researchers at Deloitte, which publishes an annual survey of millennial attitudes, recent political and social instability in the developed world has made young people’s desire for security even more pronounced in just the last 12 months.

The 2017 survey, for which 8,000 millennials were interviewed worldwide, shows that millennials in developed countries are less willing now to leave their jobs within two years, and more eager to stay for five or more years, than they were a year ago. “Our data suggests that these uncertain times might be driving a desire among millennials for greater stability,” the report reads.

Against this backdrop – fewer long-term jobs that come with regular pay rises, more anxiety due to experience of the global financial crash, and more worries about the future – millennials are also hitting the age at which they’re making plans to buy houses, get married and have kids.

In 2015, millennial women accounted for just over eight in 10 US births, according to Gretchen Livingston at the Pew Research Center, so it’s unsurprising that this group is focusing on their financial stability.

Desires and stereotypes

Stereotypes about millennials suggest they’re not interested in old-fashioned markers of success.  But when it comes to the fundamental desire for these basic anchors – a home, retirement savings, a decent career, a family – “there’s strikingly little difference between the generations,” Gardiner says.

Jennifer Deal, a senior research scientist at the Center for Creative Leadership in San Diego, California and author of What Millennials Want from Work agrees. “I don't see different values among the generations,” she says. “They may have different ways of expressing their values, but what they want in life and work is pretty similar.”

With house prices rising, and university education getting more expensive in many countries, these goals are out of reach for many millennials. That’s another possible contributing factor for this generation’s desire to stay rooted with one employer.

Seven in 10 millennials living in mature economies, according to the 2017 Deloitte Millennial Survey, would prefer to be in full-time employment, rather than freelance work, and the reasons most often given for this preference are “job security” and a “fixed income.”

“Perspectives among young people have changed since the 1970s,” Deal says; “the world has changed.” But the lament that young people lack commitment is a “typical stereotype of young people. We saw the same stereotyping of Gen Xers when they were new to the workforce.”

If anything is holding millennials back, the evidence suggests, it may be the unprecedented environment in which they find themselves, and not their attitudes to work.

Your thoughts?

Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net

Relevant links:

https://www.youtube.com/watch?v=hER0Qp6QJNU

http://www.bbc.com/capital/story/20170713-why-the-millennial-stereotype-is-wrong

Learning To Love Your Cobot!

Wikipedia defines “A cobot or co-robot (from collaborative robot) [as] a robot intended to physically interact with humans in a shared workspace. This is in contrast with other robots, designed to operate autonomously or with limited guidance, which is what most industrial robots were up until the decade of the 2010s.”

Writing on Investment U Plus on Wednesday, July 19, 2017, Matthew Carr, Emerging Trends Strategist for The Oxford Club, made the following observations (edited version):

While U.S. manufacturing output continues to hit record highs, the total number of manufacturing jobs in the U.S. has declined by 30% in the past couple of decades.

People are being replaced by machines. And it's going to only pick up speed.

In North America last year, businesses ordered 35,000 robots. That was a 10% increase over 2015.

Of those robots purchased, the automotive sector accounted for 20,000. Not that long ago, 80% of the work that went into manufacturing a car was done by humans. Today, 80% of that work is done by robots.

What we've seen over the last couple of years is a swift uptake in job automation. We hit a tipping point, and the speed will only increase.

In 2015, the money spent globally on robots was $71 billion. By 2019, spending is expected to total $135.4 billion. And we'll see a compound annual growth rate of 17%.

In 2015, sales of robots jumped 15%. This was the biggest increase recorded in a single year.

According to the International Federation of Robotics and the Swiss robotics company ABB, the global population of working robots today is 1.2 million. It's projected to increase to 2.6 million by 2019.

Over the next decade, the population of industrial robots is projected to increase 300% in the U.S. And worldwide shipments of industrial robots will triple by 2025.

It's easy to see why. Back in 2010, an industrial robot cost an average of $150,000.

By 2015, the average cost declined by 83% to $25,000.

At the same time, there's been a push for higher minimum wages for human workers, as well as demand from companies to increase efficiencies, productivity and reduce costs to remain competitive.

Today, it's not just manufacturing jobs that are set to be replaced by automation and robots.

For example, there are multiple "burger bots" on the market.

Once again, demand, the need for better efficiencies and productivity, and the opportunity to reduce costs are the big drivers.

Last year, the top burger chains in the world notched $75.5 billion in sales.

At the same time, restaurant worker turnover hit an all-time high of 113%.

Momentum Machines' "burger bot" can produce 400 burgers per hour. It's fully autonomous. It slices toppings, grills patties, and can assemble and bag a finished burger.

It could potentially replace two to three full-time line cooks. The savings to a restaurant is estimated to be $90,000 per year in training, salaries and overhead costs.

On the other side of the table is Miso Robotics' "Flippy" burger bot.

Flippy is what's called a collaborative robot - or a "cobot." It's designed to work with people. The robot is driven by an AI system, so it can constantly learn and adapt. And its job is simply to be a line cook. It cooks burgers and plates them on a bun, but it leaves the finishing touches for humans.

Flippy will begin rolling out to 50 CaliBurger locations starting in 2018.

Now, cobots, like Flippy and ABB's YuMi, are potentially one of the biggest automation-related markets. Right now, cobots account for 5% of the global robot population.

But that market is expected to grow from $100 million to $3 billion by 2020. And an MIT study recently found that human-robot teams were 85% more productive than either alone.

According to the McKinsey Global Institute, 90% of jobs can't be fully automated. That means that humans and robots will increasingly need to work together. And ironically, ABB is having to hire three people per day to meet the rising demand for its robots.
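
A quick arithmetic check on the spending figures quoted above, from $71 billion in 2015 to a projected $135.4 billion in 2019, shows they are consistent with the stated 17% compound annual growth rate (a sketch using only the numbers in the excerpt):

    # Check the robot-spending growth figures quoted above
    spend_2015 = 71.0     # $ billions
    spend_2019 = 135.4    # $ billions, projected
    years = 4             # 2015 -> 2019

    implied_cagr = (spend_2019 / spend_2015) ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {implied_cagr:.1%}")    # about 17.5%

    projected = spend_2015 * 1.17 ** years
    print(f"$71B grown at 17% per year for 4 years: ${projected:.1f}B")  # about $133B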

Are you ready for your 'new-best-friend' the cobot?

Your thoughts?

Image courtesy of CharlieAJA at FreeDigitalPhotos.net

Relevant links:

https://en.wikipedia.org/wiki/Cobot

http://www.investmentu.com/article/detail/55670/job-automation-not-just-factory-workers-burger-flipping?src=email

Obesity – Worse Than Smoking? Probably!

Writing on Mercola.com on June 28, 2017, Dr. Mercola summarized statistics on obesity. The numbers are eye-opening!

According to research published in 2013, 1 in 5 American deaths is associated with obesity, and the younger you are, the greater obesity's influence on your mortality.

Considering one-third of American children between the ages of 2 and 19 are now overweight or obese, chronic disease and mortality rates will likely climb dramatically in coming decades as the health of these youths begins to fail.

Since 1980, childhood obesity rates have tripled in the U.S., the rate of obese teens has quadrupled from 5 to 20.5 percent, and nearly 9 percent of 2- to 5-year-olds are now obese.

As of 2014, the obesity rate among adults over 20 was just shy of 38 percent, costing the U.S. medical system $190 billion annually.

In December 2011, severe obesity was included as a qualifying disability under the Americans with Disabilities Act, further raising the cost of obesity on society as a whole.

Being overweight during pregnancy also increases the risk of birth defects, recent research warns, and the more obese the mother, the greater the risk.

More than half of all Americans also struggle with chronic illness - a truly shocking statistic when you consider modern health care is supposed to be the best mankind has ever been privy to. It really says a lot about the influence lifestyle wields on your health, and the price we pay for convenience.

Data collected from tens of thousands of Canadians confirms obesity surpasses smoking in terms of creating ill health, and Dutch researchers recently predicted obesity and inactivity will overtake smoking as a leading cause of cancer deaths specifically.

Processed foods shoulder the greatest blame for this trend. Many children are raised on fast food from the time they're able to eat solid foods, and are given sugary sodas and juices at even younger ages.

The vast majority of people on the planet who eat a primarily processed food diet are burning carbohydrates as their primary fuel, which has the devastating effect of shutting down your body's ability to burn fat.

This is why obesity is so prevalent, and why so many find it nearly impossible to lose weight and keep it off.

Your thoughts?

Image courtesy of hayesphotography at FreeDigitalPhotos.net

http://articles.mercola.com/sites/articles/archive/2017/06/28/obesity-global-epidemic.aspx?utm_source=dnl&utm_medium=email&utm_content=art1&utm_campaign=20170628Z1_UCM&et_cid=DM148752&et_rid=2060414843

Ocean Trash and Space Junk – Which Is Worse?

We are all aware of the amount of trash, mostly plastic, that has accumulated in the world’s oceans.

In January 2015, National Geographic reported that (edited):

There are 5.25 trillion pieces of plastic debris in the ocean. Of that mass, 269,000 tons float on the surface, while some four billion plastic microfibers per square kilometer litter the deep sea.

Though scientists know a great deal about the damage to marine life caused by large pieces of plastic, the potential harm caused by microplastics is less clear. What effect do they have on fish that consume them?

These microplastics can come in the form of microbeads that have been used in exfoliating products and toothpaste (now banned in Canada and the US), often described as rinse-off cosmetics.

These microbeads range from 10 microns to 1 millimeter in size; to put this in perspective, a human hair is about 100 microns in diameter, or about one-tenth of a millimeter.

There is public awareness of ocean trash but the public’s awareness of space junk and its implications is almost non-existent.

Wikipedia describes space junk as (edited):

Space debris, junk, waste, trash, or litter is the collection of defunct man-made objects in space – old satellites, spent rocket stages, and fragments from disintegration, erosion, and collisions – including those caused by debris itself. As of December 2016, there had been five satellite collisions with space waste.

There is cause to be concerned about the amount of space junk orbiting the Earth. Wikipedia continues:

The Kessler syndrome, proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions.

One implication is that the distribution of debris in orbit could render space activities and the use of satellites in specific orbital ranges infeasible for many generations.

Think about the communications satellites (e.g., those used for telephone and internet communication) that would be threatened when junk, travelling at speeds in excess of 17,000 kilometers an hour, crosses the satellites’ orbital paths. At such high speeds, objects as small as half an inch across have the potential to demolish a large satellite.
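
To get a feel for why something half an inch across is so destructive, here is a back-of-the-envelope kinetic-energy calculation. The 17,000 km/h figure comes from the paragraph above; the fragment mass is an assumed value for illustration, and closing speeds in a crossing collision can be far higher.

    # Kinetic energy of a small debris fragment at orbital speed
    mass_kg = 0.01                        # ~10 g fragment (assumed for illustration)
    speed_kmh = 17_000                    # speed quoted above
    speed_ms = speed_kmh * 1000 / 3600    # ~4,722 m/s

    energy_j = 0.5 * mass_kg * speed_ms ** 2
    print(f"Kinetic energy: {energy_j / 1000:.0f} kJ")            # ~111 kJ

    # Roughly the energy of a 1,000 kg car travelling at ~54 km/h
    car_speed_ms = (2 * energy_j / 1000) ** 0.5
    print(f"Equivalent car speed: {car_speed_ms * 3.6:.0f} km/h")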

Seeing is believing, so why don’t you watch the 11-minute video called “Adrift” describing this junk yard above our heads:

https://aeon.co/videos/space-junk-is-a-calamity-in-the-making-and-a-threat-to-anyone-venturing-off-earth

Should we be as concerned about space junk as we are about ocean trash? I think so!

Your thoughts?

Image courtesy of Petrovich9 at FreeDigitalPhotos.net

Relevant links:

http://news.nationalgeographic.com/news/2015/01/150109-oceans-plastic-sea-trash-science-marine-debris/

https://en.wikipedia.org/wiki/Space_debris

https://en.wikipedia.org/wiki/Kessler_syndrome

Old Diseases Are Emerging from The Arctic

Temperatures in the Arctic Circle are rising rapidly.

In the summer months, the permafrost is melting to depths greater than the usual 20 inches.

With the Arctic ice cap in retreat, Russia’s northern coast is experiencing an upsurge in economic activity with new mining and drilling operations.

There is concern that infectious agents will be released.

This has already happened.

Writing on bbc.com on May 4, 2017, Jasmin Fox-Skelly observed:

Climate change is melting permafrost soils that have been frozen for thousands of years, and as the soils melt they are releasing ancient viruses and bacteria that, having lain dormant, are springing back to life.

In August 2016, in a remote corner of Siberian tundra called the Yamal Peninsula in the Arctic Circle, a 12-year-old boy died and at least twenty people were hospitalised after being infected by anthrax.

The theory is that, over 75 years ago, a reindeer infected with anthrax died and its frozen carcass became trapped under a layer of frozen soil, known as permafrost. There it stayed until a heatwave in the summer of 2016, when the permafrost thawed.

This exposed the reindeer corpse and released infectious anthrax into nearby water and soil, and then into the food supply. More than 2,000 reindeer grazing nearby became infected, which then led to the small number of human cases.

Drilling and mining activity is also exposing material that has been frozen for thousands of years. Many pathogens have survived in a frozen state and have been shown to be resistant to modern antibiotics.

Should we be worried?

There are two schools of thought (edited).

One argument is that the risk from permafrost pathogens is inherently unknowable, so they should not overtly concern us.

Instead, we should focus on more established threats from climate change. For instance, as the Earth warms northern countries will become more susceptible to outbreaks of "southern" diseases like malaria, cholera and dengue fever, as these pathogens thrive at warmer temperatures.

The alternative perspective is that we should not ignore risks just because we cannot quantify them.

There is now a non-zero probability that pathogenic microbes could be revived, and infect us.

How likely that is, is not known, but it's a possibility. It could be bacteria that are curable with antibiotics, or resistant bacteria, or a virus. If the pathogen hasn't been in contact with humans for a long time, then our immune system would not be prepared.

Your thoughts?

Image courtesy of bodym at FreeDigitalPhotos.net

Relevant link:

http://www.bbc.com/earth/story/20170504-there-are-diseases-hidden-in-ice-and-they-are-waking-up

3D Printing – What Are The Current Limits?

What is 3D printing?

3dprinting.com describes it as:

 The creation of a 3D printed object is achieved using additive processes. In an additive process an object is created by laying down successive layers of material until the object is created. Each of these layers can be seen as a thinly sliced horizontal cross-section of the eventual object.

It all starts with making a virtual design of the object you want to create. This virtual design is for instance a CAD (Computer Aided Design) file. This CAD file is created using a 3D modeling application or with a 3D scanner (to copy an existing object). A 3D scanner can make a 3D digital copy of an object.
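
To make the “successive layers” idea concrete, here is a minimal sketch that slices a simple shape into horizontal cross-sections, which is conceptually what slicing software does to a CAD model before printing. The sphere, its radius and the layer height are arbitrary values chosen only for illustration.

    import math

    # Slice a 20 mm-radius sphere into 2 mm layers; each layer is the circular
    # cross-section a printer would lay down at that height.
    radius_mm = 20.0
    layer_height_mm = 2.0

    z = -radius_mm
    layer = 0
    while z <= radius_mm:
        cross_section_mm = math.sqrt(max(radius_mm ** 2 - z ** 2, 0.0))
        print(f"layer {layer:2d}: height {z:6.1f} mm, cross-section radius {cross_section_mm:5.1f} mm")
        z += layer_height_mm
        layer += 1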

Much has been written about the promise of 3D printing or manufacturing; however, there are practical and economic limits.

Writing on hbr.org (Harvard Business Review) on 23 June 2015, Matthias Holweg points out some limitations:

3D printing simply works best in areas where customization is key — from printing hearing aids and dental implants to printing a miniature of the happy couple for their wedding cake.

Using a combination of 3D scanning and printing, implants can be customized to specific anatomic circumstances in a way that was simply not feasible beforehand.

However, we also know that 99% of all manufactured parts are standard and do not require customization. In these cases, 3D printing has to compete with scale-driven manufacturing processes and rather efficient logistics operations.

The simple fact is that when customization isn’t important, 3D printing is not competitive.

A second point often overlooked is the labor cost that remains. Counter to common perception, 3D printing does not happen “at the touch of a button”; it involves considerable pre- and post-processing, which incur non-trivial labor costs.

Printing metal parts also remains a challenge. As David Rotman explained on technologyreview.com on 25 April 2017:

Making metal objects using 3-D printing is difficult for several reasons. Most obvious is the high temperature required for processing metals.

The most common way to print plastics involves heating polymers and squirting the material out the printer nozzle; the plastic then quickly hardens into the desired shape.

The process is simple enough to be used in 3-D printers that sell for around $1,000.

But building a 3-D printer that directly extrudes metals is not practical, given that aluminum melts at 660 °C, high-carbon steel at 1,370 °C, and titanium at 1,668 °C.

Metal parts also have to go through several high-temperature processes to ensure the expected strength and other mechanical properties.

Advances are currently being made in creating metal components using laser-driven 3D printing, but the technologies are very expensive and slow compared with conventional manufacturing.

Nevertheless, 3D printing holds great promise.

Photoshop, for example, became commercially available in 1990 and has improved immeasurably in a quarter century.

I think 3D printing, particularly when creating metal objects, will see improvements at least as great as with Photoshop.

Your thoughts?

Image courtesy of hopsalka at FreeDigitalPhotos.net

Relevant links:

https://hbr.org/2015/06/the-limits-of-3d-printing

https://www.technologyreview.com/s/604088/the-3-d-printer-that-could-finally-change-manufacturing/?utm_source=MIT+Technology+Review&utm_campaign=6ab8a23d93-weekly_roundup_2017-05-04_edit&utm_medium=email&utm_term=0_997ed6f472-6ab8a23d93-154369405&goal=0_997ed6f472-6ab8a23d93-154369405&mc_cid=6ab8a23d93&mc_eid=1b51f1a96d

Our Hackable World: At What Cost?

Hardly a week goes by without our reading about a serious data breach at a large corporation or government agency.

Financial institutions are also experiencing theft of funds through hacking. For example, Wikipedia reports that in February 2016 the central bank of Bangladesh had $101 million withdrawn from its account at the Federal Reserve Bank of New York and transferred to fictitious accounts around the world.

Although most of this money was not recovered, the situation could have been worse.

A $20 million transfer to Sri Lanka was blocked only because someone in one of the routing banks in the global SWIFT network for transferring funds saw a spelling error in the documentation and sounded the alarm. Otherwise this transfer would have gone through to the fictitious recipient.

You have to wonder about the vulnerability of systems for handling data and the transfer of money. Part of the explanation of how systems can be hacked is in how they are built.

The Economist explained on April 8, 2017:

Modern computer chips are typically designed by one company, manufactured by another and then mounted on circuit boards built by third parties next to other chips from yet more firms.

A further firm writes the lowest-level software necessary for the computer to function at all. The operating system that lets the machine run particular programs comes from someone else.

The programs themselves come from someone else again.

A mistake at any stage, or in the links between any two stages, can leave the entire system faulty—or vulnerable to attack.

Errors are also made in writing source code, the instructions that a computer compiles before executing a program. Even at a low error rate of one faulty line per 1,000, a billion lines of source code can initially contain a million lines with errors.

Getting each of those lines to interact properly with the rest of the program they are in, and with whatever other pieces of software and hardware that program might need to talk to, is a task that no one can get right first time.
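
The arithmetic behind that claim is simple enough to check directly (a sketch using the figures in the paragraph above):

    # Defect arithmetic from the paragraph above
    lines_of_code = 1_000_000_000          # one billion lines
    error_rate = 1 / 1000                  # one faulty line per thousand written

    expected_faulty_lines = lines_of_code * error_rate
    print(f"Expected faulty lines: {expected_faulty_lines:,.0f}")   # 1,000,000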

Any of these errors, if detected, could potentially be exploited by a hacker.

According to the Cybersecurity Business Report on August 22 2016, the global cost of cybercrime is expected to reach $6 trillion annually by 2021.

The cybercrime cost prediction includes damage and destruction of data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud, post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm.

There are other costs not factored into this estimate: the costs associated with the impact that fear, stress and anxiety have on those directly and indirectly affected by the crime.

As an example, the Office of Personnel Management (OPM) for the US government, which manages information files for the civil service, was hacked some time before 2015.

This security breach involved over 21 million victims who had applied for government security clearances and who had undergone extensive background investigation … including names of family members, spouses and friends. All of this data was accessed by hackers.

In addition, the fingerprint files of 5.6 million federal employees were hacked … many of these employees have access to classified material and facilities and use their fingerprints as identification.

What price do we put on the fear, stress and anxiety these people experience in not knowing if or when or how this data will be used to exploit any vulnerabilities they have?

Your thoughts?

Image courtesy of matejm at FreeDigitalPhotos.net

Relevant links:

https://en.wikipedia.org/wiki/2016_Bangladesh_Bank_heist

http://www.economist.com/news/science-and-technology/21720268-consequences-pile-up-things-are-starting-improve-computer-security

https://en.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach

http://www.csoonline.com/article/3110467/security/cybercrime-damages-expected-to-cost-the-world-6-trillion-by-2021.html

http://www.cnbc.com/2016/02/05/an-inside-look-at-whats-driving-the-hacking-economy.html

http://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/

Re-invent Your Business – Just Like The French Foreign Legion Did, Twice In Its History

As the circumstances or the environment your business operates in change, you must think about repositioning or re-inventing your company so it remains relevant to your customers’ changing needs.

This is precisely what the French Foreign Legion (FFL) did after the First World War and again in the 1960s, after participating in an attempted coup d’état against French President Charles de Gaulle.

But first some background . . .

Founded in 1831, the French Foreign Legion was created for foreign nationals who were willing to undertake military service on behalf of France.

Paradoxically, the Legionnaires’ loyalty has always been to the Legion and NOT to France.

The original purpose of the FFL was to militarily protect and expand the French colonial empire in the 19th century.

The popular and lingering view of the FFL is the one depicted in the 1939 movie ‘Beau Geste’ starring Gary Cooper where men in blue uniforms are marching through the desert in North Africa when not fighting the enemy.

Many of these men had apparently joined the Legion to escape the long arm of the law in their native country or because of various personal problems in their private lives.

During the First World War, the FFL fought on many fronts, but by the end of the war serious consideration was being given to disbanding the Legion because of the high casualty rate it had suffered . . . there were not many legionnaires left.

For the Legion to survive, a way had to be found to encourage enlistment.

As Robert Twigger described the situation on aeon.co (edited):

Colonel Paul-Frédéric Rollet came to their rescue. He understood that, instead of offering a sanctuary for runaway convicts, legionnaires needed a new myth of belonging and self-sacrifice.

Rollet was a military genius who understood the inner symbolism of such things as heroic defeats, odd uniforms and lost limbs. For example, Sir Adrian Carton de Wiart, one of Britain’s most decorated officers, and Admiral Horatio Nelson were both missing hands or arms.

Suggestively, Paul Rollet went into battle with just a rolled umbrella. He believed that a commander showed lack of faith in his men if he needed to be armed, and besides, it distracted from his real task of inspiring his soldiers to fight.

That Rollet seized on the heroic defeat of the Legion at Camarón, fighting the Mexican Army in 1863, is no accident: men brought up to accept death and mutilation as the price for never being forgotten by their uber-family (the Legion) are stronger than those bribed with the comforting notions of victory and glory.

Rollet knew that an army doesn’t march on its feet, or even its stomach. It marches on the stories it tells itself. So he made sure that the Legion was full of traditions and stories and rituals.

He also turned a few marching songs into full-blown anthems. However tough, legionnaires must learn to sing with gusto the songs of former warriors.

Other armies don’t really do this, nor do the officers bring the men breakfast once a year (on Camarón Day, of course).

This action alone mimics a family in its concern. Every Legion memoir (and they are legion), however much it complains of bullying or incompetence, mentions with heartfelt gratitude the songs and traditions imbibed alongside the forced marches.

Fast forward to 1961 . . . when the FFL’s First Paratroop Regiment participated in the failed coup d’état to overthrow French President Charles de Gaulle. The First Paras were disbanded in the following months.

Robert Twigger continues:

The coup attempt brought to the surface the troubled relationship between France and its Foreign Legion. The French admire it and yet don’t quite trust it.

Another re-invention was required.

This time, the solution was truly bold: to turn the Legion into an elite force, a strike force, the kind that could easily put down a coup, or stage one in another country.

The un-disgraced 2nd Parachute Regiment [who did not participate in the attempted coup in 1961] became the ‘Young Lions’ of this newly created force.

Given our rapidly evolving business climate, is it time to re-invent your company to better serve your customers’ needs?

Your thoughts?

Image courtesy of tpsdave at All-free-download.com

Article Link:

https://aeon.co/essays/why-young-men-queue-up-to-die-in-the-french-foreign-legion#

Solving The Enigma of Artificial Intelligence (AI)

As defined by techopedia.com (edited):

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

  • Speech and image recognition
  • Learning
  • Planning
  • Problem solving

Whatis.com defines machine learning as (edited):

A type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed.

Machine learning focuses on the development of computer programs that can change when exposed to new data. 

The process of machine learning is similar to that of data mining. Both systems search through data to look for patterns.

However, instead of extracting data for human comprehension -- as is the case in data mining applications -- machine learning uses that data to detect patterns in data and adjust program actions accordingly.
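
As a minimal illustration of “programs that can change when exposed to new data”, the toy learner below adjusts its internal weights each time it sees a new labelled example, so its behaviour comes from the data rather than from hand-written rules. It is a classic perceptron sketch, not any particular product’s implementation; the feature values and labels are invented.

    # A toy online learner: a perceptron that updates itself on every new example.
    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(features):
        activation = weights[0] * features[0] + weights[1] * features[1] + bias
        return 1 if activation > 0 else 0

    def learn(features, label):
        """Nudge the weights whenever the current prediction is wrong."""
        global bias
        error = label - predict(features)   # -1, 0 or +1
        weights[0] += learning_rate * error * features[0]
        weights[1] += learning_rate * error * features[1]
        bias += learning_rate * error

    # A stream of new data: points whose coordinates sum to more than 1 are labelled 1.
    stream = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]
    for _ in range(20):                      # revisit the stream a few times
        for features, label in stream:
            learn(features, label)

    print(predict((0.1, 0.2)))  # 0 - behaviour learned from the data, not hard-coded
    print(predict((0.8, 0.9)))  # 1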

Writing on theverge.com on October 10, 2016, James Vincent observed (edited):

While companies like Google are confidently pronouncing that we live in an AI age with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done.

Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true artificial intelligence.

One problem is the lack of insight we have into how these systems work in the first place and how they reach their conclusions.

A good demonstration of this problem comes from an experiment at Virginia Tech. Researchers created what is essentially an eye tracking system which records which pixels of an image an artificial intelligence agent looks at first.

The researchers showed the artificial intelligence (AI) agent pictures of a bedroom and asked it: "What is covering the windows?"

They found that instead of looking at the windows, the AI agent looked at the floor. Then, if it found a bed, it gave the answer "there are curtains covering the windows."

This happened to be right, but only because of the limited data the network had been trained on.

Based on the pictures it had been shown, the AI agent had concluded that if it was in a bedroom there would be curtains on the windows.

So when it saw a bed, it stopped looking — it had, in its eyes, seen curtains. Logical, of course, but also daft. A lot of bedrooms don’t have curtains!

Understanding how these AI agents work is critical because otherwise decisions can be made for which no one understands the reasons.

Writing on technologyreview.com on March 14, 2017, Will Knight concluded:

Explainability isn’t just important for justifying decisions. It can help prevent things from going wrong.

An image classification system that has learned to focus purely on texture for cat classification might be fooled by a furry rug. 

So offering an explanation could help researchers make their systems more robust, and help prevent those who rely on them from making mistakes.  

Your thoughts?

Image courtesy of agsandrew at FreeDigitalPhotos.net

Relevant links:

https://www.techopedia.com/definition/190/artificial-intelligence-ai

http://whatis.techtarget.com/definition/machine-learning

http://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks

https://www.technologyreview.com/s/603795/the-us-military-wants-its-autonomous-machines-to-explain-themselves/?set=603859

Reversal Thinking & Innovation - Circular Airport Runways

Process reversal can lead to innovation.

Reversal thinking essentially involves reframing a process by thinking about it ‘backwards’.

A good example is how we print documents . . . the paper moves through the stationary printer.

Alternatively, the printer could move across a stationary sheet of paper while it prints. This is exactly what ZUtA Labs did when they developed their first mini-robotic pocket printer, which is about the size of a hockey puck and twice as thick.

A second example involves what is commonly known as 3-D printing or additive manufacturing, which is the reverse of subtractive manufacturing.

An example of subtractive manufacturing is when a piece of steel has portions of it removed to create a blade for a gas turbine engine. This blade can also be created through an additive process where material is added layer-by-layer (i.e., 3-D printed).

A third example involves airport design and runway layout. When planes land, they do so on runways that have been laid out to take into account the usual direction of the winds to maximize the probability of an airplane landing into headwinds.

From time to time there are crosswinds, which if severe, can cause the airport to cease operations or require airplanes to make their approaches flying almost sideways or at an angle to the runway.

In a make-believe-world it would be ideal to make movable runways so that pilots can always make their landing approaches and take-offs directly into headwinds.

One way of accomplishing this would be to build a circular runway which is 2.2 miles in diameter. Work on this concept has been in progress for years.

Katharine Schwab wrote on fastcodesign.com on March 27 2017 (edited):

Since 2012, Henk Hesselink and his team at the National Aerospace Laboratory in the Netherlands have been working on a runway design that’s circular instead of straight.

Their so-called Endless Runway Project, funded by the European Commission’s Seventh Framework Programme, proposes a circular design that would enable planes to take off in the direction most advantageous for them: namely, the direction without any crosswinds.

As Hesselink tells Co.Design, crosswinds are exactly what they sound like: winds that buffet an airplane from the side as it lands. He was inspired to create a new kind of runway while watching “scary” landing videos online, which show crosswinds in action.

When crosswinds are light, they have no impact on taking off or landing, but when they’re too strong, runways facing perpendicular to the crosswinds have to be shut down entirely—which can seriously impact not just one airport, but the entire network. It’s something that happens frequently near the ocean.

For instance, Hesselink says that the Amsterdam airport often has to switch between runways during periods of bad conditions, and in smaller cities with fewer runways, crosswinds can grind all flights to a complete halt.

But the circular runway system that Hesselink designed, with a diameter of about 2.2 miles and circumference of about 6.9 miles, can accommodate two planes landing simultaneously even when there are bad crosswinds.

That’s because there are always two areas on the ring where the crosswinds will be aligned with the direction of takeoff. In good conditions, three planes can land and take off simultaneously.

The circular runway works almost like a high-speed racetrack or roulette wheel, Hesselink says. If the circular runway were completely flat on the ground, the centrifugal forces would be too great and push the plane off the runway.

But his design is slightly banked, meaning it’s slightly raised on its outer edges to keep the plane on the runway as it gains speed.

For now, the Endless Runway remains a concept where the only testing has been within the safe confines of computer simulation. But Hesselink hopes to test the idea in real life on a racetrack with a drone.
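
For a rough feel for the banking Hesselink describes, the standard banked-turn relation tan(theta) = v^2 / (g x r) gives the angle at which a plane rolling around the ring would feel no net sideways force. The landing speed below is an assumed, typical value; the text gives only the roughly 2.2-mile diameter (and the quoted 6.9-mile circumference is simply pi times that diameter).

    import math

    # Bank angle for a coordinated turn around the ring: tan(theta) = v^2 / (g * r)
    diameter_m = 2.2 * 1609.34            # ~2.2 miles, from the article
    radius_m = diameter_m / 2             # ~1,770 m
    landing_speed_ms = 70.0               # ~250 km/h approach speed (assumed)

    bank_rad = math.atan(landing_speed_ms ** 2 / (9.81 * radius_m))
    print(f"Radius: {radius_m:.0f} m, bank angle: {math.degrees(bank_rad):.1f} degrees")

In the actual concept the bank would presumably vary across the runway’s width and with aircraft speed, so this single figure is only indicative of the scale of the slope involved.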

Your thoughts?

Image: Netherlands Aerospace Centre

Relevant links:

http://www.zutalabs.com/

https://www.creativemechanisms.com/blog/additive-manufacturing-vs-subtractive-manufacturing

https://www.fastcodesign.com/90107235/why-airport-runways-should-actually-be-circular

Should You Bring Artificial Intelligence Into Your Business?

Artificial Intelligence (AI) holds great potential for most businesses since it can be used to automate many mental tasks taking less than one second of thought. Image recognition is a good example of such a task.

Such automation can be done either today or in the very near future, according to Andrew Ng, who is head of global Artificial Intelligence strategy at the Chinese search company Baidu.

Ng draws an analogy between the rise of Artificial Intelligence and the introduction of electricity. Writing in Harvard Business Review in November 2016 he observed:

A hundred years ago electricity transformed countless industries; 20 years ago the internet did, too. Artificial intelligence is about to do the same.

To take advantage, companies need to understand what artificial intelligence can do and how it relates to their strategies. But how should you organize your leadership team to best prepare for this coming disruption?

A hundred years ago, electricity was really complicated. You had to choose between AC and DC power, different voltages, different levels of reliability, pricing, and so on.

And it was hard to figure out how to use electricity: Should you focus on building electric lights? Or replace your gas turbine with an electric motor?

Thus many companies hired a VP of Electricity to help them organize their efforts and make sure each function within the company was considering electricity for its own purposes or its products. As electricity matured, the role went away.

Recently, with the evolution of IT and the internet, we saw the rise of CIOs to help companies organize their information. As IT matures, it is increasingly becoming the CEO’s role to develop their companies’ internet strategy.

Indeed, many S&P 500 companies wish they had developed their internet strategy earlier. Those that did now have an advantage. Five years from now, we will be saying the same about AI strategy.

Ng recommends hiring a Chief AI Officer (CAIO) so that Artificial Intelligence gets applied across all divisions of your company. A CAIO should have the following skills:

Good technical understanding of AI and data infrastructure. In the AI era, data infrastructure — how you organize your company’s databases and make sure all the relevant data is stored securely and accessibly — is important.

Ability to work cross-functionally. AI itself is not a product or a business. Rather, it is a foundational technology that can help existing lines of business and create new products or lines of business.

Strong intrapreneurial skills. AI creates opportunities to build new products, from self-driving cars to speakers you can talk to, that just a few years ago would not have been economical.

A leader who can manage intrapreneurial initiatives will increase your odds of successfully creating such innovations for your industry.

Ability to attract and retain AI talent. This talent is highly sought after. Among new college graduates, I see a clear difference in the salaries of students who specialized in AI.

A good Chief AI Officer needs to know how to retain talent, for instance by emphasizing interesting projects and offering team members the chance to continue to build their skill set.

Your thoughts?

Image courtesy of NicoEINino at FreeDigitalPhotos.net

Relevant links:

https://hbr.org/2016/11/hiring-your-first-chief-ai-officer

https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now

Exponential Technologies – What Are They?

Peter Diamandis is an American engineer, physician and entrepreneur who co-founded Singularity University, a Silicon Valley think tank that provides educational programs and runs a business incubator.

The university focuses on scientific progress and the development of ‘exponential’ technologies such as artificial intelligence, robotics and virtual reality.

The incubator encourages application of these technologies in various fields such as data science, digital biology, medicine and self-driving vehicles.

In his primer on exponential technologies, Peter Diamandis writes (edited):

For a technology to be ‘exponential’, its power and/or speed doubles each year, and/or its cost drops by half.

They are technologies which are rapidly accelerating and shaping major industries and all aspects of our lives.

Diamandis constructed a framework for summarizing the characteristics of exponential technologies. These characteristics are interrelated.

He calls these characteristics the 6 D’s. Here is a summary (edited) and explanation as presented by Vanessa Bates Ramirez writing on SingularityHub.com on November 22 2016:

1. Digitized – it can be programmed

“Anything digitized enters the same exponential growth we see in computing.

Digital information is easy to access, share and distribute. It can be spread at the speed of the internet.

Once something can be represented in ones and zeros – from music to biotechnology – it becomes an information based technology and enters exponential growth.”

2. Deceptive – it is initially slow in developing

“When something starts being digitized, its initial period of growth is deceptive because exponential trends do not seem to grow very fast.

Doubling .01 only gets you .02, then .04, and so on. Exponential growth really takes off after it breaks the whole number barrier.

Then 2 quickly becomes 32, which becomes 32,000 before you know it.”

As an example, artificial intelligence had its origins in research conducted during the Second World War (1939 to 1945) but did not demonstrate its true potential until more than 50 years later, in 1997, when IBM’s supercomputer ‘Deep Blue’ defeated world-champion chess player Garry Kasparov.
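
The sketch below simply plays out that doubling. Starting from 0.01, seven doublings still only reach 1.28, yet twenty-one reach roughly 21,000; likewise, the quote’s 32 doubled ten more times is 32,768, the “32,000 before you know it”. The starting value and number of steps are arbitrary.

    # Doubling from 0.01: deceptive at first, explosive once past whole numbers
    value = 0.01
    for step in range(1, 22):
        value *= 2
        print(f"after {step:2d} doublings: {value:,.2f}")
    # after  7 doublings: 1.28
    # after 21 doublings: 20,971.52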

3. Disruptive – it is more effective and cheaper than what it replaces

“The existing product for a market or service is disrupted by the new market the exponential technology creates because digital technologies outperform in effectiveness and cost.

Once you can stream music on your phone, why buy CDs?

If you can also snap, store and share photographs, why buy a camera and film?”

4. Dematerialized – take something that is physical and re-create it digitally.

“Separate physical products are removed from the equation.

Technologies that were once bulky or expensive – radio, camera, GPS, video, phones, maps – are all now in a smart phone that fits in your pocket.”

As an example, the Sony Walkman, a portable cassette tape player introduced in 1979, allowed people to carry their music with them. Now the same end is accomplished via the iPhone and digitized music.

5. Demonetized – becoming cheaper

“Money is increasingly removed from the equation as the technology becomes cheaper, often to the point of being free.

Software is less expensive to produce than hardware and copies are virtually free.

You can now download any number of apps on your phone to access terabytes of information and enjoy a multitude of services at costs approaching zero.”

6. Democratized – available to everyone, not just the wealthy

“Once something is digitized, more people have access to it. Powerful technologies are no longer only for governments, large organizations or the wealthy.”

If you can buy a cheap phone with an internet connection, you have the same communications capabilities and access to the same platforms as a billionaire.

I think the 6 characteristic D’s of exponential technologies can be summarized even further as:

1. A digitized form of a previous technology
2. Accelerating development of improvements following a slow start
3. More effective and cheaper than what it is displacing and therefore available to everyone

Your thoughts?

Image courtesy of akindo at FreeDigitalPhotos.net

Relevant links:

https://su.org/concepts/

https://singularityhub.com/wp-content/uploads/2016/11/6Ds-Infographic-v2-2.jpg

http://www.bbc.co.uk/timelines/zq376fr#zw376fr

Read More

Meet ‘Flippy’ – CaliBurger’s Robot Hamburger Cook

CaliBurger is a California-based hamburger restaurant chain similar to Five Guys, In-N-Out and Shake Shack. It positions itself as a tech company that also sells hamburgers.

While in the restaurant, customers can play games such as GemJump and Minecraft and see the results of interactive in-house gaming displayed on a huge video wall.

Currently, CaliBurger has restaurants in 13 countries including China, Saudi Arabia, Taiwan and Sweden.

Automation of some jobs is the next step for CaliBurger. Line cooks are the target.

Writing on singularityhub.com, Vanessa Bates Ramirez provides details (edited version):

CaliBurger has partnered with a company called Miso Robotics to develop ‘Flippy’, a robotic kitchen assistant, and recently installed one in its Pasadena, California location.

Flippy is more than just an assembly-line robot that requires an organized workspace with ingredients precisely positioned before it can cook hamburgers.

Flippy incorporates the latest machine learning and artificial intelligence software to locate and identify everything in its workspace and to learn from experience through a constant feedback loop.

The bot consists of a cart on wheels with a single six-axis arm providing full range of motion allowing it to perform multiple functions.

It has an assortment of tools such as spatulas, scrapers and tongs which it can change by itself, depending on the task.

Some of the bot’s key tasks include pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.

Sensors on the grill-facing side of the bot take in thermal and 3D data, and multiple cameras help Flippy ‘see’ its surroundings. The bot knows how many burgers it should be cooking at any given time through a system that digitally sends tickets back to the kitchen from the restaurant’s counter.
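As a purely hypothetical illustration of that ticketing idea (Miso Robotics has not published its software, so the names and structure below are my own assumptions), a counter-to-kitchen queue could look something like this in Python:

from collections import deque

orders = deque()  # tickets sent digitally from the counter to the kitchen

def take_order(burger_count: int) -> None:
    # Counter side: push a ticket onto the kitchen queue.
    orders.append(burger_count)

def patties_to_cook() -> int:
    # Bot side: total patties currently demanded by open tickets.
    return sum(orders)

take_order(2)
take_order(3)
print(patties_to_cook())  # 5 patties should be on the grill

In practice each ticket would also be removed from the queue once its burgers are plated, but the sketch shows how the bot can always ‘know’ how many patties the counter expects.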

Nevertheless, a human is required to finish the burger. Flippy alerts human cooks when it’s time to put cheese on a grilling patty. A human is also needed to add sauce and toppings once the patty is cooked, as well as wrap the burgers that are ready to eat.

Two of the bot’s most appealing features for restaurateurs are its compactness and adaptability—it can be installed in front of or next to any standard grill or fryer, which means restaurants can start using Flippy without having to expand or reconfigure their kitchens.

Because the bot uses machine learning, it can also learn to prepare other items on the menu.

According to the U.S. Bureau of Labor Statistics, there were about 2.3 million cooks in the United States in 2014; line cooks are included in this figure.

Flippy takes care of jobs around the grill that are repetitive and dangerous due to the possibility of cuts or burns.

I believe many line cooks operating in a repetitive-task environment can and will be replaced by automation. Bots like Flippy are more reliable than humans, can work longer shifts, provide a uniform product and never call in sick. Nor are there any personnel issues.

The argument has been made that destruction of one job will lead to the creation of another job; in the case of robots like Flippy, new tech jobs will certainly be created to manufacture and maintain these devices.

These new jobs, however, require higher levels of technical expertise, which line cooks cannot easily be re-trained to acquire.

The prospects for people who lose their jobs to automation are not good, particularly for those whose entire skill set has been replaced by an ‘intelligent’ bot.

Your thoughts?

Image courtesy of chiarito at FreeDigitalPhotos.net

Relevant links:

https://singularityhub.com/2017/03/08/new-burger-robot-will-take-command-of-the-grill-in-50-fast-food-restaurants/?utm_source=Singularity+Hub+Newsletter&utm_campaign=8aef49b2f5-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-8aef49b2f5-58188205

http://canadianrestaurantnews.com/canada/latest-news/caliburger-combines-burgers-with-interactive-gaming

https://www.bls.gov/ooh/food-preparation-and-serving/cooks.htm

http://www.geekwire.com/2015/inside-caliburger-new-in-n-out-like-burger-shop-lets-people-play-minecraft-against-each-other/

https://caliburger.com/

Read More

The Importance of Recess

The Centers for Disease Control and Prevention defines recess as “regularly scheduled periods within the elementary school day for unstructured physical activity and play.”

In elementary school, my academic day started at 9:00 AM and ended at either 3:30 or 4:00 PM, depending on whether we had misbehaved and had to stay until 4:00 as punishment.

There was a 15-minute recess in both the morning and the afternoon; lunch was 90 minutes long, from noon until 1:30 PM, with almost all students walking or bicycling home. Most of us were back in the school yard by 1:00 PM, playing whatever games we wanted.

I liked school and was interested in all subjects; however, towards the end of each classroom study segment I looked forward to either recess or going home for lunch and then doing something involving physical activity.

It seems that these days elementary schools are allocating less and less time to unstructured free play and more to academic pursuits.

My personal experience suggests that this might not be the best way to run an elementary school to achieve optimal learning conditions.

Writing in theatlantic.com in December 2016, Alia Wong explains (edited version):

In Florida, a coalition of parents known as “the recess moms” has been fighting to pass legislation guaranteeing the state’s elementary-school students at least 20 minutes of daily free play. Similar legislation recently passed in New Jersey, only to be vetoed by the governor, who deemed it “stupid.”

When, you might ask, did recess become such a radical proposal? In a survey of school-district administrators, roughly a third said their districts had reduced outdoor play in the early 2000s.

Likely culprits include concerns about bullying and the No Child Left Behind Act, whose time-consuming requirements resulted in cuts to play.

The benefits of recess might seem obvious—time to run around helps kids stay fit. But a large body of research suggests that it also boosts cognition.

Many studies have found that regular exercise improves mental function and academic performance.

And an analysis of studies that focused specifically on recess found positive associations between physical activity and the ability to concentrate in class.

Preliminary results from an ongoing study in Texas suggest that elementary-school children who are given four 15-minute recesses a day are significantly more empathetic toward their peers than are kids who don’t get recess.

Perhaps most important, recess allows children to design their own games, to test their abilities, to role-play, and to mediate their own conflicts—activities that are key to developing social skills and navigating complicated situations.

I agree, especially with Alia Wong’s last comment.

As an elementary school pupil I remember playing pick-up soccer and baseball during recess and lunch time.

Without any adult supervision, we settled disputes amongst ourselves and renegotiated rules as required.

We were masters of both our physical space and our relationships with one another, if only for a short time.

Fast forward to today . . .

Some elementary schools have set up stationary bicycles with desks attached to the handlebars. Students may use them to do their work when they feel unable to sit still and concentrate on their academic studies. It seems that simultaneous physical activity helps them focus.

The results, so far, appear to be promising. However, as with most innovations, only time will tell whether studying while pedalling a stationary bicycle consistently aids learning.

Or are stationary bicycles just a passing fad that students and teachers want to believe helps with cognitive functioning?

Your thoughts?

Image courtesy of dolgachov at FreeDigitalPhotos.net

Relevant links:

http://pediatrics.aappublications.org/content/131/1/183

https://www.theatlantic.com/magazine/archive/2016/12/why-kids-need-recess/505850/

http://globalnews.ca/news/3187541/saskatoon-teachers-use-stationary-bikes-to-help-students-concentrate/

Read More

The Appeal of The Physical in Our Digital Age

The digital age we live in has changed the way we listen to music, capture or record images, and read. As new digital technologies developed, old ones were swept aside, and many believed it was only a matter of time before they ceased to be used altogether, remembered only as museum curiosities.

This has not happened.

In fact, vinyl records and film photography are experiencing a renaissance, and paper books are still being sold and read.

Christian Jarrett in his article on “The psychology of stuff and things” in The Psychologist magazine explains:

More than mere tools, luxuries or junk, our possessions become extensions of the self. We use them to signal to ourselves, and others, who we want to be and where we want to belong. And long after we’re gone, they become our legacy. Some might even say our essence lives on in what once we made or owned.

I doubt that many people, upon inheriting old digital files, would view them as legacy items by which to remember someone.

Digital images, words and sounds, which have no physical manifestation and can be instantly uploaded or deleted, may not be considered “real”.

Perhaps people are now seeking “real” things in their physical world to complement their digital world . . . something they can touch, see and smell.

Think about vinyl records.

In 2006, only 900,000 new vinyl records were sold in the United States; by 2015, sales of new records had risen to 12 million, an increase of more than 30% per year. And sales are not just to older people who used vinyl records in their youth and might now be buying for nostalgic reasons; young digital natives who never experienced vinyl records are also buying them.
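As a quick back-of-the-envelope check of that growth figure (my own calculation, not from the sales data source), the implied compound annual growth rate can be computed in a few lines of Python:

# Compound annual growth rate (CAGR) implied by the vinyl sales figures:
# 0.9 million units in 2006 growing to 12 million units in 2015.
units_2006 = 0.9e6
units_2015 = 12e6
years = 2015 - 2006  # nine years of growth

cagr = (units_2015 / units_2006) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%}")  # roughly 33% per year

That works out to roughly 33% per year, consistent with the ‘more than 30%’ claim above.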

As David Sax explains in his new book “The Revenge of Analog”:

Records are large and heavy; require money, effort, and taste to create and buy and play; and cry out to be thumbed over and examined. Because consumers spend money to acquire them, they gain a genuine sense of ownership over the music, which translates into pride.

Film photography is also on the upswing across all age groups, producing a physical record (i.e., a negative) that can be printed on paper or scanned into a digital file . . . the point being that you have a physical manifestation of the image you captured, which, if stored properly, will be usable for a very long time . . . perhaps more than a hundred years under ideal conditions.

What happens to JPEG files as storage technology evolves and the ability to open these files is no longer available? Should you have any, how many of you could still retrieve the data from your old 8” floppy disks? (Hint: you can still find these drives on eBay starting at about $250 . . . but what about the software that was used to write the files originally? Not so easy, is it?)

And finally, about reading and the paper book. Pew Research reports that two-thirds of Americans read a print book in 2016 . . . about the same proportion as in the preceding four years. Only about a quarter read an e-book in the same period.

I believe we are not ready to entirely abandon our physical media and totally embrace a digital world. While the digital world is here to stay, there will still be a market for outdated technologies.

Your thoughts?

Image courtesy of dan at FreeDigitalPhotos.net

Relevant links:

https://thepsychologist.bps.org.uk/volume-26/edition-8/psychology-stuff-and-things

https://www.amazon.ca/Revenge-Analog-Real-Things-Matter/dp/1610395719

http://www.pewinternet.org/2016/09/01/book-reading-2016/

Read More

Kodak Invented The Technology That Destroyed It

Many people continue to believe that Kodak sat by idly as the digital camera destroyed its film business. This was not the case.

Kodak was very active in the research and development of digital imaging technology.

Writing in the July 2016 issue of Harvard Business Review, Scott Anthony points out that:

“The first prototype of a digital camera was created in 1975 by Steve Sasson, an engineer working for … Kodak. The camera was as big as a toaster, took 20 seconds to take an image, had low quality, and required complicated connections to a television to view, but it clearly had massive disruptive potential.”

David Usborne writing on independent.co.uk observed that:

“A vice-president left [Kodak] in 1993 because even then he couldn't persuade it to manufacture and market a digital camera. ‘We developed the world's first consumer digital camera but we could not get approval to launch or sell it because of fear of the effects on the film market.’”

To a degree this was understandable since the profit on film was 70 cents on the dollar; such margins could not be achieved with digital cameras.

Then, in 1994, Apple launched the ‘QuickTake’, one of the first consumer digital cameras. Apple did not manufacture it . . . Kodak did!

Meanwhile, Kodak continued to design and manufacture high-end digital cameras and other imaging equipment, not realizing the mass-market potential of consumer digital cameras.

According to Wikipedia (edited):

In 1999 Kodak had a 27% market-leading share in digital camera sales.

In 2001 Kodak held the No. 2 spot in U.S. digital camera sales (behind Sony)  but it lost $60 on every camera sold.

By 2010 it held 7% share, in seventh place behind Canon, Sony, Nikon and others.

Despite the high growth, Kodak failed to anticipate how quickly digital cameras would become low-margin commodities as more companies entered the market in the mid-2000s.

Kodak’s digital cameras were soon undercut by Asian competitors that could produce their offerings more cheaply.

Now an ever-smaller percentage of digital pictures is taken on dedicated digital cameras, which were gradually displaced in the late 2000s by cameras on cellphones, smartphones and tablets.

So you see, Kodak was not blind to the digital revolution but actually participated in it. Trying to protect its film business, however, prevented the company from moving more aggressively into the consumer digital camera market.

Your thoughts?

Image courtesy of bpablo at FreeDigitalPhotos.net

Relevant links:

https://hbr.org/2016/07/kodaks-downfall-wasnt-about-technology

http://www.independent.co.uk/news/business/analysis-and-features/the-moment-it-all-went-wrong-for-kodak-6292212.html

https://en.wikipedia.org/wiki/Kodak#Shift_to_digital

http://sloanreview.mit.edu/article/the-real-lessons-from-kodaks-decline/

Read More

Smart Phone Addiction – Going “Cold Turkey”

Writing on theguardian.com on February 11 2016, Jenna Woginrich describes life after getting rid of her mobile communication device 18 months ago. Here is an edited excerpt:

The phone rings: it’s my friend checking to see if I can pick her up on the way to a dinner party. I ask her where she is and as she explains, I reach as far as I can across the countertop for a pen.

I scribble the address in my trusty notebook I keep in my back pocket. I tell her I’ll be at her place in about 20 minutes. Then I hang up. Literally.

I take the handset receiver away from my ear and hang it on the weight-triggered click switch that cuts off my landline’s dial tone.

I take my laptop, Google the address, add better directions to my notes and head outside and drive over. If I get lost on the way, I’ll need to ask someone for directions. If she changes her plans, she won’t be able to tell me or cancel at a moment’s notice. If I crash on the way, I won’t be calling 911.

I’m fine with all of this. As you guessed by now, I haven’t had a cellphone for more than 18 months.

I didn’t just cancel cellular service and keep the smartphone for Wi-Fi fun, nor did I downgrade to a flip phone to “simplify”; I opted out entirely. There is no mobile phone in my life, in any form, at all.

Arguably, there should be. I’m a freelance writer and graphic designer with many reasons to have a little computer in my holster, but I don’t miss it. There are a dozen ways to contact me between email and social media. When I check in, it’s on my terms.

“My phone” has become “the phone”. It’s no longer my personal assistant; it has reverted to being a piece of furniture – like “the fridge” or “the couch”, two other items you wouldn’t carry around with you.

I didn’t get rid of it for some hipster-inspired luddite ideal or because I couldn’t afford it. I cut myself off because my life is better without a cellphone.

I’m less distracted and less accessible, two things I didn’t realize were far more important than instantly knowing how many movies Kevin Kline’s been in since 2010 at a moment’s notice. I can’t be bothered unless I choose to be. It makes a woman feel rich.

When friends found out, I was told it was as insane a decision as leaving a rent-controlled apartment.

But I was tired of my world existing through a black screen and even more tired of being contacted whenever anyone (or any bot) felt like it.

I was constantly checking emails and social media, or playing games. When I found out I could download audiobooks, the earbuds never left my lobes. I was a hard user. I loved every second of it.

I even slept with my phone by my side. It was what I fell asleep watching, and it was the alarm that woke me up. It was never turned off.

It got so bad that I grew uncomfortable with any 30-second span of hands-free idleness. I felt obligated to reply to every Facebook comment, text, tweet, and game request.

As an author I wrote it all off as reader interaction, free publicity and important grassroots marketing. These were the justifications of a junkie; I was an addict at risk of losing myself completely.

I made the decision to break up with my device and I did it “cold turkey”.

I’ve been clean a year and a half now, and I’m doing fine. I get plenty of work, I don’t miss invitations, and I’m no longer scared of my own thoughts.

I got a landline and I got more sleep. I look people in the eye. I eat food instead of photographing it. My business, social life, and personal safety have not evaporated overnight either.

Turns out a basic internet connection and laptop is plenty of connectivity to keep friends informed, weekends fun and trains running on time. And while I might be missing out on being able to call 911 at any moment, it’s worth the sacrifice to me.

I’m glad to be back in the world again. It beats waiting for the notification alert telling me that I exist.

Your thoughts?

Image courtesy of Georgijevic at FreeDigitalPhotos.net

Relevant links:  

https://www.theguardian.com/technology/2016/feb/11/smartphone-technology-addiction-facebook-twitter

https://well.blogs.nytimes.com/2015/07/06/screen-addiction-is-taking-a-toll-on-children/?_r=0

Read More

Reasons for Sending Handwritten Notes and Letters

With the Internet and social media dominating as communication channels, the art of the handwritten letter or note delivered by snail mail seems to have taken a back seat to instantaneous electronic communication.

Whenever I open my mailbox and I see an envelope that has been addressed by hand, I am more likely to open it first. Usually it contains a personal communication from a friend or relative.

Can this approach be taken for more effective business communications?

The answer is “yes”.

Writing on americanexpress.com, Carla Turchetti made the following points in support of the handwritten note or card:

Use handwritten notes to reach out to prospective clients and to say thank you to vendors and clients. Email is too easy to ignore. Phone calls can be invasive and are more challenging to schedule. Letters are hard to ignore and not invasive.

Taking the time to write something by hand makes the recipient feel special.

Handwritten notes can be more convincing and powerful than the actual message.

Handwritten notes remind us to slow down and take note … of our surroundings, our customers, our community and our clients.

On a personal level as well there are good reasons for sending handwritten notes and letters.

Writing on huffingtonpost.com on May 15 2015, Traci Bild provided several reasons.

“1. A Lifetime Keepsake: Personal handwritten notes grow rarer by the day. According to the U.S. Postal Service’s annual survey, the average home only received a personal letter once every seven weeks in 2010, down from once every two weeks in 1987. In a world where people seem to have everything, words on paper, sealed with a stamp, can be far more valuable than any material item purchased.

2. Your Heart on Paper: In a wired world — where emails, tweets and text messages are more accessible than handwritten notes — there is something magical about reading words written in longhand.

3. The Ultimate Surprise: Let’s be honest: How do you feel when someone handwrites you a note? Imagine the person you write walking to their mailbox, opening it and finding a letter inscribed to them from you. It will be the best part of their day!

4. A Feeling of Importance: What people want more than anything is to feel validated and to know they matter. Your handwritten letter will send a clear message: You are important and you do matter to me.

5. It’s Fun! Purchase beautiful stationery that reflects your personality, buy interesting stamps and try out a sealing wax stamp to secure the envelope. I have a butterfly and a heart and it’s like putting a cherry on top!

6. No Regrets: How many times have you missed the opportunity to say what needed to be said, only to find it was too late? Make a point of letting the people you care about, who have influenced and shaped your life, know how you feel.”

And one final point. You should keep the handwritten cards and letters you receive.

I have letters my parents wrote to each other during World War II when they were separated for over four years. Reading these letters today allows me to reconnect with them . . . they passed away over 20 years ago.

Your thoughts?

Image courtesy of Eerik at FreeDigitalPhotos.net

Relevant links:

https://www.americanexpress.com/us/small-business/openforum/articles/the-forgotten-power-of-handwritten-notes/

http://www.huffingtonpost.com/traci-bild/5-reasons-to-write-a-handwritten-letter-now-not-later_b_7284236.html

Read More