Denning Branch International Home Page


News archives by date.

ROSENFIELD (Posted August 22, 2005.)

Rosenfield is an important consultant to the nursing home industry in the USA. It concentrates on ensuring that the architectural design and operational management of nursing homes for the aged provide the very best environment, care and quality of living possible for the residents.

Rosenfield's founder, Zachary Rosenfield, developed what has become known as Cultural Pioneering, which has since been the subject of an MIT case study, while designing hospitals and medical buildings in the 1960s with his father, a one-time city planner for New York City.

Denning Branch International, with academic researchers from the University of Rochester, has undertaken with Rosenfield the development of a major new management tool called TOPS (The Optimum Performance System), which allows absolute best results, and sustainable continuous improvement, while acknowledging the finite resources available to managers.

The first trials of the new system are planned for this year, with over 20 nursing homes in the New Jersey and New York area vying for participation in the first round, which will gather the peer review data necessary for the TOPS database.

ROYAL LAUNCESTON SHOW (Posted June 22, 2005.)

Following the turnaround of the Royal National Agricultural and Pastoral Society of Tasmania by Allan Branch of Denning Branch International, formally concluded on June 10 with the handover of operations to the newly installed and trained Board of Directors and executive management team, the Society is poised to present a spectacular 2005 Royal Launceston Show.

Part of the challenge faced in the turnaround had been to re-invent what an agricultural show is in today's world. The old adage of "country comes to the city" no longer applies in a world of instant communications, efficient transport, universal education and widespread electronic entertainment. There is little that the city offers to country people that they do not already receive, and little that city people can experience from a rural perspective that advances their knowledge or awareness of the world. With this in mind, the whole rationale of a Show has been re-thought from basic principles.

Consequently, RNAPS intends to present a range of entertainment and attractions that will establish the Show as a premier feature event for the whole community. Rather than concentrate on just livestock and rural competitions, or just the old sideshow-alley type of entertainment (which is of course still a key part of the atmosphere of an agricultural show), this year cultural and community activities, competitions and exhibitions, and greater commercial trade presentations will feature prominently. Art competitions, gate prizes, school projects with collectables and activities, advanced technology, and heritage and history are part of the fare.

Best practice market analysis and matrix studies have been employed to differentiate and position the Show against formidable alternative events. State-of-the-art branding and promotional practices, particularly segmented marketing, have been used to present the event to the various parts of the community, all of whom are potential customers. Once again, everyone will want to come to the Show, but for different reasons, and no one will leave feeling there was not value for money, which was the largest complaint from the past as indicated by market surveys. (The lesson? Listen to your market and be your own best competition.)

WAVERLEY AUSTRALIA (Posted July 1, 2005.)

Allan Branch of Denning Branch International conducted a recent analysis of the Waverley Woollen Mills, the last of its kind in Australia, to determine the reason for its demise and entry into involuntary receivership. Although the analysis produced a formula for instant recovery, the result was not taken up by the Receiver and other stakeholders in this 130-year-old operation, and the company will unfortunately either be wound up or sold off.

The analysis showed a classic case of a company's inability to transition from Stage 2 to Stage 3 of its life cycle: the formalization of structure instigated over the last decade by the incumbent management did not move on to the appropriate delegation of functions indicated by that structure and typical of a medium-sized operation. The result of this textbook case was a gradual increase in inefficiency, poor decision making caused by limited analysis behind decisions, especially over capital spending, and ineffective marketing strategies. The need to amortise the expenses of mistakes and the resultant decrease in profits created the decline. (The lesson? Don't keep certain management styles beyond their "use-by" date.)

ROYAL LAUNCESTON SHOW (Posted May 1, 2005.)

Allan Branch of Denning Branch International is conducting a turnaround of the Royal National Agricultural and Pastoral Society of Tasmania. The Society, which among other events conducts the 133-year-old annual Royal Launceston Show, has exited Voluntary Administration and is gearing up to present a show this year that will be seen as a turning point in the history of the Society.

Central to the turnaround was an analysis that determined that over two decades the Society had lacked the benefit of high-end executive management talent to plan and deliver competitive growth strategies. The result was strength in the general management of funds and resources, but overall decline and an absence of relevance to today's community needs.

Once this definitive problem had been identified, the fix was easy and immediate: introduce experienced corporate governance, hire new executive management, retrain staff, redesign the internal organisation, rigorously allocate tasks and roles, analyse competitors, build positive promotional branding, and expand to a full annual calendar of exciting events instead of being a "one product wonder".

On April 28, 2005, the Society exited Administration with no debts, all its assets intact, with more cash than when it entered Administration, and with a totally restructured internal organisation.

The organization faced challenges in recent years that forced it to dramatically rethink not only its operations and delivery of services, but its whole raison d'être. Today RNAPS is a vibrant and energetic organization that serves as a model of excellence for organizations of its type Australia-wide.

ROYAL LAUNCESTON SHOW (Posted April 1, 2005.)

Allan Branch of Denning Branch International is conducting a turnaround of the Royal National Agricultural and Pastoral Society of Tasmania. The Society, which, among other events, conducts the annual Royal Launceston Show, is in Voluntary Administration.

The Royal National Agricultural and Pastoral Society of Tasmania (RNAPS) is a non-profit, publicly listed company, registered in Tasmania, Australia and limited by guarantee. Dating back to 1833, the Society has held the largest regional agricultural show in Tasmania and the oldest regional show in the country, called since 1984 The Royal Launceston Show. This makes it one of only three regional shows with Royal decree in the nation (the others are Bathurst and Toowoomba); all other Royal shows are held in the state capital cities, with one on Norfolk Island. It is also illustrative of Tasmania's unique and dominant position, because of its temperate climate, as the largest pro-rata producer of agricultural products in the nation, some 7% of the national total from 2% of the population. Because of Tasmania's unique demographics, having the most highly decentralised population of all the states, the Royal Launceston Show serves about half of the state population.

The Royal Launceston Show has historical significance and wide community support. For many years the event was conducted on property owned by the Society at Elphin in Launceston. In 1996 the Elphin site was sold and the Show event relocated to Inveresk as part of the Better Cities initiative sponsored by the Launceston City Council, the Tasmania State Government and the Australian Federal Government.

Since that time the Society has had difficulty in conducting a cost effective event due to a lower level of infrastructure compared with that at Elphin. The buildings available to conduct the event on the leasehold site at Inveresk have been fewer in number, and provide less floor space than at Elphin. This will be further compounded in the future due to the Launceston City Council's decision to sell the Exhibition Building at Inveresk. This building has previously been used by the Society to house a significant part of the event.

It has been necessary for substantial temporary installations such as an electrical distribution substation, and also toilet facilities, to be installed at Inveresk each year. These facilities were to have been provided on a permanent basis at Inveresk as part of the agreement for the Society to relocate the show event to that site. Unfortunately these facilities have never been installed. The Society has incurred an additional average annual cost over the past seven years in respect of these facilities, which has created financial stress.

On 2 December 2004 the Society was placed in voluntary administration and Mr. Steven Hernyk of Deloitte Touche Tohmatsu was appointed Administrator. That decision was taken by the Directors due to cash flow difficulties experienced following a delay in the receipt of a significant refund of GST overpaid in relation to prior-year activities, a refund which ultimately proved less than half of that expected. The overall financial position had also deteriorated since moving to Inveresk, as noted above.

Nevertheless, the Society's agricultural show and other events operate as profitable activities by themselves, and the company has a positive net worth, so the expectation is to soon exit Administration under a Deed of Company Arrangement (DoCA) and to return to normal business with a strong and innovative business plan, a revitalised board and with a restructured internal organisation.

(Posted April 1, 2005)


The 15 Year Cycle of Robotics

“Every single thing we buy or use is inferior to something else that has been invented or developed and never made it to the market.” This is a rule of thumb that I reiterate in my marketing sessions when turning around failed technology companies. It leads to several conclusions. For one thing, it means that better or best technology does not automatically determine commercial success or technological progress. The former is important because of the simple economics of technology: development takes money, and without it, technology development slows. The latter suggests that society does not always progress in the best direction, or that it does not progress as rapidly as it could. More importantly, there is another conclusion: something other than the inventiveness and genius behind technology development determines its efficacy and influence. Any successful entrepreneur knows that. In the real world, pragmatic things like finance, personalities, market demands and contacts really do play a big role in the way our society progresses, or fails to progress.

Technology is an interesting thing. I once read that there are three complete Atlas rockets still around: all the bits and pieces of those early rockets that helped take mankind to the moon. But there is no possibility of ever launching them, were that ever desired, because the knowledge to do so has gone. It resided in the brains of individuals, the user manuals and procedure manuals notwithstanding, and those people have retired, moved to different jobs, or passed away. This means that technology really lives in the brains of people. When they die out, as in the case of the Incas, the technology fades. So we can no longer understand the Inca writings, even though they are their own specification manuals.

This leads me to my experience with mobile robotics and the recent comment by Rodney Brooks, Director of the AI laboratory at MIT, that in another 15 years robots will be everywhere.

At a conference in Boston about a decade ago, I heard my friend Joe Engelberger, the “Father of Robotics”, in his keynote address, admonishing the audience, mostly academics, that they had contributed little to the progress of robotics. I cringed and the room went quiet. Joe was close to retiring from a field he co-pioneered in around 1960 when he invented the industrial robot or “robotic arm” with George Devol, and it seemed to me that he was venting a lifetime of frustration. He too, even back then, was decrying the lack of fecundity with robotics in the world. While I was embarrassed at the time, today I understand completely. Let me explain.

In 1979 I invented and released a mobile robot product called the Tasman Turtle. It was a significant technical achievement for the time, and was an instant commercial success. Until then there had been perhaps only four robots in the whole world, each a pioneering research robot.

The first was Grey Walter’s set of cybernetic mice or turtles in England in the early 1950s, little self-contained thermionic-tube driven vehicles that demonstrated reflexes like attraction to, or avoidance of, light. They retired to a small safe hutch as their response when lights were turned on or off in their environments. Those historical devices have now retired to Queensland, Australia.

15 years later, after the advent of transistors, came the Johns Hopkins Beast, the Stanford Cart and Shakey. All were larger and more sophisticated, with better sensors examining the world, yet able to do about the same as Walter’s machines, except not as fast. The Beast traveled up and down corridors looking for power outlets so it could recharge when its batteries got low. That was its life. The Cart used clever stereo vision to identify objects it needed to negotiate in order to reach a destination, taking half a day to travel 20 yards and stopping for an hour or so at yard intervals to process the huge amounts of visual data with its (by today’s standards) tiny computer. Shakey implemented the world’s first task-oriented algorithms to move objects around in ways it worked out for itself, in order to achieve an outcome such as stacking blocks on top of each other, also stopping sometimes for a day while new software was loaded for the next phase of the activity.

Another 15 years later, I released the Tasman Turtle, named after its place of birth in Tasmania. Just ahead of the Tasman came the Terrapin and the General turtles, one too simple and one too complex, and neither successful in its applications. After the Tasman came the Valiant, another Brit, which is still around; but the Tasman was the first successful mobile robot, and there are some lessons in that.

The Tasman was used in education to teach students mathematics, algebra, geometry, computer programming, computer science and problem-solving skills in general. In the classroom it used a special computer language called Logo, developed by Seymour Papert at MIT, which was actually a complex and complete language, structural in style like Pascal, with recursion and procedures, and with the precursors of what are now called objects, but presented in a way that even very young students could relate to and use. The power of programming a computer and seeing the results in the actions of a robot gave instant motivation and learning feedback to the user, and indeed to the observing teacher. Kids would not leave school because they wanted to play with the Tasman, and as long as they played, they learned, unlike a similar phenomenon today, where kids play for hours with video games of little educational value.

Even though it was tethered to the computer, the Tasman had speech recognition (one could command it by talking to it), speech synthesis (it would talk back to you), a digital compass for orientation, touch sensors for interaction with its world, and a pen in its belly so that it could trace the path of its travels; it could be programmed to draw, say, a square, a triangle or a circle (teaching right angles, the internal-angle rule for triangles and integrals respectively). It could also map out its environment in a simplistic way by touching the walls, wander with purpose using this map, and, amazingly, learn. One demonstration program had the Tasman illustrate Pavlovian or classical conditioning by learning to avoid bumping into objects from warnings given to it when tapped on the back (the rear touch sensor). So it could do roughly the same as the earlier robots, in real time and for a few hundred dollars. It had to be a few hundred dollars because the market it was designed for, the education market, is extremely cost-sensitive. Numerous hobby robots from the early 1980s, like Elami, Chester, Topo and Omnibot, derive directly from the Tasman.
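The geometry lessons can be sketched in modern code. The following Python simulation is my own illustration, not the Logo that actually ran on the Tasman: it traces the classic square exercise (in Logo, REPEAT 4 [FORWARD 100 RIGHT 90]) and shows why the figure closes, because four 90-degree turns sum to a full 360 degrees.

```python
import math

def trace(commands, start=(0.0, 0.0), heading=0.0):
    """Simulate a Logo-style turtle and return the points it visits.

    commands: sequence of ("forward", distance) or ("right", degrees).
    heading is measured in degrees, counterclockwise from the x-axis.
    """
    x, y = start
    points = [(x, y)]
    for op, arg in commands:
        if op == "forward":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            points.append((x, y))
        elif op == "right":
            heading -= arg  # turning right decreases the heading angle
    return points

# REPEAT 4 [FORWARD 100 RIGHT 90]: the classic Logo square.
square = [("forward", 100), ("right", 90)] * 4
path = trace(square)
# The turtle ends (to floating-point precision) back where it started,
# because the exterior turns sum to 360 degrees.
```

The same loop with three FORWARD/RIGHT-120 pairs draws a triangle, which is how the internal-angle rule comes out of play.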

Another 15 years and I completed a production prototype of a very different type of robot, a smart autonomous household vacuum cleaner. It was called D’Entrecasteaux and was developed in France between 1991 and 1993 for Moulinex, at the time an important international appliance manufacturer. This machine was controlled by one switch, “on/off”. When set down on the floor of a house, it immediately mapped the room around it with an intelligent rotating radar-like sonar sensor, keeping the map current every second or so. People and pets moving around were tracked, and furniture being moved was recognized and registered. Stairs were detected and avoided. Then D’Entrecasteaux planned the best way to systematically traverse the room in intelligent parallel paths, much as a human would, in order to clean all of the exposed surface. This path was also determined dynamically, recalculated about 10 times a second. If an item of furniture was moved after the robot had cleaned around it, it would go back and do that area. If it encountered a person standing in the way, it dealt intelligently with that; if an area was particularly dirty it would progress more slowly or even double back to do it again; and if its batteries got low it would return to the starting position to recharge, then come back to the exact spot to continue. This was a very smart machine, and, amazingly, in the early 1990s it could be mass-manufactured for an added technology cost of under $100.
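D’Entrecasteaux’s real planner was sensor-driven and recalculated continuously; none of that is reproduced here. As a deliberately static sketch of just the parallel-lane idea (a "boustrophedon" sweep, with all function names and numbers my own assumptions), the core path generation might look like:

```python
def coverage_lanes(width, height, lane_width):
    """Plan alternating up/down sweep lanes covering a width x height
    rectangular room, one lane per strip the cleaning head covers.
    Returns the waypoints the robot would visit, in order."""
    waypoints = []
    x = lane_width / 2.0          # centre line of the first lane
    going_up = True
    while x < width:
        if going_up:
            waypoints += [(x, 0.0), (x, height)]
        else:
            waypoints += [(x, height), (x, 0.0)]
        going_up = not going_up   # reverse direction each lane
        x += lane_width
    return waypoints

# A 4 m x 3 m room with a 1 m cleaning head: four lanes, eight waypoints.
lanes = coverage_lanes(4.0, 3.0, 1.0)
```

The dynamic version the article describes would rebuild this plan around newly observed obstacles many times a second rather than once up front.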

The new technology for D’Entrecasteaux was developed from 1984 to 1987, also in Hobart, Tasmania, under Australian Federal and State Government research grants of around $120,000 (based on earlier work by Jim Crowley, then at Carnegie Mellon University in Pittsburgh, under an industrial alliance contract to Commodore Business Machines in Dallas, Texas, where I was CTO). A preliminary prototype called Florbot was prepared for General Electric Plastics in Pittsfield, MA in 1989 and released at the 1990 Domotechnica trade show in Cologne, Germany. For GE it was a marketing exercise only, and Moulinex never had the financial capacity to release theirs.

There were no huge defense contracts, no large investor funds. It was the history of building sophisticated functionality for cost-sensitive markets in education and toys, combined with the need to be commercially successful to survive, that created the ability to eventually do the same in industrial and other serious robotics.

Now back to my point. The foregoing appears to identify (approximately) an intriguing 15-year rule for robotics: 1950 (Grey Walter), 1965 (Shakey and co.), 1979 (Tasman Turtle), 1991 (D’Entrecasteaux), 2004 (Rodney Brooks), where every 15 years or so it is predicted that robots will be everywhere in another 15 years. Yet it has never happened. The question is why, and the answer has two parts.

The first part of the answer stems back to my introduction and includes the factors that apply to any technology, not just robotics. They are the resources, motivation, and all of the human, commercial and product aspects of technology development. That is why it is more than genius that determines the successful take-up and proliferation of a technology. Technology lives in the brains of people, and people are unpredictable. Someone with greater motivation can overcome a lack of resources, for example. Great technology with no attention to the demand specifications of a market can produce laboratory toys that no one buys; people in positions of power and prestige often direct developments off-track towards their own agendas; and elegant solutions are usually superior to brute-force approaches that produce over-engineered and therefore costly products which amaze only the developers. There are examples of all of these in the history of robotics, some identified already.

What is far more interesting is the other set of reasons why the robots are not here yet, and these are specific to robotics. There are two. Firstly, robotics is extraordinarily difficult. A robot is essentially an artificial person. The difficulty is always underestimated. Any robotic application requires autonomous mobility, which in turn requires smart sensors, mobility mechanisms, thinking, navigation and guidance, path planning, object detection and avoidance, object recognition and an understanding of objects' real-world properties, self-protection, energy maintenance and so on. Then, to be useful, a task or application needs to be inserted on top of all that and integrated in such a way that the mobility functions are subservient to the task. These are greater specifications than for perhaps any other technical challenge.

Secondly, and more importantly, unlike almost any other technology, robotics has a built-in competitor. Imagine a new biotechnology, say a cure for cancer that really works. The asking price can be anything, because there are no alternatives. Imagine a new communication technology that presents a thousand-fold increase in bandwidth for a thousandth of the price. Again, checkbooks with unlimited upper limits would be ready to acquire it. Imagine a robot that can clean the floor, cook the meals, wash the car and mow the lawn. Unlike the previous examples, each of which has no competition, the competitor to this great robot is human labor. And human labor has a well-established cost structure. If the new robot costs more than the human to carry out the same tasks, our nature is to "buy" the human rather than the robot. And that has been borne out time and again in the history of robotics. Several random-wandering household vacuum cleaners costing $3,000 or more are on the market, and sales have been minuscule.
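The labor-competition argument is, at bottom, simple arithmetic. A toy break-even calculation (the prices and wage below are my own illustrative assumptions, not figures from the article):

```python
def breakeven_weeks(robot_price, hourly_wage, hours_per_week):
    """Weeks before a robot's purchase price equals what the same hours
    of human labor would have cost (ignoring maintenance and energy)."""
    return robot_price / (hourly_wage * hours_per_week)

# A $3,000 robot vacuum versus cleaning help at $15/hour, 2 hours a week:
expensive = breakeven_weeks(3000, 15, 2)   # 100 weeks, nearly two years
# A $300 robot against the same labor breaks even in 10 weeks.
cheap = breakeven_weeks(300, 15, 2)
```

At the hypothetical $3,000 price the buyer waits two years just to match the cost of hiring a person, before counting the performance gap, which is exactly why "buying" the human wins.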

Not only is there an established price point for a robot because of the human competitor, there is an established performance standard. Roomba is a recent robotic vacuum cleaner that meets a realistic price point of around $300, and while its sales have been better than those of the more expensive machines, it reaches nothing like the multibillion-dollar opportunity that market research predicts. Part of the reason is that it uses a simplistic path planner that, instead of cleaning efficiently and completely like a human does, spirals and doubles up on paths inefficiently. It was not a surprise to see it listed among useless technology products by some reviewers, especially since the developers, iRobot, had been privy to the D’Entrecasteaux robot. Such is the human factor!

Nearly all of my efforts in smart mobile robotics have been successful in a technical and commercial sense: the Tasman by serendipity, the others by hard work and hard-won experience. The rules of robotics are rigid. To Asimov’s Laws of Robotics (there are four, because he added a “group” law), perhaps the above commercial rules should be added.

In July 1997, after almost two decades and some 200 robotic products, I turned to fixing failing technology companies, and have generated innovative techniques for doing this with guaranteed success and without sacking people, all gathered from the experience of succeeding in the toughest business of all: robotics. Yet even now I see the same problems in the robotics business, and I wonder. Robin Murphy, a professor of robotics in Florida, told me recently that there is an increase in robotics activity, but she sees the startup companies making the same old mistakes.

Robotics truly has the potential to revolutionize our lives, for instance in medical health, which affects all of us, or where robots could be cleaning up antipersonnel mines cheaply and efficiently. But until all of the factors mentioned in this article are addressed, the 15-year cycle will prevail.

In 15 years' time, I predict, the then spokesperson for mobile robotics will predict that robots will be everywhere in another 15 years' time.

Allan Branch

(Posted March 8, 2005)


A Different Analysis of Asimov's Laws of Robotics

Literary Expressions

Every now and then an expression from a work of literature, fiction or nonfiction, enters the language. Easily the most famous in recent times, for me, is "Catch-22", from Joseph Heller's book of the same name. It was interesting to hear that he almost called it something else, and to wonder whether "Catch-18" would have caught on in the same way. The same goes for Heinz's 57 varieties, not that there is much literature in marketing: their marketing team spent time working out a number that had the right sales ring to it, and no time at all counting the actual number of product varieties or ingredients. Today we use "Catch-22" to mean a circular dilemma, the age-old chicken-or-egg challenge; and a "Heinz variety" is a mongrel dog, a "bitza" (bits of this and bits of that). There must be numerous examples, almost all of Shakespeare probably, but it would be a sure bet that none come from the works of science fiction. That is, with the possible exception of one.

Isaac Asimov, renowned author and science fiction novelist, the champion of robotics in his fiction, created an expression that almost everyone knows, and which arises in surprising places. As an example, a few years ago I turned around an American advanced technology company called Denning Mobile Robotics, Inc., from commercial disaster, an exercise that enjoyed coverage in local US newspapers and a full-page feature in the Australian Financial Review, Australia's equivalent of the Wall Street Journal. Even that austere, formal publication ran a side box to the story describing Asimov's Three Laws of Robotics. Now I know that there are more than three, and I will get to that, but even those of us who know still think at first of the expression as "Asimov's Three Laws of Robotics", proving that it has imprinted on our brains.

Laws of Robotics.

This article is not a treatment of Asimov's Robotics Laws, but they need at least to be introduced to support the discussion later. The Laws, as numbered by Asimov, are listed here with their numbers, but in the chronological order in which they were introduced:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Zeroth Law

The fourth Law Asimov conceived was given the number zero, as in thermodynamics, where a law logically superior to, or encompassing, the existing ones was discovered after the number "1" had already been taken; or as in the numbering of the Julian/Gregorian calendars, which forgot to call the first year the zeroth year. Our first year of life is really our zeroth year, and only becomes a complete "one" on our first birthday anniversary, which means we should accurately have celebrated the second millennium not on December 31, 1999 but on December 31, 2000. And physicists are always quoting the strange-sounding "Zeroth Law of Thermodynamics".

There are references in the fiction to an Eleventh Law and a Minus One Law (of even higher order than the Zeroth).

Modified First Law

As a direct consequence of the new fourth (Zeroth) Law, the First Law needed to be modified for logical (or symmetrical) consistency:

"A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law."

And the same reasoning applied to the others, which had the reference to the Zeroth Law added to them.
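Read this way, the modified Laws collapse into one strict priority ordering: Zeroth over First over Second over Third. A toy sketch of conflict resolution under that ordering (entirely my own illustration, not anything Asimov formalized) scores each candidate action by the highest-priority Law it would violate, and picks the action whose worst violation is least severe:

```python
def choose_action(actions):
    """actions maps an action name to the set of Law numbers (0 = Zeroth,
    the highest priority) that taking the action would violate.
    Violating nothing is best; otherwise the action that breaks only
    lower-priority Laws wins."""
    def severity(violated):
        # min() finds the highest-priority Law broken;
        # infinity stands for "no Law broken at all".
        return min(violated) if violated else float("inf")
    # A larger severity score means only low-priority Laws are at risk.
    return max(actions, key=lambda name: severity(actions[name]))

# A robot ordered to do something harmful: obeying breaks the First Law,
# refusing breaks only the Second, so it must refuse.
decision = choose_action({"obey": {1}, "refuse": {2}})
```

The same function, fed an action that violates no Law at all, always prefers it, which is the benign case the stories rarely dwell on.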

Logic and Ethics.

There are interesting analyses of the numerous challenges to the validity of the Laws by serious and not-so-serious critics, and discussion of the derivation and evolving variations of the Laws throughout Asimov's early writings, at Wikipedia, the free encyclopedia, with links to almost everything, so I will not discuss that side of it. Some of the parodies have been fun to read and are excellent reminders that the laws of our invented societies, which depend on intangibles like opinions and beliefs, will never be like the rigid and universal laws of nature. There have been interesting discussions on this web site too.

And while I am interested in other analyses that seem to align the Laws with aspects of ethics, morality or religion, that is not my main concern either. As far as those go, in short, the efforts to expand the Laws since the first mention of the First Law seem to generate a hierarchy of ethical behaviors based on respect for living and sapient entities: for example, one key element of the Laws has been the move from the word "robot" to "human", to "humanity", to "sapient beings", to "living things".

What does interest me though is a third analysis that I have used over the years in my robotics seminars, but which has never been published. It is presented here for the first time. It ties in with the Branch definition of a robot, which has been published, and which is the start of the main discussion.

Robot Definitions

I did not like any of the stiff technical attempts at defining a robot, things like:

"A robot is a reprogrammable, multifunctional, machine or manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks."

See what I mean? So I devised my own:

"A robot is an attempt by mankind to make a machine that duplicates two or more intelligent aspects of advanced animals, necessarily including autonomous mobility."

Copyright 1979, Allan Branch

The "mobility" inclusion is due to the prime conditions for intelligence, per Moravec and Brooks. They suggest that mobility has been prerequisite and essential to the evolution of intelligence as we recognize it in higher organisms. There are no smart cabbages, for example, but there are smart, very smart in some cases, mollusks. It might surprise many to learn that an invertebrate can be as intelligent as some mammals: I am talking of the octopus, basically a shellfish related to oysters! An octopus can observe another open a jar to get food, and from that observation alone go straight to a jar and do the same. Many primates cannot do that, except through trial and error, not by observing and rationalizing.

Mobility does not include, say, an elevator going up and down, so a talking elevator is not a robot under this definition, nor is a robotic arm with its back-and-forth motions. Some products that I have developed and that I call robots are not robots by my own definition.

I like Joe Engelberger's fun definition of a robot better than all other definitions, and repeat it as often as I can:

"I can't define a robot but I know one when I see it!"

Real Life.

What does all this mean though, if anything, to real robotics? Being a roboticist has meant that I have had the greatest career of them all, creating the stuff of science fiction. For almost 20 years that is all I did, and I did it with gusto.

There were times of great joy. It is quite easy to show, for example, that I developed the world's first commercially successful mobile robot product (the Tasman Turtle, 1979), the first truly practical and functional autonomous navigation and guidance system (sensory-based Parametric Mapping, 1984), the first mass-produced robots (Elami and its derivative Omnibot, 1983-1986), and the first intelligent household robot vacuum cleaners (Florbot for General Electric in 1989 and D'Entrecasteaux for Moulinex in 1991). By the time I exited the industry in July 1997, I had conceived, developed, manufactured and sold more robots than the rest of the industry combined. Most of my robots were sold under the name of the client for whom I developed the product, such as Radio Shack, Tomy Corporation, Komatsu, Moulinex or Windsor Industries.

But despite the adventure and the fun, at times supplying the world with robots also caused me considerable angst. Robotics has a negative side to it. Two issues were constantly recurring themes, cropping up at conferences, keynote addresses and media interviews. Firstly, it was often thrown at me that I was building machines to replace jobs, and secondly I would be reminded of the inherent danger of machines that think for themselves and how they might harm people. Concerns about the former were based on valid social beliefs, but the latter was generated, perhaps, from seeing too many movies. Nevertheless, neither of these issues was unimportant to me, and I needed to address them in order to be comfortable with what I was doing and with who I was.

Replacing Jobs.

Technology always replaces jobs. It is a fact of progress and is the history of our civilization. Accusations, often disparaging, that I was putting people out of work were therefore difficult to defend. I thought about this long and hard, determining what the real result of new technology was. One first insight was that two different effects were being confused. One was the replacement of jobs; the other was the removal of jobs. Most detractors had the vague idea that because jobs were being replaced, a very real fact, the actual number of jobs was decreasing, something that is definitely not true. The reason for the confusion was perhaps the knowledge that in this specific industry, robots were performing the jobs instead of humans. Because robots have an anthropomorphic aspect to them, people had the perception that an artificial person instead of a real person was doing the job, instead of seeing the robot as just another machine like a lathe or a grader. Those machines, even highly automated ones like a numerically controlled lathe, have operators, and so does every robot. The critical word "machine" is in my definition of a robot for this very reason.

To tackle this misperception, I made an analogy with the advent of automobiles. The arrival of cars meant forever the end of all the jobs associated with horse-drawn transport. Gone were stable hands, grooms, carriage makers, trainers, farriers, horse doctors, saddle makers, and so on. But in their stead came mechanics, chauffeurs, car designers, racing car drivers, road engineers, the petroleum industry, the rubber industry, car accessory manufacturers, and so on, and so on. Technology always creates more jobs than it displaces. In another example, since the beginning of the Industrial Revolution, the number of people working in agriculture has decreased from about 80% to 10% of the population. But because the population is so much greater, that 10% consists of more people working in those jobs than ever before.

So the problem was not one of the incoming technology, but one of retraining. It is a social problem of making sure the education system adjusts as quickly as the technology, so that people have the right skills at the start and can be trained again, (and again and again if necessary), for the current job types. A possible solution is perhaps to teach people the skills that facilitate retraining throughout life, like teaching people how to use libraries to garner new knowledge as needed rather than teaching a fixed syllabus. The days of 6 years of indenture for an unchanging lifelong occupation have been gone for 200 years. Although the idea of retraining is well known today, it is rarely implemented effectively and never proactively. Jobs are definitely lost, but retraining secures a new job within the new technology industry for any displaced employee. Not only that, but almost certainly a new job with more prestige and perhaps therefore with greater pay. The retrained employee could claim that instead of being a floor cleaner at a supermarket they were now an autonomous robot operator.

I had one other comment, stated facetiously, that the jobs would not be replaced if it were the employee who bought the robot instead of the employer. What if, I would say, an employee bought a robot and sent it to work to do their job while they went off to the beach for the day, collecting the paycheck at the end of the week? But our economic system is not structured like that. Employees do not think like that, although there is no reason why they could not. It is the employer who employs capital to stay current with and enjoy the benefits of new technology.

Harming Humans.

The other complaint was about robots harming humans. This was easy and I had a one-sentence answer, which will bring me back to Asimov and his Laws. I would simply say that I have never built a robot without an OFF switch!

Laws of good product design.

My own statements about robots having on/off switches started a train of thought. As shown, I already thought of a robot as just a machine or a tool, but this reinforced it. Having a power control switch, (in fact some of my industrial robots had to have duplicate and redundant emergency disable switches), is akin to ensuring a machine does not harm a human. So the direction of my thoughts should be becoming apparent. (An emergency off switch is sometimes called a "kill" switch, or even a "dead-man" switch, for good reason.)

One thing I dislike is a modern computer that has a power switch that does not in fact turn off the power. It is one of my pet hates. Some programmer has decided to let the computer decide for itself whether to follow the human instruction or not. I know why this is the case: so that a sudden shutdown does not harm the computer by cutting the power while it is in some vulnerable state. To me it just shows the limitations of computer designers who cannot make a machine that can be switched off the instant a human wants it to happen. If my industrial robots decided for themselves whether to obey the emergency off switch, I cannot imagine the consequences!

In science fiction, self-destruct mechanisms that cannot be stopped after initiation are devices that do not obey their human's commands. It is a classic technique for generating tension, as portrayed par excellence in the original Alien movie. "You now have 7 minutes and 45 seconds." But almost every device I am aware of inherently obeys the instruction given to it, from the trigger button on an electric drill to the countdown sequence of a rocket launch. Many integrated safety systems of a similar nature exist to protect the user. An elevator door close button or a garage door will first determine whether it is safe to proceed, using "electronic eyes" to make sure a human is not in the way; safety guards on cutting equipment with interlock switches are another example.

Of course in the case of my computer, it is always possible to yank the power cord from the socket, so there is a default on/off switch. While the computer is trying to protect itself from harm by not shutting down until its internal state is safe, it is not thinking of the user, it is actually thinking of itself; thinking in the sense of protecting itself. In fact after it has decided, it often then says something like, "It is now safe to turn off the computer." Beyond computers, an example of another machine looking after itself would simply be, say, an automobile that is constructed to withstand the forces encountered in its normal operation, or with even greater strength than needed, for abnormal operation such as a crash. While a computer that looks after itself in my example is frustrating, it can be seen that the concept is almost trivial, (a car that does not lose a wheel every time it goes around a corner), and an inherent aspect of all machine design.

The upshot of this is that there seems to be a set of rules for good product design, not always adhered to by inexperienced or sloppy designers, and almost never adhered to by software programmers. The rules are something like (I have spent no time in ensuring the definitions are robust; just the idea is being presented):

1. The device should be safe so that it does not harm the operator or user (an electric drill that does not electrocute you when you pick it up for example).

2. The device should be constructed to at least suit the task it is designed for (an electric drill that does not shatter when it does its drilling job for example).

3. The device should be controllable by a human, with no surprises (an electric drill that stops when the power switch is released for example).

And it would be easy enough to envisage another one, a design rule that prevents a device from running amok, by using a safety gadget like the "dead-man switch" on a train: a switch that has to be constantly gripped by the user for the unit to work.

0. The device should be safe when left unattended so that it does not hurt anyone (an electric drill that does not start by itself when put away in a drawer for example).
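As a toy illustration only, the rules above can be sketched as a simple control loop (all names here are hypothetical, invented for this sketch, not from any real product): the device runs only while its dead-man switch is actively gripped, and a safety interlock, like the "electronic eye" mentioned earlier, overrides the operator.

```python
class Device:
    """Toy sketch of the design rules above; names are hypothetical."""

    def __init__(self):
        self.running = False

    def update(self, trigger_held: bool, path_clear: bool = True) -> bool:
        # Rule 3 analogue / dead-man switch: the device obeys the human with
        # no surprises -- it runs only while the trigger is actively gripped.
        # Rules 0 and 1 analogue: it also refuses to run when the safety
        # interlock reports that someone is in the way.
        self.running = trigger_held and path_clear
        return self.running


drill = Device()
print(drill.update(trigger_held=True))                     # True: runs while gripped
print(drill.update(trigger_held=False))                    # False: stops when released
print(drill.update(trigger_held=True, path_clear=False))   # False: interlock overrides
```

The point of the sketch is only that the safe state is the default: the device must be continuously told to run, rather than continuously told to stop.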

Of course in real life we are interested in more than just harm to humans, such as damage to other living things like forests or our pets, and even extending to the damage of inanimate things, like our buildings, as when a runaway car smashes into our house.

Ethics versus design.

It is clear from this discussion that what Asimov has really done with his Laws was simply redefine the laws of good product design for a particular machine, in this case a robot. In fact, not just for robots in general but specifically for the fictional robots in his stories; he never suggests they should apply to real-world robots, a point missed by some critics who take it all too seriously.

The following is a table of this comparison:

Law Number | Asimov's Robot Laws | Good Product Design
Zero | A robot may not injure humanity, or, through inaction, allow humanity to come to harm. | The device should be safe when left unattended so that it does not hurt anyone.
One | A robot may not harm a human being, or, through inaction, allow a human being to come to harm, except where this would conflict with the Zeroth Law. | The device should be safe so that it does not harm the operator or user.
Two | A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Laws. | The device should be controllable by a human, with no surprises.
Three | A robot must protect its own existence, as long as such protection does not conflict with the Zeroth, First or Second Laws. | The device should be constructed to at least suit the task it is designed for.


There is something about a robot, (which is after all really an artificial person), that prevents us from seeing it at first as just a machine. Was this the rationale behind the term "the iron horse" when steam engines first appeared, those self-powered machines, which must have seemed animated to the humans seeing them for the first time? Many of the wheeled hobby robots I designed in the early 1980's were taken home from work to play with, I mean to test, and while we were all fascinated with them, my pet collie, Kari, ignored each of them. That is, until I took home a prototype of Mr. Walker, a two-legged robot that really stepped on two legs, changing its center of gravity to balance on each step, stepping over objects, up stairs, and so on. Kari attacked this robot. Something about the fact that it moved on legs instead of wheels was intrinsic to the dog's brain, telling it that this was living. At some point the devices changed, to the dog's brain, from being just machines that could move to being like living things. Our innate perceptions of the life-like do not extend to non-walking robots, I guess.

Copyright March 1, 2005, Allan Branch

(Posted March 1, 2005.)


"'Every single thing we buy or use is inferior to something else that has been invented or developed and never made it to the market.' This is a rule of thumb that I reiterate in my marketing sessions when turning around failed technology companies. It leads to several conclusions. For one thing, it means that better or best technology does not automatically determine commercial success or technological progress. The former is important for the simple fact of the economics of technology: it takes money, and without it, technology development slows. The latter suggests that society does not always progress in the best direction, or that it does not progress as rapidly as it could. More importantly, there is another conclusion: it means that there is something other than the inventiveness and genius behind technology development that determines its efficacy and influence. Any successful entrepreneur knows that. In the real world, pragmatic things like finance, personalities, market demands, contacts and such really do play a big role in the way our society progresses, or fails to progress."

This excerpt from an article describing the historical cycles observed in the mobile robotics industry and their causes is to be published in the German Edition of MIT's Technology Review in March 2005.

The article by Allan Branch, CEO of Denning Branch International, and translated by Tobias Hürter, looks at many issues that help determine the success of any new technology products (not just robots) outside of the actual innovation of the technology.

