IBM Watson and Artificial Intelligence
This is the third discussion of the semester, and I once again invite you to participate. Read the
following information and then add your comments to the discussion. You may pick a topic from
the information below, do some of your own research, and write your comments.
This discussion is about something very interesting: artificial
intelligence (abbreviated as AI). That phrase has been
around for a long time, and a lot of people have been
disbelievers for many years. Those people (and all the rest of
us) are finally starting to see products and applications that
are built using artificial intelligence technology. The
software that drives AI is amazing. Apple’s Siri, Amazon
Echo, and Google Home use AI. Even the Google search
engine now uses AI to try to predict what you are searching for.
I’m sure you’ve seen this. It was a bit eerie at first. There are thousands of other applications of
AI, some of which we don’t even know about but are there anyway. AI has arrived!
This discussion wouldn’t be complete without mentioning the computer hardware that drives AI
systems. The hardware company that tops that list happens to be IBM, a company that was the
dominant company (by far) in the computer industry for many years. Times have changed and
IBM is not “Number 1” any more. They’re in a very competitive industry and are investing
heavily in AI technology. Their system is called Watson, and is probably best known in “pop
culture” for winning the Jeopardy TV show a few years ago. Watson is much more than a game
show champion, of course, and we’ll find out more about it as this discussion takes shape.
This discussion looks at the combination of hardware and software that’s behind one of the
major segments of the computer industry.
Let’s rewind the clock a little bit. So here we are in the late 1880s. The United States Census
Bureau is getting ready to conduct its once-in-a-decade count of the number of people in the
country. Why is this done every 10 years? Simple: The United States Constitution says we have
to. For you history buffs, Article 1, Section 2 says:
[An] Enumeration shall be made within three Years after the first Meeting of the
Congress of the United States, and within every subsequent Term of ten Years, in such
Manner as they shall by Law direct.
Why is this done? It’s the only way that our government can determine who will represent us in
Congress. The House of Representatives districts are determined by the census.
OK, back to the late 1880s. The Census Bureau
recognized that it had a potential problem in the making.
It had taken them over 7 years to complete the 1880
census. Not surprising, really. Everything was done by
hand. No machines (yet). Here’s what the 1880 census
form looked like.
Here was the looming problem. There was such a huge
population explosion in the United States during the
1880s that the Census Bureau started to wonder if they
could complete their 1890 counting in less than 10 years.
If they couldn’t, they would be in violation of the Constitution. They needed a faster way to count the
people.
They found their solution in one of their bright
employees, Herman Hollerith. He designed and built a
mechanical machine that could input data from stiff pieces of paper with holes punched in them,
tabulate that data, and show the results. Along the way, he invented the machines that punched
the holes and the machine that displayed the answers. A complete solution! And, you know what,
it really worked. So well, in fact, that the census counting was completed in a matter of months.
Not bad.
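The idea behind Hollerith's tabulator is easy to sketch in modern terms. Here is a toy version in Python (the column meanings are invented for illustration, not Hollerith's actual card layout): each "card" marks holes in fixed columns, and tabulating is simply counting the holes in each column.

```python
# A toy sketch of Hollerith-style tabulation (not his actual encoding):
# each "card" is a string where 'X' marks a punched hole in a fixed column.
# Column 0: female, column 1: foreign-born, column 2: farmer (made up here).

FIELDS = ["female", "foreign_born", "farmer"]

def tabulate(cards):
    """Count punched holes per column, like the dials on a tabulating machine."""
    counts = {field: 0 for field in FIELDS}
    for card in cards:
        for i, field in enumerate(FIELDS):
            if card[i] == "X":
                counts[field] += 1
    return counts

cards = [
    "X.X",   # female, farmer
    ".X.",   # foreign-born
    "XX.",   # female, foreign-born
]
print(tabulate(cards))  # {'female': 2, 'foreign_born': 2, 'farmer': 1}
```

The machine did this electrically, of course, but the logic is the same: read each card once and bump a counter for every hole found.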
Hollerith was smart in other ways, too. He quit his job and started a company he named The
Tabulating Machine Company, and that’s exactly what they built. In 1911 the company merged
with two other firms to form the Computing-Tabulating-Recording Company, and in 1914 a new
executive named Thomas Watson joined it, soon becoming its president (remember that name). In
1924, the company was renamed International Business Machines – IBM. They built
electro-mechanical machines for a while, and then ventured into electronics.
World War II came along, and the U.S. Government invested heavily in large electronic systems
that could help with the war effort. Two applications stood out: breaking the German secret
codes, and calculating the trajectory of the guns that were aimed at enemy targets. The first
electronic computers were the result of those expenditures. IBM wasn’t the inventor of those –
that work happened at some large universities, including Harvard and the University of
Pennsylvania.
Post-war America brought great prosperity, another population growth period (the “baby
boom”), and the need for electronic machines that could process the ever-growing amount of
data. We had to count people (the census again), money, inventory items, customers, bank
accounts, etc. That’s when IBM stepped up again, this time under the leadership of Thomas
Watson, Junior. The company started to build electronic “mainframe” computers that couldn’t be
beat. IBM dominated the field. So much so that they later faced monopoly issues with governments
around the world. They won some and lost others, but still maintained a leadership role in the
computer industry for several decades.
IBM created their own operating systems, OS/360, MVS, z/OS – things you’ve probably never
heard of before. Most of the application programming was business-oriented, using a
programming language created in 1959 by an industry-government committee sponsored by the
U.S. Department of Defense – the COBOL language. That language, heavily influenced by the
earlier work of Rear Admiral Grace Hopper, worked on many types and brands of computers and
was remarkably efficient at business data processing. Some people argue that it still is. If you
wanted a fast business program, you chose COBOL. In all the years since, more
lines of COBOL code have been written than any other
language (such as Java, C++ and a myriad of others). Your
cell phone records, bank statements, insurance bills, health
records, etc. are very likely to have been created using COBOL. I have a good friend who just
retired after spending 35 years writing large applications in COBOL. (An interesting point:
COBOL is hardly taught anywhere in the United States these days, but is still a major
programming language around the world. It’s just that the programming is now done in other
countries. As my friend says, “The COBOL jobs didn’t go away, they just moved elsewhere.”)
IBM dominated software development with COBOL applications.
IBM built less expensive “mid-range” computers in the 1970s and was very successful in
bringing computer technology to smaller businesses. You no longer had to spend millions of
dollars on a computer, but less than $100,000. (I know that’s still a lot, but it opened up the
availability of electronic data processing to many more organizations. I started working in the
computer industry when computers were that size and price. I worked for a computer
manufacturer in Hayward and our systems competed against the IBM mid-range systems.)
In the early 1980s, IBM recognized that the invention of the microprocessor by Intel Corporation
meant that computers cost less to build, so they decided to take a stab at the “personal computer”
market. Other companies had already designed small computers, most notably Altair, Apple and
Radio Shack. IBM came along and took a different approach. They chose a separate company,
Intel, to build the central processing units for their new systems, and chose other companies to
build memory, disk drives, monitors, printers, etc. IBM’s job was to put everything together in a
package called “The IBM Personal Computer.”
IBM needed an operating system to run their personal computers, and they set out to find another
independent company to design it (they were so used to making software for large systems that
they thought it would be better to let someone else do the design for small computers). They did
some searching and eventually found a VERY small company in the Seattle area named Microsoft,
run by a couple of young programmers – Paul Allen and Bill Gates. The product they supplied was
named the Disk Operating System (DOS); its core was actually purchased from another local
company, Seattle Computer Products, and adapted for IBM’s hardware. Microsoft didn’t want to
sell the product outright, so they decided to “license” it to IBM. This was the smartest business
move of all time, because Bill Gates is now one of the richest people in the world. Paul Allen is right up
there, too (According to the latest Forbes ranking, he’s the 26th richest person in the U.S.).
You’re in this class right now because of the efforts of the entrepreneurs and inventors of that
time, including Gates, Allen, Steve Jobs and Steve Wozniak (Apple), Nolan Bushnell (Atari),
Robert Noyce and Gordon Moore (Intel), Larry Ellison (Oracle), Jack Kilby (the inventor of the
integrated circuit while working at Texas Instruments), Ted Hoff (a co-inventor of the microprocessor
while working at Intel in 1971), and many others. A lot of them were influenced by Bill Hewlett
and Dave Packard (“H-P”), both graduates of Stanford University and the inventors of many
computer technologies that are still in use today.
The “IBM PC” was a smash hit. IBM assembled the computers,
even though they didn’t build all the hardware. The main
software for these computers was DOS. For a while, IBM called
it “IBM DOS,” but in reality it was Microsoft DOS. IBM’s main
goal was to put the “IBM brand” on these computers. They made
their specifications open to the public. Lots of companies started
to build “IBM compatible” products. I worked for one of those
companies for many years. Eventually, companies like Dell,
Compaq, Hewlett Packard, and others started to build complete PC systems and ended up giving
IBM competition in the business they had created in the first place.
Even Microsoft recognized its limits, and they made their software specifications open to the
public. The company I worked for also made “Microsoft compatible” software. (Apple, by
contrast, never did make its hardware and software specifications open to the public. They chose
to always build “proprietary” systems. As a result, Apple had less competition, which is probably
the main reason why they still charge more for their products than other companies do. Sure, their
products are popular, but they’re still proprietary.)
The “good times” continued for a long time. Thanks to Xerox, local-area computer networks came
into existence (Xerox called their invention “Ethernet” – you’ve read about that in the book, and I
can guarantee that you use it all the time), which allowed people to have processing power on
their desktop (or laptop) while still being connected to other people and computers. This was the
polar opposite of what IBM mainframes were all about – the centralized processing of all data. It
was only natural that someone would build a computer to tie the personal computers together to a
central place where data was stored (but not processed), and that’s what a server is. IBM was a
player, but not the dominant one. Companies like Sun Microsystems and Silicon Graphics took
over that segment of the computer marketplace. (By the way: Sun is now owned by Oracle, a
major software company headquartered in Redwood Shores.)
Microsoft designed Windows, which ended up being the replacement for DOS. At one point, the
Windows operating system controlled 95 percent of the personal computer marketplace (which,
of course, brought monopoly issues to Microsoft). The whole market started to move away from
the large mainframe systems towards networks and servers. The Internet came along, and that’s
full of servers. Personal computers got cheaper and cheaper. IBM found that it couldn’t compete
in that business, so it sold its personal computer division to a Chinese company named Lenovo.
They divested other divisions, too. Their mid-range systems stopped being as popular as they had
been.
IBM lost its dominance. They had to do something if they were going to stay in business – and,
they hope, regain their place atop the computer world.
They chose two areas: virtualization and AI. We will discuss virtualization later on, but today’s
main topic is AI.
Artificial Intelligence was first discussed as far back as the 1940s. The English mathematician
Alan Turing postulated that electronic machines could solve virtually all math problems and
would eventually be able to “think.” By the mid-1950s Allen Newell, Herbert Simon, John
McCarthy, Marvin Minsky, and Arthur Samuel became the leaders of AI research. Their initial
successes led to over-confidence. Simon said, “machines will be capable, within twenty years, of
doing any work a man can do.” Minsky wrote, “within a generation ... the problem of creating
‘artificial intelligence’ will substantially be solved.” Unfortunately, it was harder than they
thought.
The math was harder than they imagined and the computers just weren’t fast enough to compute
all the formulas they needed to. For a long time, programmers kept trying to design a program
that could play chess and beat a world-renowned chess master. They designed better and better
algorithms to compensate for the fact that their computers weren’t fast enough to try all possible
alternative combinations to arrive at the answer. The “pop culture” interpretation of this was that
AI was all about playing games. Sadly, that misconception was perpetuated by Watson’s
appearance on Jeopardy.
Of course, the goal wasn’t to win games; it was to figure out if computers could change their
own programming as a result of doing something over and over, studying the successes and
failures, and eventually arriving at a new result. This came to be called “machine learning.” For a
while, most people thought that was a joke. But, you know, that turned out to be the breakthrough
that defined the whole AI movement.
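A tiny, modern illustration of what “change their own programming” means: a one-neuron perceptron (a learning algorithm from that same 1950s era) that adjusts its own weights after every mistake until it classifies the logical AND function correctly. This is a minimal sketch for illustration, not any specific historical program.

```python
# A minimal sketch of "machine learning": a one-neuron perceptron that
# adjusts its own weights after each mistake until it classifies the
# AND function correctly. No human edits the rule; repetition does.

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - predicted          # +1, 0, or -1
            w0 += lr * error * x0               # nudge the weights toward
            w1 += lr * error * x1               # fewer future mistakes
            bias += lr * error
    return w0, w1, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(AND)
predict = lambda x0, x1: 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in AND])  # [0, 0, 0, 1]
```

After training, the program behaves differently than it did before – and no programmer wrote the final rule; the repeated trials did.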
Even now, companies like Google are studying and applying this type of technology. Google’s
DeepMind group developed a program, AlphaGo, that plays the game of “Go” – a game again (!), and one
that is much more complex than chess. Google put a new spin on it. They have the computer play
against itself to come up with ways to automatically improve its own algorithms needed to win
the game. Some people hear the word “game” and still think that’s what it’s all about. It’s not –
it’s about the very nature of learning.
One of the phrases I’ve heard throughout my career is that “computers only do what humans tell
them to do.” I’m sure I taught that at one point. Not anymore.
I worked for a software company in the 1980s that had a product named “Automatic Program
Generator.” It was a great product. You could type a few commands, and the program would
write hundreds of lines of code that solved the problem you were trying to solve. It was sort of a
“parlor trick” though – our program could only solve a small number of problems. It didn’t have
a way to modify itself or to “learn.”
It’s a lot different today. Software can and does change itself, based on the different types of data
and processing it encounters. The math is still very hard, but problem solving techniques have
really advanced a lot in the past 60 years. And, the big breakthrough is that computers are
MUCH faster than before. Computers no longer need to rely solely on “clever” strategies to win
chess games (as human players do); they can evaluate an enormous number of possible moves –
far more than any human ever could – and then pick the best one. On top of that, machine learning
means that they can also learn better strategies the longer they play. It’s a double win.
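The brute-force search idea can be shown with a game far smaller than chess. This sketch plays the classic subtraction game (players alternately take 1 to 3 objects from a pile; whoever takes the last one wins) and examines every possible line of play – something that is actually feasible here, and only approximated in chess.

```python
from functools import lru_cache

# Exhaustive game search on a toy game: the subtraction game. Players
# alternately take 1-3 objects from a pile; whoever takes the last
# object wins. best_move() explores EVERY legal continuation.

@lru_cache(maxsize=None)
def best_move(pile):
    """Return (can_current_player_win, a_winning_move_or_None)."""
    for take in (1, 2, 3):
        if take == pile:
            return True, take            # taking the rest wins outright
        if take < pile and not best_move(pile - take)[0]:
            return True, take            # leaves the opponent in a losing spot
    return False, None                   # every move lets the opponent win

print(best_move(10))  # (True, 2): leave a multiple of 4 for the opponent
print(best_move(8))   # (False, None): multiples of 4 are losing positions
```

Chess programs apply the same search idea, but the tree is so vast that they combine deep search with evaluation heuristics rather than truly exhausting it.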
All of this was not lost on IBM. Several years ago, they created a major initiative to develop and
sell AI solutions – both hardware and software. The blanket name for their product is “Watson”
(clearly a tribute to the early leaders of the company). IBM’s catchphrase is, “The Power of
Knowledge.”
There have been two goals of the Watson initiative: to discover ways that machines learn, and to
build related hardware and software products for a wide variety of applications. Their business
strategy looks like it is really working.
One of the first Watson products is “Watson Discovery.” IBM’s web site describes it this
way: “Rapidly build a cognitive search and content analytics engine. Watson Discovery helps
developers quickly ingest data to find hidden patterns and answers, enabling better decisions
across teams.” I get that; it’s about patterns and decision making.
“Watson Conversation” is another major product. Here’s the pitch from the IBM Watson web
site: “Quickly build, test and deploy bots or virtual agents across mobile devices, messaging
platforms, or even on a physical robot to create natural conversations between your apps and
users.” That makes sense, too. Use machine learning to figure out better ways to communicate
with information systems.
With the “Watson Virtual Agent,” you can “quickly configure virtual agents with company
information, using pre-built content and engage customers in a conversational, personalized
manner, on any channel.” Ah, I get it, better (and less expensive) technical support systems. Or
even airline reservation systems. You can probably think of more applications…
This one is also interesting; the “Watson Knowledge Studio” lets you “teach Watson to discover
meaningful insights in unstructured text without writing any code.” What, no code? I’m in favor
of that! I’ve also noticed that “unstructured text” is becoming a major interest in computer
science. Indeed, one of the newer classes at College of San Mateo covers “NoSQL,” a type of
database management that deals with data other than the rigid “row and column” approach of
“relational database management.” If you are interested in studying a high “growth potential”
area in computer science, check this out.
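The row-and-column contrast can be made concrete with plain Python structures (the records below are invented for illustration). A relational table forces every record into the same columns; a document store lets each record carry only the fields it actually has.

```python
# Relational style: one rigid schema, missing values padded with None.
rows = [
    ("alice", 34, None),              # (name, age, email)
    ("bob", 41, "bob@example.com"),
]

# Document ("NoSQL") style: each record is self-describing and can
# differ in shape, with nested data and no schema change required.
documents = [
    {"name": "alice", "age": 34},
    {"name": "bob", "age": 41, "email": "bob@example.com",
     "interests": ["COBOL", "databases"]},   # extra nested field
]

# Querying documents means tolerating fields that may be absent.
emails = [doc.get("email", "(none)") for doc in documents]
print(emails)  # ['(none)', 'bob@example.com']
```

Real document databases add indexing, query languages, and replication on top of this idea, but the flexible-shape principle is the same.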
IBM has even created a mechanism for programmers around the world to use Watson technology
to create their own AI products. IBM calls this the “Watson APIs” (“API” is “application
programming interface,” a term that’s been around for many years that describes how a software
company can make some of its features available to other programmers, so people don’t always
have to reinvent the wheel.) IBM’s web site says, “Use Watson language, conversation, speech,
vision and data insight APIs to add cognitive functionality to your application or service.”
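To make the API idea concrete, here is a hedged sketch of what calling a Watson REST service over HTTP might look like in Python. The endpoint URL, version date, and credential format are assumptions based on IBM documentation of that era – check the current docs before relying on any of them. The request is only constructed here, never actually sent.

```python
import base64
import json
from urllib.request import Request

# Placeholder values: the real endpoint, version, and key come from
# your IBM Cloud account and the current Watson documentation.
API_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"
API_KEY = "your-api-key-here"            # hypothetical credential

payload = json.dumps({"text": "I love how simple this API is!"})
auth = base64.b64encode(f"apikey:{API_KEY}".encode()).decode()

# Build (but do not send) an authenticated JSON POST request.
request = Request(
    API_URL + "?version=2017-09-21",     # assumed version date
    data=payload.encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {auth}",
    },
    method="POST",
)
print(request.get_method(), request.full_url)
```

The point of an API is exactly this uniformity: any program in any language that can issue an HTTP request like the one above can use Watson's analysis without reimplementing it.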
None of this is theoretical; these are all products you can get now. IBM mainframes have been
“re-purposed.” IBM even went back to its roots of solving business computing problems, and has
AI products called “Watson Commerce” and “Watson Financial Services.” (During tax season
earlier this year, I saw a lot of advertising for H&R Block that touted their Watson technology. I
guess that means they are using AI to help minimize the taxes their customers have to pay.)
Even my field has a Watson product – “Watson Education.” It’s a powerful tool to discover how
people learn.
I have read a lot about “Watson Health,” an AI solution for the healthcare field. IBM calls this
“Cognitive Healthcare Solutions,” and describes it as follows:
Our purpose is to empower leaders, advocates and influencers in health through support
that helps them achieve remarkable outcomes, accelerate discovery, make essential
connections and gain confidence on their path to solving the world’s biggest health
challenges.
Whether advancing toward a big-picture vision or delivering meaningful experiences to a
single individual, our mission is to improve lives and enable hope. We arm health heroes
with the technology and expertise they need to power thriving organizations, support
vibrant communities and solve health challenges for people everywhere.
The latest Watson product I’ve seen advertised is the “Watson Tone Analyzer,” and it is said to
“understand emotions and communication style in text” (quoted from the IBM Watson website).
These claims include the following descriptions:
Conduct social listening
Analyze emotions and tones in what people write online, like tweets or reviews.
Predict whether they are happy, sad, confident, and more.
Enhance customer service
Monitor customer service and support conversations so you can respond to your
customers appropriately and at scale. See if customers are satisfied or frustrated, and
if agents are polite and sympathetic.
Integrate with chatbots
Enable your chatbot to detect customer tones so you can build dialog strategies to
adjust the conversation accordingly.
I tried the online demo, and it really works. I’ve included the URL to the demo page (see below),
where you can enter your own text for the “Watson Tone Analyzer” to dissect. Try it for yourself
for an interesting and eye-opening experience.
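For a sense of the task (though not of Watson's method – Watson uses trained machine-learning models, not word lists), here is a toy keyword-based tone analyzer. It only shows the shape of the problem: text in, tone label out.

```python
# A toy, keyword-based stand-in for tone analysis. Real systems learn
# these associations from data instead of using hand-made word lists.

TONE_WORDS = {
    "joy": {"great", "love", "happy", "wonderful"},
    "anger": {"terrible", "hate", "awful", "furious"},
    "confident": {"certainly", "definitely", "sure"},
}

def analyze_tone(text):
    """Return the tone whose vocabulary overlaps the text most."""
    words = set(text.lower().replace("!", "").replace(".", "").split())
    scores = {tone: len(words & vocab) for tone, vocab in TONE_WORDS.items()}
    return max(scores, key=scores.get) if any(scores.values()) else "neutral"

print(analyze_tone("I love this wonderful class!"))  # joy
print(analyze_tone("The weather is cloudy."))        # neutral
```

A word list like this breaks on sarcasm, negation ("not happy"), and context, which is exactly why the machine-learning approach matters.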
This is starting to sound like an IBM commercial, so I’ll stop.
Now it’s your turn. What do you think of AI? Do you know about any AI products that you use
that haven’t been mentioned here? Are you aware of other platforms besides Watson? Are you
aware of other Watson products that I haven’t mentioned here? Where do you think this will go
in the future? Could this be a solution to cybersecurity attacks? Could it make you a better
investor? Could it improve the way farmers grow food?
This may take some research on your part, and I’m really looking forward to reading your
contributions to this discussion.
Sources of information include the IBM Watson web site, lots of my own notes on the
development of artificial intelligence, and information from the U.S. Census Bureau.
Here is the link to the Watson Tone Analyzer:
https://www.ibm.com/watson/services/tone-analyzer/