Watson is a computer that demonstrates its capabilities using natural language: it can understand and answer questions quickly by searching its large-scale database and picking out the key words that point to the right answer. Watson can do more than answer questions in a game show; it can be useful in many types of business and can also be used for scientific research and discovery. With its growing platform, developers have been enhancing its capabilities so that others can incorporate its technology. Watson used roughly 3,000 processor cores and terabytes of RAM in order to run multiple algorithms over its information in parallel. The system contains 2,880 POWER7 cores, each running at 3.55 GHz with
four threads per core. It is said that if Watson ran on a single core, answering a question could take at least six hours; by spreading the work across thousands of cores in a massively parallel architecture, it cut that time down to two to six seconds. Watson accomplishes this by trying to find keywords in the question that match content in its database. Because it can weigh each word and phrase, it can handle statistical paraphrasing, that is, recognizing the same idea conveyed with different words, as part of its information processing.

Since the computer had over 200 million pages of information in its database to process, it needed a great deal of computing power to fully utilize its potential. With that many processor cores, Watson can access up to 200 million pages of structured and unstructured content stored on its drives, including all the text of Wikipedia and other internet sources. It is said that Watson operates at about 80 teraflops, roughly 80 trillion operations per second. Using a large number of processors makes a big impact, especially in a server environment, where a program has to be written to use the multiple processors installed in each server; the benefit depends on how well the parallel applications exploit the available processing power. Watson's developers had to make sure that every source of data, whether dictionaries or encyclopedias, was used to the full capability of the hardware. Because Watson organizes its content in parallel, more processors mean faster data processing, as long as the workload is optimized correctly (Rajesh, 2011).

When Watson boots up, it is said to load around 10.8 terabytes of data into its 15 TB of RAM; surprisingly, it uses only about 1 TB while processing information and answering questions. So does adding more RAM matter? It does, since more RAM makes any computer faster. RAM helps Watson a great deal in holding its roughly 200 million pages of data on just about any subject it can process. Although Watson also has hard drives, it does not serve data from them during play because they are too slow to access; it keeps its data in RAM so it can reach it faster. Watson's memory is loaded with reference materials, books, encyclopedias, and any other kind of material that contains useful information, and it can access that data very quickly. The secret of Watson's performance resides in its hardware and memory, which let it simultaneously evaluate thousands of hypotheses about any possible answer you throw at it (Earley, 2011). It is said that the full text of Wikipedia's articles is about 6.5 GB compressed and 30 GB uncompressed, only about 1/500th of Watson's RAM; in addition, its memory could hold the Library of Congress, with all its books and media. Therefore, doubling the RAM would be more than enough for storing all the data needed. It is still possible to expand the memory if the developers intend to enlarge its library of stored data, but doing so would cost more because of the upgraded components needed to increase RAM capacity. Watson is a cluster of 90 IBM Power 750 servers connected over a 10-gigabit Ethernet network.
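To make the parallel-processing idea above concrete, here is a minimal sketch, assuming a toy keyword-overlap scorer and a handful of made-up passages, of how candidate passages could be scored across worker processes in Python. It only illustrates why more cores shorten the two-to-six-second response time; it is not Watson's actual algorithm.

```python
from multiprocessing import Pool

# Toy corpus standing in for Watson's indexed passages (illustrative only).
PASSAGES = [
    "Toronto is the largest city in Canada.",
    "Chicago's O'Hare airport is named for a WWII hero.",
    "IBM built Watson to answer natural-language questions.",
]

def keyword_overlap_score(args):
    """Score one passage by counting the keywords it shares with the question."""
    question, passage = args
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return passage, len(q_words & p_words)

def rank_passages(question, passages, workers=4):
    """Score all passages in parallel worker processes and return them best-first."""
    with Pool(processes=workers) as pool:
        scored = pool.map(keyword_overlap_score,
                          [(question, p) for p in passages])
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for passage, score in rank_passages(
            "Which company built Watson to answer questions?", PASSAGES):
        print(score, passage)
```

Because each passage is scored independently, doubling the number of workers roughly halves the wall-clock time, which is the same reason Watson's large parallel architecture matters.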
What the servers do is virtualize their workload using a kernel-based virtual machine, creating a total processing capacity of 80 teraflops, which translates to roughly 80 trillion operations per second. Watson is said to use a cluster of Power 750s with the equivalent of two-terabyte I/O nodes, totaling a 4 TB data repository. This might be too large for a regular company to possess, but competing in a game show requires a large amount of knowledge in the database. Imagine that a 20-volume encyclopedia set amounts to roughly 4 TB of data; Watson therefore needed at least 4 TB or more of stored data in order to process information and answer questions correctly (Rennie, 2011).

Jeopardy is a popular game show in which human contestants are pitted against each other to answer various questions across different categories of topics, earning points for correct answers and penalties for incorrect ones. If a computer is put up against human players, it must understand complex natural language, produce accurate answers, and do so within a time limit, which is another great challenge. This is what Watson was built to do: to compete with human contestants it had to answer at least 70 percent of the questions asked, in about three seconds or less. Winning at Jeopardy requires precise answers given with confidence; a human contestant has to trust his or her own final answer. Watson, by contrast, first analyzes the question and then compiles every piece of data it can gather from its resources. For each candidate answer, pieces of supporting information are generated, and Watson combines every piece of evidence it can produce until it finds enough to justify its answer. Generating successful strategies for resolving answers, however, is quite challenging.

Several challenges arise from DeepQA and its characteristics. First, some questions are so similar to one another that they are hard for Watson to tell apart. Second, Watson has to differentiate question classes, for example word puzzles and language translations, which is difficult because it had little training on those classes. Third, some features carry more or less importance at different stages of ranking, which is hard for Watson to account for. Fourth, the features are exceptionally diverse, since they come from a variety of distinct algorithms that were developed independently. Finally, DeepQA has a large class imbalance: it generates a huge number of possible answers, and only a tiny fraction of them are correct (Rennie, 2011). Addressing these challenges is vital for Watson's development; to compete with human beings it has to be evaluated carefully in its settings and draw on extensive research across different methodologies to reach a human level of performance. It is said that the Watson framework has been used in more than 7,000 experiments so that DeepQA's learning techniques could be refined and adopted for the game (Mearian, 2011).
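To illustrate the evidence-combination step described above, here is a minimal sketch, assuming invented candidates, evidence sources, and hand-set weights, of merging per-source scores into a single confidence and ranking the candidates. DeepQA's real ranking uses trained models over hundreds of features, so this is only a stand-in for the idea.

```python
# Combine per-evidence-source scores into one confidence per candidate answer
# and rank the candidates. Candidates, sources, and weights are made up for
# illustration; DeepQA learns its weights from training data.
EVIDENCE = {
    "Toronto": {"keyword_match": 0.40, "passage_support": 0.30, "type_check": 0.10},
    "Chicago": {"keyword_match": 0.55, "passage_support": 0.70, "type_check": 0.90},
}
WEIGHTS = {"keyword_match": 0.3, "passage_support": 0.4, "type_check": 0.3}

def confidence(scores):
    """Weighted average of evidence scores (a stand-in for a learned ranking model)."""
    return sum(WEIGHTS[name] * value for name, value in scores.items())

ranked = sorted(EVIDENCE.items(), key=lambda item: confidence(item[1]), reverse=True)
for candidate, scores in ranked:
    print(f"{candidate}: confidence {confidence(scores):.2f}")
```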
It is said that the most powerful and fastest supercomputers in the world run Linux-based operating systems. Watson runs SUSE Linux Enterprise Server 11 on its cluster of Power 750 servers, spread across ten racks, to power the computer for the Jeopardy game show. Watson's POWER7 platform is a high-performance, high-capacity platform that works best with Linux Enterprise Server, the fastest operating system for this type of hardware. Linux has been around since the early 1990s, and ongoing work on the Linux kernel and kernel-related technologies provides unmatched performance on systems with multicore processors, multipathing, and heavy I/O, making it the best choice for high-performance systems like Watson (Mearian, 2011). Could a different operating system have been used for Watson? The answer is no, because no other operating system is faster for this purpose, especially given how the system and its components are configured to run at maximum performance on Linux. In the Top 500 list of the world's fastest and most powerful computers, around 459 run Linux operating systems (WALTHAM, 2011).

When it comes to security, Hadoop is challenging because not all interactions follow the usual client-server pattern in which the server authenticates the client and authorizes each operation by checking an ACL. In addition, a user's credentials may be passed along to secondary services running under the Hadoop system, which may or may not be trusted, so an unauthorized user could potentially access an HDFS file via the RPC or HTTP protocols (Johnson, 2011). Apache Hadoop can bring in new nodes when needed, without changing the data formats. Watson's scalability is also helped by its natural language processing system: it can continually interpret questions, apply its logic to the decision-making process, and adapt based on lessons learned in its database (Coss, 2012). Another strength of Apache Hadoop is that if a node is lost, the system still functions; it redirects the work to another copy of the data and continues processing without that node. That is why Apache Hadoop is fault tolerant (Coss, 2012).

Because Watson is open to developers to understand and build on, its availability is high, especially through Watson's cloud API, and its learning capabilities can be applied to healthcare, finance, and other kinds of business where Watson can be further developed. Apache UIMA, found in Watson's framework, provides a standards-based infrastructure and components that facilitate the analysis and annotation of unstructured content such as video, text, and audio. So in order to gather answers, understand the question, and compute its confidence in an answer, Watson uses Apache UIMA for real-time content analysis and for its natural language processing. Watson takes advantage of UIMA to exploit modern parallel architectures, managing the workflow of data and the communication between processes (Private Cloud Team, 2013). Watson has many components, spanning information retrieval, machine learning, natural language processing, the semantic web, and cloud computing, making it a highly industrial-grade adaptive learning system that can be used for different business applications such as healthcare and finance.
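Apache UIMA itself is a Java framework, so the short Python snippet below is only a conceptual sketch of the pipeline idea it embodies: a document flows through a chain of annotators, each attaching annotations that later stages can use. The annotator names and the dictionary-based annotation format are assumptions made up for this example, not UIMA's actual API.

```python
# Conceptual stand-in for a UIMA-style pipeline: each annotator reads the text
# plus earlier annotations and appends its own. Names are illustrative only.
def tokenizer(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def question_classifier(doc):
    doc["is_question"] = doc["text"].rstrip().endswith("?")
    return doc

def keyword_annotator(doc):
    stopwords = {"the", "a", "is", "of", "who", "what"}
    doc["keywords"] = [t.strip("?.,").lower() for t in doc["tokens"]
                       if t.strip("?.,").lower() not in stopwords]
    return doc

PIPELINE = [tokenizer, question_classifier, keyword_annotator]

def analyze(text):
    doc = {"text": text}
    for annotator in PIPELINE:   # run the annotators in order, like a UIMA flow
        doc = annotator(doc)
    return doc

print(analyze("Who founded IBM?"))
```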
Using Apache software and its powerful hardware, Watson can be enhanced even further through open-source developers, making it more innovative and taking fuller advantage of its components (Murdock, Fan, Lally & Shima, 2012). Watson can fit into almost any IT ecosystem because of its cloud services and the release of a development toolkit. The toolkit provides access to Watson's application programming interface, which can be built upon further because of its open nature. Since Watson is a learning machine, businesses can apply its technology to their own tasks and optimize it for their needs; this is possible if developers study Watson and implement it in the right infrastructure for their IT ecosystem. According to Tony Pearson, a senior consultant at IBM, Watson used only around 1 terabyte of data to answer questions in real time on Jeopardy, with a processing power of 80 teraflops. A human brain, by comparison, is roughly estimated to hold about 1.25 TB of information and to deliver around 100 teraflops of performance. By that comparison, Watson looks roughly 80 percent of the way to human-level processing power, and the fact that it can answer within three seconds makes its reasoning feel even more human-like, even though it uses only a small percentage of its large memory capacity to arrive at answers, right or wrong (Murdock, Fan, Lally & Shima, 2012). Every computer today functions as a work companion, and each one is getting more sophisticated as technology develops.
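As a hedged sketch of what calling such a question-answering service through a developer toolkit might look like, the snippet below posts a question to a hypothetical REST endpoint and prints the ranked answers with their confidences. The URL, API key, and JSON fields are invented for illustration and are not the actual Watson API.

```python
import json
import urllib.request

# Hypothetical endpoint and response shape -- placeholders, not the real Watson API.
ENDPOINT = "https://example.com/watson/v1/question"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def ask(question):
    """POST a natural-language question and return the service's ranked answers."""
    payload = json.dumps({"question": question}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # Assumed shape: {"answers": [{"text": ..., "confidence": ...}, ...]}
    return body["answers"]

if __name__ == "__main__":
    for answer in ask("What treatment options match this patient's trial criteria?"):
        print(f'{answer["confidence"]:.2f}  {answer["text"]}')
```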
IBM, however, has done the unimaginable: it created a cognitive computer that can compete against humans, an amazing feat and a great accomplishment in the computing industry. It does not stop there; IBM Watson has been introduced in medical facilities, where it performs data analysis and supports physician training, a breakthrough in medical science, since it can help doctors match patients with clinical trials and can observe and improve treatment plans (Yuan, 2011). Watson is just the first implementation of a smart computer that can understand and learn new data, and cognitive systems are being perfected and fine-tuned to better handle immense amounts of information. Watson can learn from raw information; because it can be trained, it can be introduced to complex health care challenges that are hard to treat but whose outcomes are known, and by learning to diagnose the illness it can recommend the best treatment. Possibly one day such a system will be able to learn on its own without any human programming. In addition to processing power, cognitive systems will also be able to communicate with people and analyze behaviors, languages, and mannerisms, which will definitely shape the future.
In the essay "Toward An Intelligence Beyond Man's" by Robert Jastrow, the author presented his view on computer intelligence and predicted that computer intelligence will be a new kind of evolution. Jastrow stated that computers nowadays are as intelligent as the human brain; they can communicate with humans, learn from experience, and raise logical questions. The more complex the computer, the better it imitates humans. He predicted that computers will be as important as life itself in future years. Jastrow then used the example of Arthur Samuel and the IBM computer to show that computers can learn faster through motivation, even though they do not have emotions and drives as humans do. He also pointed out that computers and the human brain share some characteristics: both freeze up when handling too many tasks, and both excel at fast decisions in a crisis. Jastrow said that even though humans still hold the controlling power, computers learn much faster than human intelligence does. Then, in the ultimate situation, computers and humans will become partners, completely dependent on each other to survive. However, Jastrow thought this partnership will not last long; computers will become more and more clever, while the human evolution of intelligence is almost finished. He suggested that computers will be the new kind of intelligence that surpasses humans, a new evolution of life. He said history has shown that human evolution took a million years, which is little time compared to the billion years of evolution from worm to human. Given the incredibly fast rate of technological improvement, Jastrow thought computers will evolve in a much shorter period of time.
John Broadus Watson was a famous American psychologist who lived between 1878 and 1958. He was born in Greenville, South Carolina, to Pickens and Emma Watson and was the fourth of six children. The family was not well off financially, and John did not have an easy childhood. Amid the poverty that engulfed the family, John's father became an alcoholic who cared little for his family. Emma, John's mother, was a devoted religious woman who struggled to take care of her children with little support from her husband. In 1891, John's father left the family and disappeared after engaging in extramarital affairs with other women. The infidelity strained his marriage with Emma and his relationship with his children. After his father's disappearance, John became unruly and confused due to the lack of full parental care. He became defiant at school and did not want to listen to advice from his teachers. He bullied fellow students and was involved in other antisocial behaviors that were quite unacceptable in the school environment; furthermore, he became violent and even rebelled against his mother (Buckley, 1989).
Named after IBM's first CEO, Thomas J. Watson, Watson is a supercomputer able to answer questions posed in natural language. It first became famous in early 2011 for beating two of the best Jeopardy players in a three-day match. It beat Ken Jennings, who had won 74 games in a row, and Brad Rutter, who had earned a total of $3.25 million. At the time, Watson was about the size of a room; it ran hot and was very noisy because of its cooling systems, and it was represented on stage by a simple avatar. Today, Watson has changed a lot: it is more business friendly and has lost a lot of weight, going from a Jeopardy-winning computer to a successfully commercialized supercomputer. In the following chapters I will discuss its origins, its current situation, and a little bit about its future.
One of the hottest topics that modern science has focused on for a long time is the field of artificial intelligence, the study of intelligence in machines or, according to Minsky, "the science of making machines do things that would require intelligence if done by men" (qtd. in Copeland 1). Artificial intelligence has many applications and is used in many areas. "We often don't notice it but AI is all around us. It is present in computer games, in the cruise control in our cars and the servers that route our email" (BBC 1). Different goals have been set for the science of artificial intelligence, but according to Whitby the most frequently cited idea about the goal of AI is provided by the Turing Test. This test is also called the imitation game, since it is basically a game in which a computer imitates a conversing human. In analyzing the Turing Test, I will focus on its features, its historical background, and an evaluation of its validity and importance.
The official foundations for "artificial intelligence" were set forth by A. M. Turing in his 1950 paper "Computing Machinery and Intelligence," in which he framed the field's central question and made predictions about it. He anticipated that computers would eventually be able to formulate and prove complex mathematical theorems, write music and poetry, become world chess champion, and pass his test of artificial intelligence. In his test, a computer is required to carry on a compelling conversation with humans, fooling them into believing they are speaking with another human. All of his predictions require a computer to think and reason in the same manner as a human. Despite fifty years of effort, only the chess championship has come true. By refocusing artificial intelligence research on a more humanlike, cognitive model, the field will create machines that are truly intelligent, capable of meeting Turing's goals. Currently, the only "intelligent" programs and computers are not really intelligent at all; rather, they are clever applications of different algorithms lacking expandability and versatility. The human intellect has been used in only limited ways in the artificial intelligence field, yet it is the ideal model upon which to base research. Concentrating research on a more cognitive model will allow the artificial intelligence (AI) field to create more intelligent entities and ultimately, once appropriate hardware exists, a true AI.
In Turing's test, an isolated interrogator attempts to distinguish between a hidden human and a hidden computer based on their replies to a series of questions asked during the interrogation. Questions are generally posed through a keyboard and screen, so communication happens only over text-only channels. For example, a sample question might be along the lines of "What did you think about the weather this morning?", and adequate responses could include "I do tend to like a nice foggy morning, as it adds a certain mystery," or "Not the best, expecting pirates to come out of the fog," or even "The weather is not nice at the moment, unless you like fog." After a series of tests, if the interrogator makes the correct identification no more than 70 percent of the time, the machine is deemed intelligent. Simply put, the interrogator's willingness to declare the machine intelligent correlates directly with the interrogator's inability to distinguish between the two subjects.
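A small, hedged sketch of how that pass/fail criterion could be tallied is shown below; the trial results are made up, and the 70 percent threshold follows the criterion just described. This only scores interrogator judgments; it does not implement the conversation itself.

```python
# Tally interrogator judgments from a series of imitation-game trials.
# Each entry records whether the interrogator identified the machine correctly.
trials = [True, False, True, False, False, True, False, True, False, False]

correct_rate = sum(trials) / len(trials)

# Under the criterion above, the machine "passes" if the interrogator is
# right no more than 70 percent of the time.
if correct_rate <= 0.70:
    print(f"Machine passes: interrogator correct only {correct_rate:.0%} of the time")
else:
    print(f"Machine fails: interrogator correct {correct_rate:.0%} of the time")
```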
- Why Watson is so important to the way the story works as an example
When introducing Watson to the new client, Count Von Kramm, Holmes acknowledges Watson's usefulness, stating: "This is my friend and colleague, Dr. Watson, who is occasionally good enough to help me in my cases" (Conan Doyle).
As our research into science and technology ever increases, it seems inevitable that in the near future artificially intelligent machines will exist and become part of our everyday life, just as we see with modern computers today.
As my first point I would like to comment on the use of Watson as a
Most of the day the human mind is taking in information, analyzing it, storing it accordingly, and recalling past knowledge to solve problems logically. This is similar to the life of any computer. Humans gain information through the senses. Computers gain similar information through a video camera, a microphone, a touch pad or screen, and it is even possible for computers to analyze scents and chemicals. Humans also gain information through books, other people, and even computers, all of which computers can access through software, interfacing, and modems. For the past year, speech recognition software products have become mainstream (Lyons, 176). All of the ways that humans gain information are mimicked by computers. Humans then proceed to analyze and store the information accordingly; this is a computer's main function in today's society. Humans then take all of this information and solve problems logically. This is where things get complex. There are expert systems that can solve complex problems that humans train their whole lives for. In 1997, IBM's Deep Blue defeated the world champion in a game of chess (Karlgaard, 43). Expert systems design buildings, configure airplanes, and diagnose breathing problems. NASA's Deep Space One probe left Earth with software that lets the probe diagnose problems and fix itself (Lyons).
In order to see how artificial intelligence plays a role in today's society, I believe it is important to dispel any misconceptions about what artificial intelligence is. Artificial intelligence has been defined many different ways, but the commonality among them is that artificial intelligence is the theory and development of computer systems able to perform tasks that would normally require human intelligence, such as decision making, visual recognition, or speech recognition. However, human intelligence is a very ambiguous term. I believe there are three main attributes an artificial intelligence system has that make it representative of human intelligence (Source 1). The first is problem solving, the ability to look ahead several steps in the decision-making process and choose the best solution (Source 1). The second is the representation of knowledge (Source 1). While knowledge is usually gained through experience or education, intelligent agents could very well have a different form of knowledge. Access to the internet, the la...
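In classical AI, the "look ahead several steps and choose the best solution" attribute described above is often realized with minimax-style search. The toy game tree below is invented for illustration; it only sketches the idea of evaluating future positions before committing to a move.

```python
# Minimal minimax sketch over a hand-made game tree (illustrative only).
# A node is either a numeric payoff (leaf) or a list of child nodes.
GAME_TREE = [
    [3, 5],   # outcomes reachable after our first option
    [2, 9],   # outcomes reachable after our second option
    [0, 7],   # outcomes reachable after our third option
]

def minimax(node, maximizing):
    """Look ahead through the tree, alternating our turn and the opponent's."""
    if isinstance(node, (int, float)):      # leaf: payoff for us
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

best_move = max(range(len(GAME_TREE)),
                key=lambda i: minimax(GAME_TREE[i], maximizing=False))
print("Best opening move index:", best_move)   # picks the branch with the best worst case
```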
Humans can expand their knowledge to adapt to a changing environment. To do that, they must "learn". Learning can be simply defined as the acquisition of knowledge or skills through study, experience, or being taught. Although learning is an easy task for most people, acquiring new knowledge or skills from data is hard and complicated for machines. Moreover, the intelligence level of a machine is directly related to its learning capability. The study of machine learning tries to tackle this complicated task. In other words, machine learning is the branch of artificial intelligence that tries to answer the question: how can we make computers learn?
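As a minimal, hedged illustration of learning from data, the sketch below fits a one-variable linear model by gradient descent on made-up points. The data and learning rate are assumptions for the example; real machine-learning systems are far more elaborate.

```python
# Fit y ~ w * x to toy data by gradient descent -- a tiny example of "learning"
# a parameter from examples rather than being explicitly programmed with it.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # made-up (x, y) pairs

w = 0.0                  # initial guess for the slope
learning_rate = 0.01

for epoch in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(f"learned slope: {w:.2f}")   # should approach roughly 2.0
```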
Shyam Sankar, named by CNN as one of the world's top ten leading speakers, says the key to AI's evolution is the improvement of human-computer symbiosis. Sankar believes humans should be relied upon more heavily in AI and technological development. Sankar's theory is just one of many that will shape the future innovations of AI. The next phase and future of AI is that scientists now want to combine human and machine strengths to create a super-intelligent thing. From what history has taught us, the unimaginable is possible with determination. Just over fifty years ago, AI was implemented through robots completing a series of demands. Then it progressed to the point that AI can be integrated into society, seen through interactive interfaces like Google Maps or the Siri app. Today, humans have taught machines to effectively take on human jobs and tasks, creating a more efficient world. The future of AI is up to the creativity and innovation of current society's scientists, leaders, thinkers, professors, students and
In the past few decades we have seen computers become more and more advanced, challenging the abilities of the human brain. We have seen computers perform complex assignments such as launching a rocket or analyzing data from outer space. But the human brain is responsible for thought, feelings, creativity, and the other qualities that make us human, so the brain has to be more complex and more complete than any computer. Besides, if the brain created the computer, the computer cannot be better than the brain. There are many differences between the human brain and the computer, for example, the capacity to learn new things. Even the most advanced computer can never learn the way a human does. While we might be able to install new information onto a computer, it can never learn new material by itself. Computers are also limited in what they "learn" by the memory or hard disk space they have left, unlike the human brain, which is constantly learning every day. Computers can neither make judgments about what they are "learning" nor disagree with the new material; they must accept into their memory whatever is programmed into them. Besides, everything found in a computer is based on what the human brain has acquired through experience.