My decision to pursue a PhD derives from my passion for science and engineering, paired with my abilities in machine learning and applied statistics. I consider myself fortunate to be part of the Department of Computer Science at the University of Florida for my master's studies. More importantly, I am glad to have two excellent professors in this field as advisors, Dr. Pader and Dr. Jilson, who are guiding me throughout my graduate studies. They helped me choose and pursue the courses and topics that interested me. During my first semester, I took the course Mathematical Methods for Intelligent Systems, which gave me a strong foundation in applied mathematics for intelligent systems. Similarly, the research course Computational Neuroscience gave me insight into applications of statistics, neural networks, and linear dynamical systems from a biological perspective. My keen interest in applied statistics inspired me to take courses such as Machine Learning and Neural Networks in the subsequent semester. In this context, I would like to give a brief outline of my master's research projects, which I found very exciting. The first project was to design a handwriting recognition system capable of classifying digits using a multilayer perceptron architecture. Another project was a comparative study of machine learning methodologies such as Bayesian Linear Regression (BLR), Support Vector Machines (SVMs), and Relevance Vector Machines (RVMs), using handwritten character data from the postal system. In the first phase, we analyzed the capability of mapping features computed on the input character images to membership values in different classes using BLR. In the second phase, the c... ... similar queries that belong to a particular domain. Representing user queries in a machine-readable format will help us build probabilistic models for queries.
Moreover, combining queries to solve complex queries will be the next milestone in question answering systems.

Works Cited
1. D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. JMLR, 2003.
2. C. Bizer, J. Lehmann, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak, and S. Hellmann. DBpedia – A Crystallization Point for the Web of Data. Web Semantics: Science, Services and Agents on the WWW, September 2009.
3. F. M. Suchanek. Automated Construction and Growth of a Large Ontology. PhD thesis, Saarland University, 2009.
4. T. Hofmann. Probabilistic Latent Semantic Indexing. SIGIR '99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, 1999.
The World Wide Web is full of information that can improve and enrich our lives. To benefit from all of this information, it must be relevant to our everyday lives, and we must also be able to read and interpret it so that it is useful to us.
Waltz, David L. “Artificial Intelligence: Realizing the Ultimate Promises of Computing.” NEC Research Institute and the Computing Research Association (1996). Accessed 2 November 1999.
This paper deals with the task performance of PLSA (Probabilistic Latent Semantic Analysis) and LDA (Latent Dirichlet Allocation). There has been a lot of work reporting promising performance of topic models, but none of it has systematically investigated their task performance. As a result, some critical questions that may affect the performance of all applications of topic models remain mostly unanswered, particularly ...
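To make the comparison concrete, here is a minimal sketch of fitting one of the two models discussed above, LDA, using scikit-learn. The toy corpus, the choice of two topics, and the random seed are illustrative assumptions, not part of the paper's experimental setup.

```python
# Hedged sketch: fitting a small LDA topic model with scikit-learn.
# The toy corpus and the two-topic setting are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "nuclear power plant energy reactor",
    "waste disposal landfill recycling",
    "reactor energy grid power supply",
    "recycling landfill waste management",
]

# Bag-of-words counts are the input representation for LDA.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

# Two latent topics; a fixed seed keeps the fit repeatable.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row is a per-document topic distribution that sums to 1.
print(doc_topics.shape)  # (4, 2)
```

PLSA could be compared against this by swapping in a PLSA implementation over the same `counts` matrix, which is the kind of controlled task comparison the paper calls for.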
Goertzel, B., & Pennachin, C. (2007). Artificial General Intelligence. Heidelberg/New York: Springer Berlin. Retrieved July 31, 2010, from Google Books database.
Summary: This book excerpt describes the fundamentals, history, and changes associated with Artificial Intelligence from the 1950s onward. The book provides a basic explanation that Artificial Intelligence involves simulating human behavior or performance by encoding thought processes and reasoning into free-standing electronic components that do mechanical work.
In today’s fast-paced technological world, search engines have become a vastly popular part of people’s daily routines. A search engine is an information retrieval system that allows someone to search the...
The first IR systems were built using indexes and concordances. When the first large-scale information systems were developed, computers could search indexes much better than humans, which required more detailed indexing. However, indexing could also become too expensive and time-consuming. Therefore, the idea of free-text searching was introduced, which eliminates the need for manual indexing. Objectors pointed out that the words selected might not be the correct label for a given subject; one solution was controlled vocabularies. The ideas of recall and precision also emerged as methods for evaluating information retrieval systems, and they showed that free-text indexing was as effective as manual indexing and much cheaper. New information retrieval techniques such as relevance feedback and multilingual retrieval were invented. The 1960s also saw the start of research into natural-language question answering, and researchers began building systems ...
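The two evaluation measures mentioned above have simple set-based definitions. The following sketch computes both; the retrieved and relevant document-ID sets are made up for illustration.

```python
# Minimal sketch of the two classic IR evaluation measures.
# The retrieved and relevant document-ID sets are hypothetical.
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d7"}

print(precision(retrieved, relevant))  # 0.5  (2 of 4 retrieved are relevant)
print(recall(retrieved, relevant))     # 0.666...  (2 of 3 relevant were found)
```

Comparing manual indexing against free-text indexing on these two numbers is exactly the kind of experiment that showed the cheaper approach was competitive.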
It is that passion which has grown over the years into a single-minded pursuit of Computer Science as a serious academic career, and led me to pursue a B.Tech in Information Technology at Delhi Technological University (DTU; formerly Delhi College of Engineering), one of the premier institutions in the country. I aspire to attain a doctorate in the areas of Artificial Intelligence (AI) and Natural Language Processing (NLP). I believe Berkeley’s MS in Computer Science will help me expand both the breadth and depth of my knowledge in these areas and allow me to identify a specialization for a subsequent doctoral degree.
Artificial Intelligence may come in many forms, but for the purpose of this paper, I have adopted the definition from The Columbia Encyclopedia (2008), which states that Artificial Intelligence (AI) is a discipline of computer science that aims to focus on the creation of machines that can mimic intelligent human behavior. It is the attempt to give computers human reasoning and thought processes. Humans have always had an interest in the design, creation, and application of smart machines. Consequently, with the discovery and introduction of computer systems, and with the decades of programming research that followed, humans have realized that many of their ideas may be made possible by the development of these systems. The most intriguing issue with this field of study is that as time passes, technology changes, and so does the definition of Artificial Intelligence. In a looser sense, it is the application of artificial (non-naturally occurring) systems that rely on different levels of knowledge to achieve set goals. Artificial Intelligence began with the theoretical work of mathematician Alan T...
As I watched my mother rush to get the pot to boil some water with tears in
IR systems receiving such queries need to fill in the gaps of the user’s underspecified query. For example, a user typing “nuclear waste dumping” into a search engine such as Google is probably looking for a number of documents describing that topic. Some of the results might not match what the user needs, because the search engine only finds documents related to the three words themselves. The content being searched is typically unstruc...
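The word-matching behavior described above can be illustrated with a naive term-overlap ranker: documents sharing more words with the query rank higher, even when they miss the user's actual intent. The corpus below is invented for the sketch.

```python
# Sketch: a naive term-overlap ranker, illustrating retrieval that
# matches the query's words rather than the user's intent.
# The document collection is hypothetical.
def score(query, doc):
    """Number of query terms that appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d)

docs = [
    "report on nuclear waste dumping at sea",   # matches all three terms
    "nuclear power station safety review",      # matches one term
    "city waste collection schedule",           # matches one term
]
query = "nuclear waste dumping"

ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
print(ranked[0])  # the document matching all three words ranks first
```

Real engines refine this with term weighting and query expansion, but the gap-filling problem remains: the ranker sees only the three words, not the underlying information need.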
Different ontologies on the same domain will differ in their level of detail. This poses the extra challenge of selecting the ontology with the appropriate level of detail. Ontology selection is the process of identifying one or more ontologies that satisfy certain criteria; these criteria can relate to the topic coverage of the ontology. The actual process of inspecting whether an ontology satisfies certain criteria is fundamentally an ontology evaluation task. In this approach, ontology concepts are compared to a set of query terms that represent the domain. It first tries to find ontologies that contain the given keyword. If no matches are found, it queries for synonyms of the term and then for its hypernyms. The ontology se...
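The keyword-then-synonym-then-hypernym fallback described above can be sketched as follows. The ontology index and the lexical relations here are hypothetical stand-ins; a real system might draw them from WordNet or a triple store.

```python
# Sketch of the keyword -> synonym -> hypernym fallback for ontology
# selection. All three lookup tables are invented for illustration.
ONTOLOGY_INDEX = {
    "automobile": ["VehicleOntology"],
    "vehicle": ["TransportOntology", "VehicleOntology"],
}
SYNONYMS = {"car": ["automobile", "auto"]}
HYPERNYMS = {"car": ["vehicle"]}

def select_ontologies(term):
    """Return ontologies matching the term, widening the search as needed."""
    # 1. Direct keyword match against ontology concepts.
    if term in ONTOLOGY_INDEX:
        return ONTOLOGY_INDEX[term]
    # 2. Fall back to synonyms of the term.
    for syn in SYNONYMS.get(term, []):
        if syn in ONTOLOGY_INDEX:
            return ONTOLOGY_INDEX[syn]
    # 3. Finally, fall back to hypernyms (broader terms).
    for hyp in HYPERNYMS.get(term, []):
        if hyp in ONTOLOGY_INDEX:
            return ONTOLOGY_INDEX[hyp]
    return []

print(select_ontologies("car"))  # ['VehicleOntology'], via the synonym step
```

Ordering the fallbacks this way prefers the most specific match, which mirrors the level-of-detail concern: a hypernym match signals a broader, possibly coarser ontology.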
Information retrieval (IR) is the activity of obtaining information resources relevant to an information need from a collection of information resources; an information retrieval system (IRS) automates this activity. Searches can be based on metadata or on full-text (content-based) indexing. Automated information retrieval systems are used to reduce what has been called “information overload”. Many universities and public libraries use information retrieval systems to provide access to books, journals, and other documents. Web search engines are the most visible information retrieval applications.
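Full-text retrieval of the kind described above is commonly built on an inverted index, which maps each term to the documents containing it. A minimal sketch, with an invented document collection:

```python
# Minimal inverted-index sketch of full-text (content-based) retrieval.
# The three-document collection is hypothetical.
from collections import defaultdict

docs = {
    1: "information retrieval reduces information overload",
    2: "libraries provide access to books and journals",
    3: "web search engines are information retrieval applications",
}

# Build the inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every query term (boolean AND)."""
    sets = [index[t] for t in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(sorted(search("information retrieval")))  # [1, 3]
```

Metadata-based search works the same way, except the index is built over catalog fields (author, title, subject) instead of the document body.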
Croft, W.B. (1995). What do people want from information retrieval?: The top 10 research issues for companies that use and sell IR systems. Retrieved October 26, 2011 from http://www.dlib.org/dlib/november95/11croft.html
T. Mitchell. Generative and Discriminative Classifiers: Naive Bayes and Logistic Regression. Draft version, 2005.