Artificial Intelligence. Neeta Deshpande. Technical Publications, Pune.
Computers can support the variety of ways learners construct their own understanding. Students who gather information from the Internet can be self-directed and independent. They can choose what sources to examine and what connections to pursue. Depending on the parameters set by teachers, the students may be in complete control of their topics and their explorations. Students can work through a computer-based activity at their own pace.
Rather than 25 individuals working together on one activity, technology allows independent completion of work. Computer software can mix text, pictures, sound, and motion to provide a variety of options for learners.
Multimedia software will not be the only classroom resource, but it can contribute richness and variety to student work. Students can build on their own understanding by using computers as resource tools, as work stations for individual learning, or as communication channels to share their ideas with other learners. Individual understanding and experiences must be shared and compared to curriculum content.
By uncovering students' individual understandings, teachers can determine the influence of students' prior knowledge and further their education through new experience. Computers can be used to assist active experiences (gathering data and resources, conversing with colleagues, struggling through a challenging puzzle or application), or they can assist in reflection.
For example, while an on-line conversation through e-mail is an active event, such discussions usually prompt reflection. They help us think about ideas and check our understanding. In another reflective application, teachers can enlist computers as authoring tools for students' journals, which are excellent vehicles for thoughtful examination of experience. Introducing technology into the learning environment can encourage cooperative learning and student collaboration.
If they are allowed to converse, most students like to talk about their computer work and share their strategies. Classroom activities that are structured so that computers encourage collaboration build on learners' desire to communicate and share their understanding. It takes planning and intervention to build successful cooperative groups with or without computers, but groups that use computers as teamwork tools have a better start toward collaborative work.
Beyond the classroom, computer networking allows students to communicate and collaborate with content experts and with fellow students around the globe.
Communication tools like e-mail, bulletin boards, and chat groups allow teachers to exchange lesson plans and teaching strategies and create a professional community.
The use of real-world tools, relevant experiences, and meaningful data injects a sense of purpose into classroom activity. Part of the mission of educational institutions is to produce workforce-ready graduates who can, among other things, manipulate and analyze raw data, critically evaluate information, and operate hardware and software.
This technological literacy imparts a very important set of vocational skills that will serve students well in the working world. Technology has allowed schools to provide greater assistance to traditionally underserved populations. Assistive technology such as voice recognition systems, dynamic Braille displays, speech synthesizers, and talking books provide learning and communication alternatives for those who have developmental or physical disabilities.
Research has also shown that computer-mediated communication can ease the social isolation that may be experienced by those with disabilities. Computers have proved successful in increasing academic motivation and lessening anxiety among low-ability students and learning-disabled students, many of whom simply learn in a manner different from that practiced in a traditional, non-technological classroom.

Some speech recognition (SR) systems use training, where an individual speaker reads sections of text into the SR system. These systems analyze the person's specific voice and use it to fine-tune the recognition of that person's speech, resulting in more accurate transcription. Systems that do not use training are called Speaker Independent systems; systems that use training are called Speaker Dependent systems.
Speech recognition applications include voice user interfaces such as voice dialing.

A semantic network is a medium of human expression, i.e., it represents propositions about the world; for this reason it is also called a propositional net. Semantic nets convey meaning. Semantic nets are two-dimensional representations of knowledge. Mathematically, a semantic net can be defined as a labelled directed graph.
Semantic nets consist of nodes, links (edges), and link labels. In a semantic network diagram, nodes appear as circles, ellipses, or rectangles to represent objects such as physical objects, concepts, or situations. Links appear as arrows to express the relationships between objects, and link labels specify particular relations. Relationships provide the basic structure for organizing knowledge. The objects and relations involved need not be so concrete. As nodes are associated with other nodes, semantic nets are also referred to as associative nets.

Figure 3: Diagram of Semantic Networks

Inheritance Reasoning: Unless there is specific evidence to the contrary, it is assumed that all members of a class (category) will inherit all the properties of their superclasses, so a semantic network allows us to perform inheritance reasoning. For example, Jill inherits the property of having two legs because she belongs to the category FemalePersons, which in turn belongs to the category Persons, which has a boxed Legs link with value 2.
Semantic nets allow multiple inheritance, so an object can belong to more than one category and a category can be a subset of more than one other category.

Inverse Links: A semantic network allows a common form of inference known as inverse links.
For example, we can have a HasSister link, which is the inverse of a SisterOf link. Inverse links make the job of inference algorithms much easier when answering queries such as who the sister of Jack is. On discovering that HasSister is the inverse of SisterOf, the inference algorithm can follow the HasSister link from Jack to Jill and answer the query.
Disadvantages of Semantic Nets: One drawback of semantic networks is that the links between objects can represent only binary relations. There is also no standard definition of link names. Advantages of Semantic Nets: Semantic nets have the ability to represent default values for categories. In the figure above, Jack has one leg although he is a person and all persons have two legs; so "persons have two legs" has only default status, which can be overridden by a specific value.
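The inheritance reasoning and default-override behaviour described above can be sketched in a few lines of Python. This is an illustrative toy (the class and method names are not from the text), using the Jack/Jill example: a property lookup walks up the isa chain, and a node's own value overrides an inherited default.

```python
# A minimal semantic-net sketch: nodes with isa links and "boxed" properties.
class Node:
    def __init__(self, name, isa=None, **props):
        self.name = name
        self.isa = isa          # link to the superclass node, if any
        self.props = props      # boxed properties such as legs=2

    def lookup(self, prop):
        # Inheritance reasoning: walk up the isa chain until the property
        # is found; a local value overrides inherited defaults.
        node = self
        while node is not None:
            if prop in node.props:
                return node.props[prop]
            node = node.isa
        return None

persons = Node("Persons", legs=2)
female_persons = Node("FemalePersons", isa=persons)
jill = Node("Jill", isa=female_persons)
jack = Node("Jack", isa=persons, legs=1)   # specific value overrides default

print(jill.lookup("legs"))  # inherited default: 2
print(jack.lookup("legs"))  # overridden: 1
```

Note how "persons have two legs" is only a default: Jack's own Legs value of 1 shadows it, exactly as in the figure.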
Although these were developed independently (both originating in the 1970s), and are different in important ways, they have sufficient similarities to be considered together. One influential proponent of frame-based systems is Marvin Minsky; a champion of script-based systems is Roger Schank (see Schank and Abelson). The key idea involved in both frames and scripts is that our knowledge of concepts, events, and situations is organized around expectations of key features of those situations.
Example: Consider a stereotypical situation, such as going to hear a lecture. One's knowledge of what might go on during such an event is based on assumptions. For instance, it can be assumed that the person who actually delivers the lecture is likely to be identical with the person advertised; that the lecturer's actual time of arrival is not more than a few minutes after the advertised start; that the duration of the lecture is unlikely to exceed an hour and a half at the maximum; and so on.
These and other expectations can be encoded in a generic lecture frame to be modified by what actually occurs during a specific lecture. This frame will include various slots, where specific values can be entered to describe the occasion under discussion. For example, a lecture frame may include slots for room location, start time, finish time, and so on.
Scripts were devised by Schank, Abelson, and their research group, and are a method of representing procedural knowledge. They are very much like frames, except that the values filling the slots must be ordered. A script is a structured representation describing a stereotyped sequence of events in a particular context. Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.
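The generic lecture frame discussed above can be sketched as a small data structure. The slot names and values here are illustrative, not from the text: defaults encode the expectations, and instantiating the frame for a specific occasion fills or overrides slots.

```python
# A sketch of a generic "lecture" frame with default slot values.
lecture_frame = {
    "speaker": None,            # filled per occasion
    "room": None,
    "start_delay_minutes": 5,   # default expectation: starts within minutes
    "duration_minutes": 90,     # unlikely to exceed an hour and a half
}

def instantiate(frame, **fillers):
    """Copy the generic frame and fill specific slot values."""
    instance = dict(frame)
    instance.update(fillers)
    return instance

# Describe one specific lecture; unfilled slots keep their defaults.
ai_lecture = instantiate(lecture_frame, speaker="Dr. Rao", room="B12")
print(ai_lecture["duration_minutes"])  # default survives: 90
print(ai_lecture["speaker"])           # filled slot: Dr. Rao
```

A script could be modelled the same way, with the additional constraint that its slots form an ordered sequence of events.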
Language: Propositional logic is the logic of propositional formulas. Propositional formulas are constructed from a set Var of elementary (atomic) propositional variables p, q, and so on, together with the connectives ¬ (not), ∧ (and), ∨ (or), and → (implies). If φ and ψ are formulas, then so are ¬φ, (φ ∧ ψ), (φ ∨ ψ), and (φ → ψ). So p is a formula, and so is ¬p. We also add the special symbol ⊥ (falsum). This definition can be written in Backus–Naur Form (BNF) notation as follows:

φ ::= p | ⊥ | ¬φ | (φ ∧ φ) | (φ ∨ φ) | (φ → φ)

The language of propositional logic is called Lprop. Brackets are important to ensure that formulas are unambiguous. In a similar way we formulate axiom schemata and inference rules by means of formula variables.

Truth Values and Truth Tables: In propositional logic, the models are truth valuations. A truth valuation determines which truth value the atomic propositions get: it is a function v : Var → {0, 1}. A proposition is true if it has the value 1, and false if it has the value 0.
The truth value of falsum is always 0, so v(⊥) = 0. Whether a complex propositional formula is true in a given model (valuation) can be calculated by means of truth functions, which take truth values as input and give truth values as output.
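A valuation and its truth functions can be sketched directly in Python. The tuple encoding of formulas below is an assumption made for the example, not notation from the text; the truth functions themselves follow the standard tables for ¬, ∧, ∨, and →.

```python
# A small sketch of truth valuations: v maps atoms to 0/1, and the value
# of a complex formula is computed recursively from truth functions.
def value(formula, v):
    """formula is nested tuples: ('not', f), ('and', f, g), ('or', f, g),
    ('implies', f, g), the constant 'falsum', or an atom name."""
    if formula == "falsum":
        return 0                       # v(falsum) = 0, always
    if isinstance(formula, str):
        return v[formula]              # atomic proposition: look up v
    op = formula[0]
    if op == "not":
        return 1 - value(formula[1], v)
    a, b = value(formula[1], v), value(formula[2], v)
    if op == "and":
        return a & b
    if op == "or":
        return a | b
    if op == "implies":
        return (1 - a) | b             # false only when a=1 and b=0

v = {"p": 1, "q": 0}
print(value(("implies", "p", "q"), v))      # p -> q is false here: 0
print(value(("or", "q", ("not", "q")), v))  # instance of a tautology: 1
```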
To each logical connective corresponds a truth function. They are defined as follows: To see that this is the case, first, we calculate the truth value of p! Second, we calculate the truth value of: A fact must start with a predicate which is an atom and end with a fullstop. The predicate may be followed by one or more arguments which are enclosed by parentheses. The arguments can be atoms in this ase, these atoms are treated as constants , numbers, variables or lists.
Arguments are separated by commas. If we consider the arguments in a fact to be objects, then the predicate of the fact describes a property of the objects. In a Prolog program, the presence of a fact indicates a statement that is true; the absence of a fact indicates a statement that is not true. A rule consists of two parts. The first part is similar to a fact (a predicate with arguments). The second part consists of other clauses (facts or rules, separated by commas) which must all be true for the rule itself to be true.
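The fact-and-rule style of a family-relations program can be mimicked in Python. This is only an illustrative sketch (the family names are made up): the sets play the role of Prolog facts such as father(jack, susan), and the functions play the role of rules whose conditions are joined by logical AND.

```python
# Facts, in the role of Prolog's father(jack, susan). and mother(ann, jack).
father = {("jack", "susan")}
mother = {("ann", "jack")}

def parent(x, y):
    # parent(X, Y) :- father(X, Y).   parent(X, Y) :- mother(X, Y).
    # The rule is true when any one of its defining clauses is true.
    return (x, y) in father or (x, y) in mother

def grandparent(x, z):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # The comma between the two conditions acts as a logical AND.
    people = {p for pair in father | mother for p in pair}
    return any(parent(x, y) and parent(y, z) for y in people)

print(parent("jack", "susan"))       # True: stated as a fact
print(grandparent("ann", "susan"))   # True: ann -> jack -> susan
```

As in Prolog, a query not supported by any fact or rule simply comes out false.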
These two parts are separated by the operator ':-'. You may interpret this operator as 'if' in English. A program describes the relationships of the members in a family with facts such as father(jack, susan). Take Rule 3 as an example: the comma between the two conditions can be considered as a logical-AND operator. You may see that both Rules 1 and 2 start with parent(X, Y). When will parent(X, Y) be true? The answer: whenever any one of these two rules is true.

Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments.
The word 'statistics', when referring to the scientific discipline, is singular, as in 'Statistics is an art'. This should not be confused with the word 'statistic', referring to a quantity (such as a mean or median) calculated from a set of data, whose plural is 'statistics'. Some consider statistics a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data, while others consider it a branch of mathematics concerned with collecting and interpreting data.
Because of its empirical roots and its focus on applications, statistics is usually considered a distinct mathematical science rather than a branch of mathematics. Much of statistics is non-mathematical: statisticians improve data quality by developing specific experiment designs and survey samples. Statistics also provides tools for prediction and forecasting through the use of data and statistical models.
Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Statistical consultants can help organizations and companies that don't have in-house expertise relevant to their particular questions.

The modern usage of the term 'neural network' often refers to artificial neural networks, which are composed of artificial neurons or nodes.
Thus the term may refer either to biological neural networks, made up of real biological neurons, or to artificial neural networks, used for solving artificial intelligence problems. Neural networks are a form of multiprocessor computer system, with simple processing elements, a high degree of interconnection, simple scalar messages, and adaptive interaction between elements. A biological neuron may have as many as 10,000 different inputs, and may send its output (the presence or absence of a short-duration spike) to many other neurons. Neurons are wired up in a three-dimensional pattern.
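The artificial counterpart of the neuron just described can be sketched in a few lines. The weights and threshold below are made-up illustrative values: many inputs are combined into a weighted sum, and the unit "spikes" (outputs 1) only when the sum reaches a threshold.

```python
# A minimal threshold-unit sketch of an artificial neuron.
def neuron(inputs, weights, threshold):
    # Weighted sum of inputs, analogous to combining many synaptic inputs.
    activation = sum(i * w for i, w in zip(inputs, weights))
    # Output is the presence (1) or absence (0) of a "spike".
    return 1 if activation >= threshold else 0

# Three inputs with different connection strengths (values illustrative).
print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # sum 0.9 -> 1
print(neuron([0, 1, 0], [0.5, 0.9, 0.4], threshold=1.0))  # sum 0.9 -> 0
```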
Real brains, however, are orders of magnitude more complex than any artificial neural network so far considered. Combining different AI methods into a hybrid system represents a powerful but complicated solution: the problem is the data exchange between the different methods, for which powerful interfaces must be created.
A KR cannot be made independent of the underlying knowledge model or knowledge base system (KBS). The knowledge management literature primarily addresses the growing importance of knowledge management for private sector organizations, but clearly knowledge-generating organizations such as federal science management and research agencies can not only benefit from this literature but also play a leadership role in furthering theory and practice in this area.
Although these knowledge-oriented organizations have been in the business of creating and furthering knowledge development, they have not necessarily developed and articulated a systemic approach to knowledge management. This is a critical omission that should be corrected.
Of all the management topics of potential relevance to public science organizations, this may be one of the most useful areas to pursue. Knowledge management is central to public science organizations. Although knowledge management has become a highly prominent topic, the term remains rather ambiguous and controversial, impeding progress in articulating what knowledge management entails and what knowledge-based organizations will look like. Many have questioned whether knowledge management is, or will ever become, a useful concept with practical application; others proclaim it is already the pivotal driver of organizational success and will only become more important in the future.
The latter point of view is persuasive, but there is a long way to go in clarifying and articulating the concept of knowledge management. The belief that knowledge management is destined to become the key to future economic success is based on the following logic: many prominent scholars note that a new economic era, referred to as the knowledge-based economy, is already underway. In this new economy, knowledge is the source of wealth. It is assumed, therefore, that knowledge management will be the new work of organizations.
Knowledge management represents a logical progression beyond information management. Information technologies, at long last, have demonstrated a notable impact on organizational performance. Knowledge management can also be seen as representing a culmination and integration of many earlier organization development ideas. It encapsulates these concepts into a larger, more holistic perspective that focuses on effectively creating and applying knowledge.
The MYCIN system was also used for the diagnosis of blood clotting diseases. It was developed at Stanford University by Edward Shortliffe, working with Bruce Buchanan, Stanley Cohen, and others, and it arose in the laboratory that had created the earlier Dendral expert system. MYCIN was never actually used in practice. This wasn't because of any weakness in its performance; as mentioned, in tests it outperformed members of the Stanford medical school faculty. Some observers raised ethical and legal issues related to the use of computers in medicine: if a program gives the wrong diagnosis or recommends the wrong therapy, who should be held responsible?
However, the greatest problem, and the reason that MYCIN was not used in routine practice, was the state of technologies for system integration, especially at the time it was developed. The program ran on a large time-shared system, available over the early Internet (ARPANet), before personal computers were developed.
In the modern era, such a system would be integrated with medical record systems, would extract answers to questions from patient databases, and would be much less dependent on physician entry of information. In the 1970s, a session with MYCIN could easily consume 30 minutes or more, an unrealistic time commitment for a busy clinician.

Chaining involves reinforcing individual responses occurring in a sequence to form a complex behavior.
It is frequently used for training behavioral sequences (or chains) that are beyond the current repertoire of the learner. The chain of responses is broken down into small steps using task analysis; parts of a chain are referred to as links. The learner's skill level is assessed by an appropriate professional. The learner is then either taught one step at a time while being assisted through the other steps (forward or backward chaining), or, if the learner can already complete a certain percentage of the steps independently, the remaining steps are all worked on during each trial (total task).
A verbal stimulus or prompt is used at the beginning of the teaching trial. In buying a soda, you pull the money out of your pocket, see the money in your hand, and then put the money in the machine. Seeing the money in your hand was both the reinforcer for the first response (getting money out of pocket) and the prompt for the next response (putting money in machine). As small chains become mastered, they can be linked together into longer sequences.
Chaining requires that the teachers present the training skill in the same order each time and is most effective when teachers are delivering the same prompts to the learner. The most common forms of chaining are backward chaining, forward chaining, and total task presentation.
Software architecture involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.
A widely cited definition is: "Software architecture encompasses the set of significant decisions about the organization of a software system, including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs, and aesthetic concerns."
Like any other complex structure, software must be built on a solid foundation. Failing to consider key scenarios, failing to design for common problems, or failing to appreciate the long term consequences of key decisions can put your application at risk. Modern tools and platforms help to simplify the task of building applications, but they do not replace the need to design your application carefully, based on your specific scenarios and requirements.
The risks exposed by poor architecture include software that is unstable, is unable to support existing or future business requirements, or is difficult to deploy or manage in a production environment.
Systems should be designed with consideration for the user, the system (the IT infrastructure), and the business goals. For each of these areas, you should outline key scenarios and identify important quality attributes (for example, reliability or scalability) and key areas of satisfaction and dissatisfaction.
Where possible, develop and consider metrics that measure success in each of these areas. User, Business and System Goals. Tradeoffs are likely, and a balance must often be found between competing requirements across these three areas.
For example, the overall user experience of the solution is very often a function of the business and the IT infrastructure, and changes in one or the other can significantly affect the resulting user experience.
Similarly, changes in the user experience requirements can have significant impact on the business and IT infrastructure requirements. Performance might be a major user and business goal, but the system administrator may not be able to invest in the hardware required to meet that goal 100 percent of the time. A balance point might be to meet the goal only 80 percent of the time.
Architecture focuses on how the major elements and components within an application are used by, or interact with, other major elements and components within the application.
The selection of data structures and algorithms or the implementation details of individual components are design concerns. Architecture and design concerns very often overlap.
Rather than use hard and fast rules to distinguish between architecture and design, it makes sense to combine these two areas. In some cases, decisions are clearly more architectural in nature. In other cases, the decisions are more about design, and how they help you to realize that architecture. The process of storing and retrieving information depends heavily on the representation and organization of the information.
Moreover, the utility of knowledge can also be influenced by how the information is structured. For example, a bus schedule can be represented in the form of a map or a timetable.
On the one hand, a timetable provides quick and easy access to the arrival time for each bus, but does little for finding where a particular stop is situated. On the other hand, a map provides a detailed picture of each bus stop's location, but cannot efficiently communicate bus schedules. Both forms of representation are useful, but it is important to select the representation most appropriate for the task at hand. Similarly, knowledge acquisition can be improved by considering the purpose and function of the desired information.
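The bus-schedule example can be made concrete with two small data structures. The route, times, and coordinates below are invented for illustration: the timetable form answers "when" queries in one lookup, while the map form answers "where" queries the timetable cannot.

```python
# Two representations of the same underlying bus knowledge.
timetable = {"route 5": ["08:00", "08:30", "09:00"]}      # when each bus runs
stop_map = {"Main St": (2, 3), "Park Ave": (5, 1)}        # grid coordinates

def next_arrivals(route):
    # Easy with the timetable representation, awkward with the map.
    return timetable[route]

def stop_location(stop):
    # Easy with the map representation, awkward with the timetable.
    return stop_map[stop]

print(next_arrivals("route 5")[0])   # 08:00
print(stop_location("Main St"))      # (2, 3)
```

Each representation makes one kind of query cheap and the other expensive, which is exactly the point about choosing the representation to fit the task.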
Knowledge Representation and Organization There are numerous theories of how knowledge is represented and organized in the mind, including rule-based production models, distributed networks, and propositional models. However, these theories are all fundamentally based on the concept of semantic networks.
A semantic network is a method of representing knowledge as a system of connections between concepts in memory. Semantic Networks According to semantic network models, knowledge is organized based on meaning, such that semantically related concepts are interconnected.
Knowledge networks are typically represented as diagrams of nodes (i.e., concepts) and links (i.e., relations). The nodes and links are given numerical weights to represent their strengths in memory. In Figure 1, these link strengths are represented in terms of line width.
Similarly, some nodes in Figure 1 are printed in bold type to represent their strength in memory. Mental excitation, or activation, spreads automatically from one concept to other related concepts, which are thereby primed and thus more easily recognized or retrieved from memory.
For example, in David Meyer and Roger Schvaneveldt's study (a typical semantic priming study), a series of words was presented one at a time.
A word is more quickly recognized if it follows a semantically related word. This result supports the assumption that se mantically related concepts are more strongly connected than unrelated concepts.
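One step of spreading activation over a weighted semantic net can be sketched as follows. The concepts and link weights are illustrative values chosen for the classic doctor/nurse pairing, not data from the study: activating a source concept primes its neighbours in proportion to link strength.

```python
# A sketch of spreading activation: links map a concept to related
# concepts with connection strengths (weights are illustrative).
links = {
    "doctor": {"nurse": 0.9, "bread": 0.1},
    "nurse":  {"doctor": 0.9, "hospital": 0.7},
}

def prime(source, activation=1.0):
    """Spread one step of activation from source to related concepts."""
    return {node: activation * w for node, w in links.get(source, {}).items()}

primed = prime("doctor")
# 'nurse' receives far more activation than 'bread', so it would be
# recognized faster after 'doctor', matching the priming result.
print(primed["nurse"] > primed["bread"])  # True
```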
A good knowledge representation covers six basic characteristics, including the following:

Coverage, which means the KR covers a breadth and depth of information. Without wide coverage, the KR cannot determine anything or resolve ambiguities.

Understandable by humans. KR is viewed as a natural language, so the logic should flow freely. It should support modularity and hierarchies of classes (polar bears are bears, which are animals). It should also have simple primitives that combine in complex forms.

Consistency. If John closed the door, it can also be interpreted as the door was closed by John. By being consistent, the KR can eliminate redundant or conflicting knowledge.
To gain a better understanding of why these characteristics represent a good knowledge representation, think about how an encyclopedia (e.g., Wikipedia) is structured. There are millions of articles (coverage), and they are sorted into categories, content types, and similar topics (understandable). It redirects different titles with the same content to the same article (consistency).
It is efficient, it is easy to add new pages or update existing ones, and it allows users on their mobile phones and desktops to view its knowledge base.

A set of rules is complete if it can be used to infer every valid conclusion, and sound if it never infers an invalid conclusion.
A sound and complete set of rules need not include every rule in the following list. In logic, a rule of inference is a pattern that takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference modus ponens takes two premises, one in the form 'If p then q' and another in the form 'p', and returns the conclusion 'q'. The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true under an interpretation, then so is the conclusion.
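Modus ponens can be implemented as a purely syntactic operation, matching the point that a rule of inference manipulates forms, not meanings. The tuple encoding of conditionals below is an assumption made for the sketch:

```python
# A syntactic sketch of modus ponens: from a premise p and a premise
# of the form ('if', p, q), conclude q. No semantics is consulted.
def modus_ponens(premises):
    conclusions = set()
    for prem in premises:
        # Match the syntactic pattern "If p then q" where p is also a premise.
        if isinstance(prem, tuple) and prem[0] == "if" and prem[1] in premises:
            conclusions.add(prem[2])
    return conclusions

premises = {"p", ("if", "p", "q")}
print(modus_ponens(premises))  # {'q'}
```

Because the rule is valid, whenever the premises are true under an interpretation, the conclusion it produces is true as well; the code itself never needs to know this.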
Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general designation. But a rule of inference's action is purely syntactic, and it does not need to preserve any semantic property. Usually only rules that are recursive are important, i.e., rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulas according to the rule.
An example of a rule that is not effective in this sense is the infinitary ω-rule.

KQML was developed in the early 1990s as part of the DARPA Knowledge Sharing Effort, which was aimed at developing techniques for building large-scale knowledge bases that are shareable and reusable. While originally conceived of as an interface to knowledge-based systems, it was soon repurposed as an agent communication language.
Decisions and actions in knowledge-based systems come from manipulation of the knowledge: the known facts in the knowledge base must be located, compared, and altered in some way. This process may set up other subgoals and require further inputs, and so on until a final solution is found. The manipulations are the computational equivalent of reasoning, which requires a form of inference or deduction using the knowledge and inference rules. All forms of reasoning require a certain amount of searching and matching.
The searching and matching operations consume the greatest amount of computation time in AI systems, so it is important to have techniques that limit the amount of search and matching required to complete any given task.

In the original illustrative example of the Turing test, a human judge engages in natural language conversations with a human and a machine. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer; it checks how closely the answer resembles typical human answers.
The conversation is limited to a text-only channel, such as a computer keyboard and screen, so that the result is not dependent on the machine's ability to render words into audio. The test was introduced by Alan Turing in his paper 'Computing Machinery and Intelligence', which opens with the words: "I propose to consider the question, Can machines think?"
Since thinking is difficult to define, Turing chooses to replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. Turings new question is: Are there imaginable digital computers which would do well in the imitation game? This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that machines can think.
KQML is a language and protocol for communication among software agents and knowledge-based systems.
Knowledge acquisition is often the major obstacle in building an expert system (ES). There are three main topic areas central to knowledge acquisition that require consideration in all ES projects. First, the domain must be evaluated to determine whether the type of knowledge in the domain is suitable for an ES.
Second, the source of expertise must be identified and evaluated to ensure that the specific level of knowledge required by the project is provided. Third, if the major source of expertise is a person, the specific knowledge acquisition techniques and participants need to be identified. ES should be heuristic and readily distinguishable from algorithmic programs and databases.
Further, an ES should be based on expert knowledge, not just competent or skillful behavior.

Domains: Several domain features are frequently listed for consideration in determining whether an ES is appropriate for a particular problem domain.
Several of these caveats relate directly to knowledge acquisition. First, bona fide experts, people with generally acknowledged expertise in the domain, must exist. Second, there must be general consensus among experts about the accuracy of solutions in the domain. Third, experts in the domain must be able to communicate the details of their problem-solving methods. Fourth, the domain should be narrow and well defined, and solutions within the domain must not require common sense.
The object of a knowledge representation is to express knowledge in a computer tractable form, so that it can be used to enable our AI agents to perform well. The syntax of a language defines which configurations of the components of the language constitute valid sentences.
The semantics defines which facts in the world the sentences refer to, and hence the statement about the world that each sentence makes. Representational Adequacy: The ability to represent all the different kinds of knowledge that might be needed in that domain. Inferential Adequacy: The ability to manipulate the representational structures to derive new structures corresponding to new knowledge from existing structures.
Inferential Efficiency: The ability to incorporate additional information into the knowledge structure which can be used to focus the attention of the inference mechanisms in the most promising directions.
Acquisitional Efficiency: The ability to acquire new information easily. Ideally the agent should be able to control its own knowledge acquisition, but direct insertion of information by a knowledge engineer would be acceptable. Finding a system that optimizes these for all possible domains is not going to be feasible. In practice, the theoretical requirements for good knowledge representations can usually be achieved by dealing appropriately with a number of practical requirements: The representations need to be complete so that everything that could possibly need to be represented can easily be represented.
They should make the important objects and relations explicit and accessible so that it is easy to see what is going on, and how the various components interact. They should suppress irrelevant detail so that rarely used details don't introduce unnecessary complications, but are still available when needed.
They should expose any natural constraints so that it is easy to express how one object or relation influences another. A good representation has four parts. The lexical part determines which symbols or words are used in the representation's vocabulary. The structural or syntactic part describes the constraints on how the symbols can be arranged.
The semantic part establishes a way of associating real-world meanings with the representations. The procedural part specifies the access procedures that enable representations to be created and modified and questions to be answered using them. Natural language is extremely expressive: we can express virtually everything in it, including real-world situations, pictures, symbols, ideas, emotions, and reasoning.
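The four parts of a representation listed above can be made concrete with a toy example. The sketch below uses an invented family-relations domain; the names and functions are assumptions for illustration only.

```python
# A toy illustration of the four parts of a knowledge representation,
# using a tiny invented "family" domain.

# Lexical part: the vocabulary of allowed symbols.
VOCABULARY = {"tom", "bob", "ann", "parent"}

# Structural part: a well-formed fact is a (relation, subject, object) triple
# drawn from the vocabulary.
def well_formed(fact):
    relation, a, b = fact
    return {relation, a, b} <= VOCABULARY

# Semantic part: the knowledge base associates triples with truths about
# the world ("tom is a parent of bob", and so on).
KB = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# Procedural part: access procedures that answer questions from the KB.
def is_grandparent(x, z):
    return any(("parent", x, y) in KB and ("parent", y, z) in KB
               for y in VOCABULARY)

print(is_grandparent("tom", "ann"))  # True
```

Even in this toy form, each of the four parts is visible: the vocabulary, the triple grammar, the world facts, and the query procedure.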
Stimulus-response learning (Hebb rule): if a synapse repeatedly becomes active at about the same time that the postsynaptic neuron fires, changes will take place in the structure or chemistry of the synapse that strengthen it (see Figure). In most instances, the above suggestions are considered and modified to suit the particular project. The remainder of this section describes a range of knowledge acquisition techniques that have been used successfully in the development of an ES.
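Returning briefly to the Hebb rule mentioned above: in its simplest computational form it is a weight update proportional to the product of presynaptic and postsynaptic activity. The sketch below uses an arbitrary learning rate and activity values chosen only for illustration.

```python
# A minimal sketch of the Hebb rule: when presynaptic activity x and
# postsynaptic activity y occur together, the connecting weight grows.
# The learning rate eta and the activity values are illustrative choices.

def hebbian_update(w, x, y, eta=0.1):
    """Return the weight after one Hebbian step: w + eta * x * y."""
    return w + eta * x * y

w = 0.5
# Synapse repeatedly active while the postsynaptic neuron fires (x=1, y=1):
for _ in range(3):
    w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))  # 0.8
```

Note that this basic form only strengthens weights; practical variants add decay or normalization to keep weights bounded.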
Operational Goals
After an evaluation of the problem domain shows that an ES solution is appropriate and feasible, realistic goals for the project can be formulated. An ES's operational goals should define exactly what level of expertise its final product should deliver, who the expected user is, and how the product is to be delivered.
If participants do not have a shared concept of the project's operational goals, knowledge acquisition is hampered.
Pre-training
Pre-training the knowledge engineer in the domain can be important. In the past, knowledge engineers have often been unfamiliar with the domain.
As a result, the development process was greatly hindered. If a knowledge engineer has limited knowledge of the problem domain, then pre-training in the domain is very important and can significantly boost the early development of the ES.
Knowledge Document
Once development begins on the knowledge base, the process should be well documented. In addition to a tutorial document, a knowledge document that succinctly states the project's current knowledge base should be kept. Conventions should be established for the document, such as keeping the rules in quasi-English format, using standard domain jargon, giving descriptive names to the rules, and including supplementary, explanatory clauses with each rule.
Caution: the rules should be grouped into natural subdivisions, and the entire document should be kept current.
Scenarios
An early goal of knowledge acquisition should be the development of a series of well-developed scenarios that fully describe the kinds of procedures that the expert goes through in arriving at different solutions.
If reasonably complete case studies do not exist, then one goal of pretraining should be to become so familiar with the domain that the interviewer can compose realistic scenarios. Anecdotal stories that can be developed into scenarios are especially useful because they are often examples of unusual interactions at the edges of the domain. Familiarity with several realistic scenarios can be essential to understanding the expert in early interviews and the key to structuring later interviews.
Finally, they are ultimately necessary for validation of the system.
Interviews
Experts are usually busy people, and interviews held in the expert's work environment are likely to be interrupted. To maximize access to the expert and minimize interruptions, it can be helpful to hold interviews away from the expert's normal workplace. Another possibility is to hold meetings after work hours and on weekends. At least initially, audiotape recordings ought to be made of the interviews, because notes taken during an interview can be incomplete or suggest inconsistencies that can be clarified by listening to the tape.
The knowledge engineer should also be alert to fatigue and limit interviews accordingly. In early interviews, the format should be unstructured in the sense that discussion can take its own course. The knowledge engineer should resist the temptation to impose personal biases on what the expert is saying. During early discussions, experts are often asked to describe the tasks encountered in the domain and to go through example tasks explaining each step.
An alternative or supplemental approach is simply to observe the expert on the job solving problems without interruption, or to have the expert talk aloud during performance of a task, with or without interruption. These procedures are variations of protocol analysis and are useful only with experts who primarily use verbal thought processes to solve domain problems.
For shorter-term projects, initial interviews can be formalized to simplify rapid prototyping. One such technique is a structured interview in which the expert is asked to list the variables considered when making a decision. Next, the expert is asked to list the possible outcomes (solutions) of decision making. Finally, the expert is asked to connect variables to one another, solutions to one another, and variables to solutions through rules. A second technique is called twenty questions.
With this technique, the knowledge engineer develops several scenarios typical of the domain before the interview. At the beginning of the interview, the expert asks whatever questions are necessary to understand the scenario well enough to determine the solution.
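The variables, solutions, and connecting rules elicited in a structured interview, as described above, can be encoded directly as condition-conclusion rules and exercised with simple forward chaining. The sketch below invents a car-troubleshooting domain purely for illustration; the rule contents are assumptions, not elicited knowledge.

```python
# A sketch of encoding structured-interview output (variables, solutions,
# and connecting rules) and running it with naive forward chaining.
# The troubleshooting domain and rule contents are invented.

RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"engine-wont-start", "lights-dim"}, "battery-flat"),
    ({"battery-flat"}, "recharge-or-replace-battery"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"engine-wont-start", "lights-dim"}, RULES)
print("recharge-or-replace-battery" in result)  # True
```

Chaining the intermediate conclusion (battery-flat) into a recommendation mirrors how the expert's rules connect variables to variables and variables to solutions.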