The following excerpts discuss elicitation techniques.

Knowledge Acquisition

William P. Wagner, in Encyclopedia of Information Systems, 2003

I. Knowledge Acquisition

The study and development of expert systems is one of the more mature fields within artificial intelligence [AI]. These systems have either been developed completely in-house, purchased as proprietary software, or developed using an expert system shell. The most commonly cited problems in developing these systems are the unavailability of both the experts and the knowledge engineers and difficulties with the rule extraction process. Within the field of AI this has been called the “knowledge acquisition” problem and has been identified as one of the biggest bottlenecks in the expert system development process. Simply stated, the problem is how to efficiently acquire the specific knowledge for a well-defined problem domain from one or more experts and represent it in the appropriate computer format.

Because of what has been called the “paradox of expertise,” experts have often proceduralized their knowledge to the point where they have difficulty in explaining exactly what they know and how they know it. However, new empirical research in the field of expert systems reveals that certain KA techniques are significantly more efficient than others in different KA domains and scenarios. Adelman, then Dhaliwal and Benbasat, and later Holsapple and Wagner identified six sets of determinants of the quality of the resulting knowledge base.

1. Domain experts
2. Knowledge engineers
3. Knowledge representation schemes
4. Contextual factors
5. Knowledge elicitation methods
6. Problem domains.

Each of these determinants has its own body of research and implications for developing knowledge based applications.

Conceptions of KA phenomena are complicated tremendously when we consider all the possible sources and combinations of expertise, the nature of the entity that elicits and structures this expertise, and all the different factors that may have a moderating influence on the process. When all these issues are taken into consideration, it becomes apparent that the classic KA scenario, where a knowledge engineer attempts to elicit and structure the knowledge of a human expert, is a complex phenomenon with respect to the number of parameters involved.

I.A. Knowledge Acquisition Processes

KA is mentioned as a problem within fields as diverse as machine learning, natural language processing, and expert systems research. However, the most typical KA scenario is where a knowledge engineer engages in an unstructured interview with one or more human experts. After these initial sessions, the engineer may use the results of these sessions to develop tools for structuring more in-depth KA sessions, such as focused lists of questions, simulations, or even expert system prototypes to use as the basis for later sessions. Researchers have used a systems analytic approach to modeling the KA process and have arrived at a variety of models that are generally in agreement. The KA process is typically described as being iterative with loops that depend upon the size of the system to be built, the depth and breadth of the task to be supported, and the quality of the knowledge as it is acquired. Steps in the process are identified as:

1. Conduct an initial unstructured interview.
2. The knowledge engineer analyzes the results of the interview.
3. The knowledge engineer represents the resulting knowledge.
4. Experts test the system, and their comments on its performance are recorded.
5. If the knowledge base is complete, stop.
6. Acquire the missing knowledge from an expert.
7. Represent the missing knowledge in the knowledge base.
8. Return to step 4 until finished.
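The iterative loop above can be sketched in code. The following is a minimal illustration, not from the original text; `is_complete` and `elicit_missing` are hypothetical stand-ins for the human judgment and interview steps.

```python
def acquire_knowledge(initial_interview, is_complete, elicit_missing, max_rounds=100):
    """Sketch of the iterative KA loop: grow a knowledge base until complete."""
    knowledge_base = set(initial_interview)      # steps 1-3: analyze and represent
    for _ in range(max_rounds):                  # steps 4-8: test/elicit loop
        if is_complete(knowledge_base):          # step 5: stop when complete
            break
        # steps 6-7: acquire the missing knowledge and represent it
        knowledge_base |= set(elicit_missing(knowledge_base))
    return knowledge_base

# Toy usage: the "expert" holds three rules; each round surfaces one more.
target = {"rule-A", "rule-B", "rule-C"}
kb = acquire_knowledge(
    initial_interview=["rule-A"],
    is_complete=lambda kb: kb == target,
    elicit_missing=lambda kb: [next(iter(target - kb))],
)
```

The loop bound `max_rounds` guards against the case where the completeness test never succeeds, mirroring the practical observation that KA iterations depend on the size and depth of the task.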

In reality, KA often involves a number of different KA techniques used in multiple KA sessions or episodes with multiple knowledge sources including texts and various human experts. A classic example of how complicated KA can be in reality is the case of how the ExperTAX system was developed by Coopers & Lybrand. This classic case recorded by Shpilberg et al. describes an expert system that was designed to help guide staff accountants out in the field collecting tax information. This case is somewhat unique even today in that it details the overall activities involved in development and is not limited to illustrating specific KA techniques used or novel features of the system.

As is usually the case, the development of ExperTAX began with an unspecified number of unstructured exchanges with multiple experts [20 experts participated overall]. After this, the knowledge engineering team set up a simulated tax information gathering case with three different accountants playing roles. The team videotaped this session and several subsequent variations on it. Other interactive KA sessions were held using a computer to help in the design of the user interface. After these sessions, a prototype of ExperTAX was developed. This prototype then served as the basis of all the KA sessions that followed with individual experts, leading to significant additions. The whole development process spanned about two years and nearly six person-years of labor. This particular case describes in unusual detail how varied KA can be in practice with respect to the combinations of KA techniques and participants at each stage of the project. Because KA is such a mix of activities, it has been difficult for researchers to evaluate how these activities may best be implemented.


URL: //www.sciencedirect.com/science/article/pii/B0122272404001003

GIS Methods and Techniques

Sven Fuhrmann, in Comprehensive Geographic Information Systems, 2018

1.30.3.3 User Requirements

While one focus in the UCD process is on the use of the application, another focus should be on the users. Establishing user requirements is not a simple task, especially if the user group is very diverse [Vredenburg et al., 2002]. The context elicitation methods outlined earlier generally allow user-specific questions to be embedded; however, several methods provide an additional basis for assessing and documenting user requirements. Focus groups are often used to bring a group of potential users together to discuss their requirements for an improved user experience. Focus groups are low-cost and informal qualitative meetings, usually consisting of 6–10 domain-specific users and one moderator. In the course of a session, users are introduced to the topic of interest and provided with prompting questions to start the discussion. The general idea is that the group discusses a topic freely and formulates ideas while the moderator leads the discussion and keeps it focused. Usually a set of predetermined questions is prepared, including some probes to encourage everyone to express different views and to stimulate discussion among all participants [Krueger, 1998]. Focus groups that consist of individuals who do not know one another are considered to be more vivid and free from hierarchical burden [Monmonier and Gluck, 1994].

Scenarios have also become a popular method for exploring future uses of an application and its user requirements. Carroll [2000] describes scenarios as “stories about people and their activities.” Usually these stories consist of user-interaction narratives: descriptions of what users do and experience as they try to make use of hardware and software. Nielsen [1995] describes a scenario as a description of an individual user who interacts with an application to complete a certain objective. Tasks and interactions are usually described as a sequence of actions and events, representing the interactions a user performs. Scenarios, based on the context-of-use descriptions, can be realistic accounts of what a user tries to achieve without requiring a prototype. Rather, they describe examples of future use and thus help establish user requirements. Scenarios are especially helpful at the beginning of the UCD process.


URL: //www.sciencedirect.com/science/article/pii/B9780124095489096081

Automatic recognition of self-reported and perceived emotions

Biqiao Zhang, Emily Mower Provost, in Multimodal Behavior Analysis in the Wild, 2019

20.4.1 Emotion-elicitation methods

The difficulty of obtaining emotionally expressive data is a well-known problem in the field of automatic emotion recognition. Researchers have adopted various methods for eliciting emotions that allow for different levels of control and naturalness, such as acting, emotion induction using images, music, or videos, and interactions [human–machine interactions, human–human conversations].

One of the earliest emotion-elicitation methods was to ask actors to perform using stimuli with fixed lexical content [12,37] or to create posed expressions [50,62,133,134]. One advantage of this approach is that researchers have more control over the microphone and camera positions, the lexical content, and the distribution of emotions. A criticism of acted or posed emotion is that there are differences between spontaneous and acted emotional displays [19,122].

However, many of these criticisms can be mitigated by changing the manner in which acted data are collected. Supporters have argued that the main problem may not be the use of actors themselves, but the methodologies and materials used in the early acted corpora [15]. They argue that the quality of acted emotion corpora can be enhanced by changing the acting styles used to elicit the data and by making a connection to real-life scenarios, including human interaction. Recent acted emotion datasets have embraced these alternative collection paradigms and have moved towards increased naturalness and subtlety [14,17]. One example is the popular audiovisual emotion corpus, Interactive Emotion Motion Capture [IEMOCAP], which was collected by recruiting trained actors, creating contexts for the actors using a dyadic setting, and allowing for improvisation [14]. The same elicitation strategy has been successfully employed in the collection of the MSP-Improv corpus [17].

Various methods have been proposed for eliciting spontaneous emotional behavior. One popular approach is the passive emotion-induction method [54,58,100,111,121,129,131]. The goal is to cause a participant to feel a given emotion using selected images, music, short videos, or movie clips known a priori to induce certain emotions. This method is more suitable for analysis of modalities aside from audio, because the elicitation process often does not involve active verbal responses from the participants. A more active version is induction by image/music/video using emotion-induction events. For example, Merkx et al. used a multi-player first-person shooter computer game for eliciting emotional behaviors [69]. The multi-player setting of the game encouraged conversation between players, and the game was embedded with emotion-inducing events.

Another method often adopted by audiovisual emotion corpora is emotion elicitation through interaction, including human–computer interaction and human–human interaction. For example, participants have been asked to interact with a dialog system [57,59]. The FAU-Aibo dataset collected interactions between children and a robot dog [6]. The Sensitive Artificial Listener [SAL] portion of the SEMAINE dataset used four agents with different characters, played by human operators, to elicit different emotional responses from the participants [67]. Examples of human–human interaction include interviews on topics related to personal experience [91] and collaborative tasks [88].

An emerging trend is that researchers are moving emotional data collections outside of laboratory settings, a paradigm referred to as in the wild. For example, the Acted Facial Expressions In The Wild [AFEW] dataset selected video clips from movies [26], while the Vera am Mittag [VAM] dataset [40] and the Belfast Naturalistic dataset [110] used recordings from TV chat shows. User-generated content on social networks has become another source for creating text-emotion datasets. Researchers curate content published on social networks, such as blogs, Twitter, and Facebook, for emotion detection [86,87,89].

Finally, researchers use combinations of known “best practices” for collecting behaviors associated with different emotions. For example, in the BP4D-Spontaneous dataset, spontaneous emotion was elicited using a series of tasks, including an interview, video-clip viewing and discussion, a startle probe, improvisation, threat, a cold pressor, insult, and smell [139]. These tasks were intended to induce happiness or amusement, sadness, surprise or startle, embarrassment, fear or nervousness, physical pain, anger or upset, and disgust, respectively.


URL: //www.sciencedirect.com/science/article/pii/B9780128146019000274

Elicitation of Probabilities and Probability Distributions

L.J. Wolfson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Eliciting Prior Distributions

Eliciting prior distributions entails quantifying the subjective beliefs of an individual into a statistical distribution [see Distributions, Statistical: Approximations]. Kadane and Wolfson [1998] discuss the distinction between structural and predictive methods of doing this. In structural methods, an expert is asked to directly assess parameters or moments of the prior distribution; in predictive methods, elicitees are asked questions about aspects of the prior predictive distribution; the predictive distribution is

f_Y(y \mid x) = \int_\Theta f_Y(y \mid \theta) \, f_\Theta(\theta \mid x) \, d\theta

and when no data has been observed [i.e., x={Ø}], this is known as the prior predictive distribution. In most complex problems, the predictive method is preferable, since it is more suited to the prescription for good elicitation methods outlined in Sect. 2. The functional form of the prior distribution is usually selected to be conjugate to the likelihood chosen to model the data.
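As a concrete illustration of the predictive distribution (the Beta-Binomial pairing here is our example, not the chapter's): for a Binomial likelihood with a conjugate Beta(a, b) prior, the prior predictive integral has a closed form, so it can be computed without numerical integration.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log B(a, b) via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def prior_predictive_binomial(y, n, a, b):
    """P(Y = y): the integral of Binom(y | n, theta) * Beta(theta | a, b)
    over theta, which reduces to the Beta-Binomial mass function."""
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

# With a uniform Beta(1, 1) prior, every count 0..n is equally likely a priori.
p = prior_predictive_binomial(3, 10, 1, 1)   # = 1/11
```

Asking an elicitee "how many successes do you expect in 10 trials?" probes exactly this distribution, which is why predictive methods can avoid asking directly about the unobservable parameter theta.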

The elicitation is generally executed by asking assessors to identify quantile-probability pairs [q, p], where p = P_X[X ≤ q].
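One common way to turn elicited quantile-probability pairs into a distribution is to solve for the parameters that reproduce two elicited pairs. The sketch below assumes a normal distribution is appropriate; the function name and the specific elicited values are our own illustration, using Python's standard-library NormalDist.

```python
from statistics import NormalDist

def fit_normal_from_quantiles(q1, p1, q2, p2):
    """Find mu, sigma of a normal distribution satisfying P(X <= q1) = p1
    and P(X <= q2) = p2, by matching standard-normal z-scores."""
    std = NormalDist()                         # standard normal, mu=0, sigma=1
    z1, z2 = std.inv_cdf(p1), std.inv_cdf(p2)  # z-scores of the two probabilities
    sigma = (q2 - q1) / (z2 - z1)              # solve mu + sigma*z_i = q_i
    mu = q1 - sigma * z1
    return mu, sigma

# Expert states a median of 100 and a 90th percentile of 120.
mu, sigma = fit_normal_from_quantiles(100, 0.5, 120, 0.9)
```

With more than two elicited pairs the system is overdetermined, and a least-squares fit over the quantile equations is typically used instead of an exact solve.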
