Elicitation Techniques

Knowledge Acquisition

William P. Wagner, in Encyclopedia of Information Systems, 2003

I. Knowledge Acquisition

The study and development of expert systems is one of the more mature fields within artificial intelligence (AI). These systems have been developed completely in-house, purchased as proprietary software, or built using an expert system shell. The most commonly cited problems in developing these systems are the unavailability of both the experts and the knowledge engineers and difficulties with the rule extraction process. Within the field of AI this has been called the “knowledge acquisition” problem and has been identified as one of the biggest bottlenecks in the expert system development process. Simply stated, the problem is how to efficiently acquire the specific knowledge for a well-defined problem domain from one or more experts and represent it in the appropriate computer format.

Because of what has been called the “paradox of expertise,” experts have often proceduralized their knowledge to the point where they have difficulty in explaining exactly what they know and how they know it. However, new empirical research in the field of expert systems reveals that certain KA techniques are significantly more efficient than others in different KA domains and scenarios. Adelman, then Dhaliwal and Benbasat, and later Holsapple and Wagner identified six sets of determinants of the quality of the resulting knowledge base.

1. Domain experts
2. Knowledge engineers
3. Knowledge representation schemes
4. Contextual factors
5. Knowledge elicitation methods
6. Problem domains.

Each of these determinants has its own body of research and implications for developing knowledge based applications.

Conceptions of KA phenomena are complicated tremendously when we consider all the possible sources and combinations of expertise, the nature of the entity that elicits and structures this expertise, and all the different factors that may have a moderating influence on the process. When all these issues are taken into consideration, it becomes apparent that the classic KA scenario, where a knowledge engineer attempts to elicit and structure the knowledge of a human expert, is a complex phenomenon with respect to the number of parameters involved.

I.A. Knowledge Acquisition Processes

KA is mentioned as a problem within fields as diverse as machine learning, natural language processing, and expert systems research. However, the most typical KA scenario is one in which a knowledge engineer engages in an unstructured interview with one or more human experts. After these initial sessions, the engineer may use their results to develop tools for structuring more in-depth KA sessions, such as focused lists of questions, simulations, or even expert system prototypes to use as the basis for later sessions. Researchers have used a systems analytic approach to modeling the KA process and have arrived at a variety of models that are generally in agreement. The KA process is typically described as being iterative, with loops that depend upon the size of the system to be built, the depth and breadth of the task to be supported, and the quality of the knowledge as it is acquired. Steps in the process are identified as follows (a simplified sketch of the resulting loop appears after the list):

1. Conduct an initial unstructured interview.
2. The knowledge engineer analyzes the results of the interview.
3. The knowledge engineer represents the resulting knowledge.
4. Experts test the system, and their comments on its performance are recorded.
5. If the knowledge base is complete, stop.
6. Acquire the missing knowledge from an expert.
7. Represent the missing knowledge in the knowledge base.
8. Return to step 4 until finished.
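To make the iterative character of this process concrete, the following Python sketch simulates the loop. It is illustrative only: the expert is modeled as a simple dictionary of hypothetical rules and the interview and review steps as functions over it, whereas in practice these are human activities, not library calls.

```python
# Minimal, runnable simulation of the iterative KA loop described above.
# The "expert" is a dictionary of rules and the knowledge base a growing
# subset of it; in reality these steps are carried out by people.

def initial_interview(expert_rules, n_initial=2):
    """Steps 1-3: an unstructured interview captures only part of the expertise."""
    return dict(list(expert_rules.items())[:n_initial])

def expert_review(knowledge_base, expert_rules):
    """Step 4: the expert tests the system and reports what is missing."""
    return [rule for rule in expert_rules if rule not in knowledge_base]

def acquire_knowledge(expert_rules):
    knowledge_base = initial_interview(expert_rules)
    while True:
        missing = expert_review(knowledge_base, expert_rules)
        if not missing:                            # step 5: complete, stop
            break
        rule = missing[0]                          # step 6: acquire missing knowledge
        knowledge_base[rule] = expert_rules[rule]  # step 7: represent it
        # step 8: return to the review step
    return knowledge_base

# Hypothetical domain rules standing in for a human expert's knowledge
expert = {
    "missed payments > 2": "high credit risk",
    "income > 50k and no missed payments": "low credit risk",
    "employment < 1 year": "medium credit risk",
}
print(acquire_knowledge(expert))
```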

In reality, KA often involves a number of different KA techniques used in multiple KA sessions or episodes with multiple knowledge sources, including texts and various human experts. A classic example of how complicated KA can be in practice is the development of the ExperTAX system by Coopers & Lybrand. This case, recorded by Shpilberg et al., describes an expert system designed to guide staff accountants in the field as they collect tax information. The case is somewhat unique even today in that it details the overall activities involved in development and is not limited to illustrating specific KA techniques used or novel features of the system.

As is usually the case, the development of ExperTAX began with an unspecified number of unstructured exchanges with multiple experts (20 experts participated overall). After this, the knowledge engineering team decided to set up a simulated tax information gathering case with three different accountants playing roles. The team videotaped this session and several subsequent variations on it. Other interactive KA sessions were held using a computer to help in the design of the user interface. After these different sessions, a prototype of ExperTAX was developed. This then served as the basis of all the KA sessions that followed with individual experts, leading to significant additions. The whole development process spanned about 2 years and almost six person-years of labor. This particular case describes in unusual detail how varied KA can be in reality with respect to the combinations of KA techniques and participants at each stage of the project. Because KA is such a mix of activities, in actual practice it has been difficult for researchers to evaluate how these activities may be best implemented.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001003

GIS Methods and Techniques

Sven Fuhrmann, in Comprehensive Geographic Information Systems, 2018

1.30.3.3 User Requirements

While one focus in the UCD process is on the use of the application, another focus should be on the users. Establishing user requirements is not a simple task, especially if the user group is very diverse (Vredenburg et al., 2002). The context-of-use elicitation methods outlined earlier generally allow user-specific questions to be embedded; however, several additional methods provide a further basis for assessing and documenting user requirements. Focus groups are often used to bring a group of potential users together to discuss their requirements for an improved user experience. Focus groups are low-cost and informal qualitative meetings, usually consisting of 6–10 domain-specific users and one moderator. In the course of a session, users are introduced to the topic of interest and provided with prompting questions to start the discussion. The general idea is that the group discusses a topic freely and formulates ideas while the moderator leads the discussion and keeps it focused. Usually a set of predetermined questions is prepared, including some probes to encourage everyone to express different views and to stimulate the discussion between all participants (Krueger, 1998). Focus groups that consist of individuals who do not know one another are considered to be more vivid and free from hierarchical burden (Monmonier and Gluck, 1994).

Scenarios have also become a popular method to explore future uses of an application and user requirements. Carroll (2000) describes scenarios as “stories about people and their activities.” Usually these stories consist of user-interaction narratives, which are descriptions of what users do and experience as they try to make use of hardware and software. Nielsen (1995) describes scenarios as a description of an individual user who interacts with an application to complete a certain objective. Usually tasks and interactions are described as a sequence of actions and events, representing the interactions that a user performs. Scenarios, based on the context-of-use descriptions, can be realistic descriptions of what a user tries to achieve without having to develop a prototype. Rather, they allow examples of future use to be described and thus help establish user requirements. Scenarios are very helpful, especially at the beginning of the UCD process.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096081

Automatic recognition of self-reported and perceived emotions

Biqiao Zhang, Emily Mower Provost, in Multimodal Behavior Analysis in the Wild, 2019

20.4.1 Emotion-elicitation methods

The difficulty of obtaining emotionally expressive data is a well-known problem in the field of automatic emotion recognition. Researchers have adopted various methods that allow for different levels of control and naturalness when eliciting emotions, such as acting, emotion induction using images, music, or videos, and interactions (human–machine interactions, human–human conversations).

One of the earliest emotion-elicitation methods was to ask actors to perform using stimuli with fixed lexical content [12,37] or to create posed expressions [50,62,133,134]. One of the advantages of this approach is that researchers have more control over the microphone and camera positions, lexical content, and the distribution of emotions. A criticism of acted or posed emotion is that there are differences between spontaneous and acted emotional displays [19,122].

However, many of these criticisms can be mitigated by changing the manner in which acted data are collected. Supporters have argued that the main problem may not be the use of actors themselves, but the methodologies and materials used in the early acted corpora [15]. They argue that the quality of acted emotion corpora can be enhanced by changing the acting styles used to elicit the data and by making a connection to real-life scenarios, including human interaction. Recent acted emotion datasets have embraced these alternative collection paradigms and have moved towards increased naturalness and subtlety [14,17]. One example is the popular audiovisual emotion corpus, Interactive Emotion Motion Capture (IEMOCAP), which was collected by recruiting trained actors, creating contexts for the actors using a dyadic setting, and allowing for improvisation [14]. The same elicitation strategy has been successfully employed in the collection of the MSP-Improv corpus [17].

Various methods have been proposed for eliciting spontaneous emotional behavior. One popular approach is the passive emotion-induction method [54,58,100,111,121,129,131]. The goal is to cause a participant to feel a given emotion using selected images, music, short videos, or movie clips known a priori to induce certain emotions. This method is more suitable for analysis of modalities aside from audio, because the elicitation process often does not involve active verbal responses from the participants. A more active version is induction by image/music/video using emotion-induction events. For example, Merkx et al. used a multi-player first-person shooter computer game for eliciting emotional behaviors [69]. The multi-player setting of the game encouraged conversation between players, and the game was embedded with emotion-inducing events.

Another method often adopted by audiovisual emotion corpora is emotion elicitation through interaction, including human–computer interaction and human–human interaction. For example, participants have been asked to interact with a dialog system [57,59]. The FAU-Aibo dataset collected the interactions between children and a robot dog [6]. The Sensitive Artificial Listener (SAL) portion of the SEMAINE dataset used four agents with different characters, played by human operators, to elicit different emotional responses from the participants [67]. Examples of human–human interaction include interviews on topics related to personal experience [91] and collaborative tasks [88].

An emerging trend is that researchers are moving emotional data collections outside of laboratory settings, a paradigm referred to as in the wild. For example, the Acted Facial Expressions In The Wild (AFEW) dataset selected video clips from movies [26], while the Vera am Mittag (VAM) dataset [40] and the Belfast Naturalistic dataset [110] used recordings from TV chat shows. User-generated content on social networks has become another source for creating text-emotion datasets. Researchers curate content published on social networks, such as blogs, Twitter, and Facebook, for emotion detection [86,87,89].

Finally, researchers use combinations of known “best practices” for collecting behaviors associated with different emotions. For example, in the BP4D-Spontaneous dataset, the spontaneous emotion of the participants was elicited using a series of tasks, including interview, video-clip viewing and discussion, startle probe, improvisation, threat, cold pressor, insult, and smell [139]. These tasks were intended to induce happiness or amusement, sadness, surprise or startle, embarrassment, fear or nervousness, physical pain, anger or upset, and disgust, respectively.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128146019000274

Elicitation of Probabilities and Probability Distributions

L.J. Wolfson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Eliciting Prior Distributions

Eliciting prior distributions entails quantifying the subjective beliefs of an individual into a statistical distribution (see Distributions, Statistical: Approximations). Kadane and Wolfson (1998) discuss the distinction between structural and predictive methods of doing this. In structural methods, an expert is asked to directly assess parameters or moments of the prior distribution; in predictive methods, elicitees are asked questions about aspects of the prior predictive distribution; the predictive distribution is

f_Y(y | x) = ∫_Θ f_Y(y | θ) f_Θ(θ | x) dθ

and when no data have been observed (i.e., x = ∅), this is known as the prior predictive distribution. In most complex problems, the predictive method is preferable, since it is more suited to the prescription for good elicitation methods outlined in Sect. 2. The functional form of the prior distribution is usually selected to be conjugate to the likelihood chosen to model the data.

The elicitation is generally executed by asking assessors to identify quantile–probability pairs (q, p) in some sense, where p = P_X(X ≤ q).
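A generic way to turn such quantile–probability pairs into a distribution is to choose parameters whose cumulative distribution function passes as closely as possible through the elicited points. The sketch below is only an illustration of that idea, not a specific published protocol; it assumes a Beta distribution for a quantity on (0, 1), and the elicited pairs are invented for the example.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical elicited quantile-probability pairs (q, p) with p = P(X <= q)
# for a quantity known to lie in (0, 1).
pairs = [(0.20, 0.25), (0.35, 0.50), (0.55, 0.90)]

def loss(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # Squared distance between the elicited probabilities and the Beta CDF
    return sum((stats.beta.cdf(q, a, b) - p) ** 2 for q, p in pairs)

result = optimize.minimize(loss, x0=[2.0, 2.0], method="Nelder-Mead")
a, b = result.x
print(f"Fitted prior: Beta({a:.2f}, {b:.2f})")
```

With more pairs than parameters the fit is a least-squares compromise, which is one simple way of feeding back to the elicitee how consistent their assessments are.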

To illustrate, consider various ways in which a prior distribution for the probability parameter in a binomial model might be elicited. The beta distribution is the usual choice for modeling the prior distribution, and functionally, the model is

f_P(p | α, β) ~ Beta(α, β)    (prior)
f_X(x | p, n) ~ Binomial(n, p)    (likelihood)
f_P(p | α, β, x, n) ~ Beta(α + x, β + n − x)    (posterior)

In this instance, x is the number of successes observed in n trials; from the way in which the prior parameters are updated by the posterior distribution, α + β can be viewed as a ‘prior sample size’ or measure of prior strength, and α/(α + β) as a point estimate of the mean prior probability of a success. So one way to structurally elicit a prior would be to ask an elicitee to specify a prior sample size and probability of success, and this would uniquely define the parameters of the prior distribution. This, however, does not provide any feedback to the elicitee, and will almost surely be overconfident because the variability of the distribution is never elicited directly. It also does not address the issue of the skewness of the distribution. Chaloner and Duncan (1983) propose a predictive method for eliciting this same prior distribution, where the elicitee is asked to specify the modal number of successes for given prior sample sizes, and then is asked several questions about the relative probability of observing numbers of successes greater than or less than the specified mode. The elicitee is also shown (graphically) the implied predictive distributions for future observations specified by their current answers, and then walked through a bisection algorithm to adjust the spread of the distribution.

While the predictive method just described is more difficult to implement, it is also more appropriate in terms of having the elicitee answer questions that will effectively allow them to communicate both their knowledge and their uncertainty within the constraints of the model. To illustrate, suppose that in the structural method, an elicitee specified a prior sample size of 15 and a prior mean of 1/3. In the predictive method, the same elicitee specified, for a sample size of 15, a predictive mode of five successes; the odds of observing four rather than five successes were specified as 0.8, and the odds of observing six rather than five successes were specified as 0.7. The two priors, a Beta(5, 10) distribution in the structural case and a Beta(9.67, 18.74) in the predictive case, are shown in Fig. 1. This illustrates the way in which the predictive method allows for a better representation of the elicitee's actual state of knowledge.
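The arithmetic of the structural method, and the predictive feedback idea, can be sketched in a few lines of Python. This is a simplified illustration rather than the Chaloner and Duncan procedure itself: the structural answers are mapped directly to Beta parameters, and the implied beta-binomial predictive distribution for a future sample of 15 is computed so that it could be shown back to the elicitee.

```python
from scipy import stats

# Structural elicitation: a prior "sample size" and a mean probability of success
prior_n, prior_mean = 15, 1 / 3
alpha = prior_n * prior_mean           # 5
beta = prior_n * (1 - prior_mean)      # 10
print(f"Structural prior: Beta({alpha:.0f}, {beta:.0f})")

# Predictive feedback: the implied distribution of the number of successes in a
# future sample of n = 15 under the elicited prior; showing this back to the
# elicitee is the feedback step described above.
n = 15
predictive = stats.betabinom(n, alpha, beta)
print([round(predictive.pmf(x), 3) for x in range(n + 1)])

# For comparison, the prior reported for the predictive method in the text
# implies a predictive mode of five successes, matching the elicited mode.
predictive2 = stats.betabinom(n, 9.67, 18.74)
print("Mode under Beta(9.67, 18.74):", max(range(n + 1), key=predictive2.pmf))
```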


Figure 1. Structural and predictive elicited distributions for the beta-binomial model

Existing protocols for eliciting prior distributions are described in some detail in Kadane and Wolfson (1998). Elicitation protocols are described as being either general or application-specific. General methods are those that are meant for specific classes of models, such as the beta-binomial, the normal linear regression model, the Dirichlet-multinomial, and others. Application-specific methods are ones tailored to the problem under study; several examples of these are described both in Kadane and Wolfson (1998) and French and Smith (1997). Application-specific methods are used typically for complex situations, where there may be no conjugate or standard prior that is suitable, or where the elicitation process must be decomposed in order to provide sensible assessments.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767004125

The Design Implications of Users’ Values for Software and System Architecture

Alistair Sutcliffe, Sarah Thew, in Economics-Driven Software Architecture, 2014

13.8 Conclusions

In this chapter we have argued that user values need to be added as important constructs in requirements analysis, which can augment the design of software and system architecture with a user orientation. Values can be held by individual user agents, roles, or organizational entities and have appeared in other software development approaches such as soft systems methods (Checkland, 1999), where the organization’s values are recorded under the world view (Weltanschauung) in the CATWOE mnemonic. Ethics were the focus of Mumford’s (1981) approach to systems analysis. Values are implicit in many requirements analysis methods, for example in sociopolitical issues recorded in the Volere template (Robertson and Robertson, 1999) and in requirements elicitation methods (Gause and Weinberg, 1989). Values therefore need to be systematically readopted into software development approaches. Taxonomies of values have been applied in RE (Ramos and Berry, 2005a; Thew et al., 2008), whereas this chapter has reported the application of the value-based RE method (Thew and Sutcliffe, 2008) for analyzing user values and motivations and showing the consequent decisions for the system architecture.

Values are related to goal-oriented approaches to architecture development (e.g., Chung et al., 2011; Bass and Clements, 2011); however, values have a less direct impact on architecture than functional requirements or architecture attributes/nonfunctional requirements. Instead, the implications of stakeholders’ values have to be transformed into architectural components and heuristics to address design issues such as autonomy, coupling, and control via methods, as illustrated by the VBRE method described in this chapter. The closest relative of value analysis is business goals (Bass and Clements, 2011), where business goals such as “responsibility for customers or to the organisation and society” map to trust and possibly morals/ethical values in our taxonomy. Elaborating the intersections of business goals, organizational, and brand values with architectural implications is an area that deserves further research.

The value-oriented architecture patterns described in this chapter could inform design choices in many commercial system architectures such as product lines (Pohl et al., 2005) and enterprise resource planning (ERP) systems (Keller and Teufel, 1998; Moon, 2007). ERPs have often encountered user resistance and rejection because user motivations, emotions, and cultural values have not been considered (Ramos and Berry, 2005b; Krumbholz et al., 2002). Another potential application of value analysis and related patterns is orchestration in service-oriented architectures (Endrei et al., 2004), where a user-centered approach is necessary for composition of transaction processing and information processing in business systems. The role of the value-based architecture framework is to bridge the gap from requirements and stakeholder views to design, so the value patterns could be synthesized with architectural strategies (Bass et al., 2003) and tactics (Bachman et al., 2003) to inform structural design as well as highlighting application domain concerns. The heuristics and metrics of design architecture strategies could be informed by value analysis to improve trade-off analysis in architecture design as well as shaping metrics for monitoring conformity with quality criteria and NFRs. The VBRE method should scale to larger systems since the process and taxonomy can be applied to subsystems in a divide-and-conquer approach. Further work is necessary, however, to specify the links between values and architecture patterns at the application level.

The value-based architecture framework described in this chapter did stand the test of application in design in the ADVISES project, contributing to several design decisions. Furthermore, it was produced by a systematic process for bridging user views in requirements analysis to architecture design. Value analysis coupled with the design implications and rationale knowledge contained in the patterns offers a new approach to architecture design. Value clashes between stakeholders encountered in the case study were solved by a configurable architecture. We believe that this approach will generalize, for example, by configuring additional modules for visibility of users’ actions to match sociability and trust values. However, configuration has a cost, so it may be advisable to resolve some value conflicts by other means where they cannot easily be addressed in design (e.g., ethical requirements for transparency in the use of information and security–privacy values).

Although the VBRE approach to architecture has made a promising start, application in one case study does not provide sufficient evidence to validate the approach. Further research is necessary to apply the value-based framework in more wide-ranging domains where users’ views are important (e.g., adaptation in training and educational applications). In addition, the framework needs more detail to be added in the patterns for modeling value implications as generic requirements and design issues, as well as linking these to architectural tactics and strategies.

The worth of the value-oriented conceptual-level architectures may be realized in several ways. First is by the route described in this chapter as a “tool for thought” and a source of design knowledge that can be applied to designing specific applications as well as generic systems. The value analysis method and architecture patterns could complement existing architecture design methods such as BAPO (Van der Linden et al., 2004), ADD (Bass et al., 2003), or RUP 4+ (Kruchten, 2003; Shuja and Krebs, 2007). In their review of architecture-driven design methods, Hofmeister et al. (2007) note that user views are common concerns in most methods, and they argue for “architecture-significant requirements” as an important component of the development process. Values contribute such requirements, and the patterns encapsulate reusable knowledge to inform attributes in attribute-driven design (Bass et al., 2003) or use case viewpoints/perspectives in RUP-based (Kruchten, 2003) approaches. Although user-oriented values focus on individuals or user groups, most values also apply to organizations, thereby providing content for the business and organization views in the BAPO framework. The second route for the value patterns architecture is to integrate them with reference frameworks of conceptual models for software development such as the domain theory (Sutcliffe, 2002) or analysis patterns (Fowler, 1997), either to provide reusable knowledge directly for architecture design or to index open-source components to facilitate matching application requirements to appropriate architecture components. Open-source components are currently classified by application domains (Wikipedia, 2013; SourceForge, 2012), which can hinder reuse as many components are specialized within domains. Indexing could be extended to using conceptual architectures to reengineer components for more general and wide-ranging reuse. Requirements patterns (Withall, 2007) have a similar motivation to the value-oriented architecture patterns, with related components in groupings for flexibility (configure), user function, and access control. However, requirements patterns are documented experiences that are neither motivated by a requirements analysis process nor based on a systematic taxonomy of user views or values. The third route may be to implement conceptual architectures as configurable application generators (Sutcliffe, 2008), so that the architecture drives an end-user design dialogue for acquiring sufficient detail to generate specific applications.

The connection between values and NFRs via architecture shows that many stakeholder values share common architectural concerns. However, stakeholders’ values may well clash, so design is inevitably a matter of trade-off analysis and reconciliation of conflicts. For example, the need for access controls for security may well conflict with a moral/ethics value for transparency and open access. The trade-off may be to use the configuration pattern and produce different versions of an architecture to suit stakeholder groups. Unfortunately, this might then trigger a clash with cost and maintainability NFRs. The space of possible intervalue and value–NFR clashes is extensive and requires considerable further research before trade-off guidelines can be given.

While the focus of this chapter has been on user values (Friedman, 2008) rather than on value in an economic sense (Gordijn and Akkermans, 2002), it is worth reflecting on the economic implications of user values. For example, developing multiuser CSCW features to satisfy a sociability value or end-user development for creativity both incur considerable costs beyond the basic functional system architecture. Aesthetics values will necessitate additional user interface development resources, while privacy and trust require development of additional components to protect access or to make information visible. Economic considerations involve further trade-offs, as noted above. The configuration architectural pattern can be invoked to deal with value conflicts, but only at the economic penalty of producing and maintaining different versions of a software architecture.

Finally, the value-based approach to architecture design and the patterns proposed in this chapter are an initial proposal that needs to be applied in further case studies and developed to create a library of more detailed patterns, since one of the lessons apparent from their application was the extent of architecture design that was informed by domain-specific analysis. The growing interest in service patterns and architectures may provide a useful means of advancing research in such a way as to narrow the gap between user- and developer-oriented views of system architecture by bringing values and social issues to bear on design decisions.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124104648000131

Taking stock of behavioural OR: A review of behavioural studies with an intervention focus

L. Alberto Franco, ... Ilkka Leppänen, in European Journal of Operational Research, 2021

6.3 Convergence and consistency in preference and priority elicitations

As already noted, the observed instability of priorities and preferences associated with the use of different weight elicitation methods has been studied extensively. Given that these empirical observations are still not fully explained, this continues to be a fruitful area for future work. In particular, studies such as the one by Lahtinen and Hämäläinen (2016) shed light on the impact that different ways of deploying elicitation methods have on priorities and preferences.

It should also be noted that the differences in weights, rankings, or choices produced by elicitation methods take second place when compared with holistic judgments made before using the methods: there is increasing experimental evidence that people are able to reconcile the discrepancies between their pre-method judgments and those produced by the methods, which in turn increases their confidence in the results (Ishizaka & Siraj, 2018; Ishizaka, Balkenborg, & Kaplan, 2011). However, post-decisional confidence is a well-known bias (Sniezek, Paese, & Switzer, 1990), and thus further research can help clarify its role in the context of using weight elicitation procedures.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0377221720309772

Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review

Ane Alberdi, ... Adrian Basarab, in Journal of Biomedical Informatics, 2016

4 Psychological stress elicitation

In order to carry out stress-related research, it is necessary to provoke the stress response in the desired subject at the required moment. For this purpose, many different stress elicitation methods have been validated. Probably the most frequently used method has been the Stroop Color-Word Interference Test, followed by mental arithmetic tasks. Both methods have been frequently studied [43]. Car driving is also considered a stress-eliciting task [50], as is watching films with stressful content [14,131] or playing computer games. In one study [77], the speed and difficulty of the game “Tetris” were varied in order to provoke stress and calm reactions alternately in subjects. Public speaking tasks [79,132] and the cold pressor test [133] have also been used. Finally, Dedovic et al. [102] have tested and validated fMRI as a stress-eliciting method.
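As a small concrete illustration of the first of these methods, a Stroop colour-word trial presents a colour word printed in a conflicting ink colour and asks the subject to name the ink colour. The snippet below is a minimal, self-contained sketch of generating such incongruent trials; it is illustrative only and not taken from any of the cited studies.

```python
import random

# Minimal sketch of generating incongruent Stroop colour-word trials: the word
# and the ink colour deliberately disagree, and the subject must name the ink
# colour, which creates the cognitive conflict used for stress elicitation.
COLOURS = ["red", "green", "blue", "yellow"]

def make_trial(rng):
    word = rng.choice(COLOURS)
    ink = rng.choice([c for c in COLOURS if c != word])  # force incongruence
    return {"word": word, "ink": ink, "correct_response": ink}

rng = random.Random(42)
for trial in (make_trial(rng) for _ in range(5)):
    print(f"word={trial['word']:<6} ink={trial['ink']:<6} -> say '{trial['correct_response']}'")
```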

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S1532046415002750

Expert judgement for dependence in probabilistic modelling: A systematic literature review and future research directions

Christoph Werner, ... Oswaldo Morales-Nápoles, in European Journal of Operational Research, 2017

4.2.1 Regression models

A common dependence model in context (b) is a regression model. For recent overviews, see Ryan (2008) and Weisberg (2005).

Recall that here information on the dependence is modelled indirectly by restructuring the natural input. Technically restructuring is done using variable transformation techniques. Beliefs about parameters are then elicited while being formulated as univariate query variables. Similar to quantifying parametric multivariate distributions, elicitation here is typically done for prior beliefs in a Bayesian methodology.

The parameter of interest is a regression coefficient, β. The likelihood function p(Y|X, β) relates observed data Y to regression coefficients β and covariates X. Experts then specify the prior distribution p(β), typically through hyperparameters which are the mean and the variance of the regression coefficient (James, Choy, & Mengersen, 2010). Eliciting moments of regression coefficients directly, however, might be cognitively too complex, given that experts would need to understand the effect that a change of covariate X has on Y. Therefore, the literature on eliciting priors for regression models proposes indirect approaches. For these, experts provide a probability of the response value based on specified values of the explanatory variables or vice versa. From this, prior elicitation methods for linear models, normal (Kadane et al., 1980) and multiple (Garthwaite & Dickey, 1991), piecewise-linear (Garthwaite, Al-Awadhi, Elfadaly, & Jenkinson, 2013) as well as logistic regression models (O’Leary et al., 2009) have been developed. For the latter, experts typically assess conditional means, E(Y|X, β) (Bedrick, Christensen, & Johnson, 1996; James et al., 2010), for a probability of presence, p_i, with binary responses for observation i modelled as logit(p_i) = β_0 + β_1x_{i,1} + β_2x_{i,2} + … + β_jx_{i,j} + ε_i (O’Leary et al., 2009). For instance, Choy, O’Leary, and Mengersen (2009) elicit the probability of presence for a certain wallaby type at a specified location with fixed habitat characteristics in habitat modelling. Depending on distributional assumptions for the probability of presence (such as a Beta distribution), the mode rather than an arithmetic average or median might be elicited due to the potential skewness of the distribution.
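The indirect approach can be illustrated with a small numerical sketch: an expert states the probability of presence at a few hypothetical covariate settings, and these assessments are converted through the logit link into implied coefficient values, which could then serve as prior means. The design points and probabilities below are invented for illustration, and the calculation is a simplification of the published methods cited above.

```python
import numpy as np

# Hypothetical expert assessments: probability of presence at chosen values
# of a single covariate x (e.g. a habitat characteristic).
x_design = np.array([0.0, 1.0, 2.0])      # design points
p_expert = np.array([0.10, 0.30, 0.60])   # expert's probabilities of presence

# Convert through the logit link: logit(p) = beta0 + beta1 * x
logits = np.log(p_expert / (1 - p_expert))
X = np.column_stack([np.ones_like(x_design), x_design])

# A least-squares fit gives implied coefficient values, usable as prior means;
# variation across repeated or extra assessments could inform prior variances.
beta_prior_mean, *_ = np.linalg.lstsq(X, logits, rcond=None)
print("Implied prior means (beta0, beta1):", beta_prior_mean.round(2))
```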

In a similar manner, parameters can be elicited for (multiple) linear regression models. Garthwaite and Dickey (1991) propose a model of the form:

E(Y | x_1, x_2, …, x_i) = β_1x_1 + β_2x_2 + … + β_ix_i

where again β denotes the regression coefficient and E(Y | x_1, x_2, …, x_i) is the expected (average) value of Y when X_1 = x_1, X_2 = x_2, …, X_i = x_i. Experts then specify the prior distribution of β by assessing hyperparameters. To do so, the authors introduce design points, values at which a prediction is made after hypothetical data are given. Likewise, Kadane et al. (1980) elicit fractiles for a predictive distribution with specified values at design points, using a bisection method (see Appendix B).
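The bisection idea can be sketched as follows: the elicitee repeatedly judges whether the predictive quantity is more likely to fall below or above a proposed value, and the interval is halved until a median (or other fractile) is pinned down. In the sketch the expert's answers are simulated by a hypothetical underlying belief so the code runs end to end; in a real elicitation the comparison would be a question put to the person.

```python
# Sketch of eliciting a predictive median by bisection. In practice
# `more_likely_below` is a question asked of the expert; here it is
# simulated with a hypothetical underlying belief so the code runs.

def more_likely_below(value, believed_median=42.0):
    """Stand-in for the expert's answer to: 'is the quantity more likely
    to lie below this value than above it?'"""
    return believed_median < value

def elicit_median(lower, upper, tolerance=0.5):
    while upper - lower > tolerance:
        midpoint = (lower + upper) / 2
        if more_likely_below(midpoint):
            upper = midpoint   # the expert places the quantity below the midpoint
        else:
            lower = midpoint   # the expert places the quantity above the midpoint
    return (lower + upper) / 2

print("Elicited median:", elicit_median(0.0, 100.0))
```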

Regression elicitation is further explored in Choy et al. (2009), O’Leary et al. (2009) and Al-Awadhi and Garthwaite (2006). O’Leary et al. (2009) present three different elicitation methods with graphical support, similarly to Al-Awadhi and Garthwaite (2006) who use an interactive graphics method as well. Empirical studies for expert judgement in regression modelling are mainly found in the area of ecology for which e.g. Choy et al. (2009) summarise various approaches.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0377221716308517

Software development in startup companies: A systematic mapping study

Nicolò Paternoster, ... Pekka Abrahamsson, in Information and Software Technology, 2014

7.2 Software development practices

We have categorized work practices related to software development as illustrated in Table 9, discussing them individually.

Table 9. Software development practices

Practice category | Count
Requirements engineering (Section 7.2.1) | 21
Design and architecture (Section 7.2.2) | 32
Implementation, maintenance and deployment (Section 7.2.3) | 14
Quality assurance (Section 7.2.4) | 23
Sum | 90

7.2.1 Requirements engineering practices

Establishing an engineering process for collecting, defining and managing requirements in the startup context is challenging. RE practices are often reduced to some key basic activities [82,8]. Su-Chan et al. [81] report on efforts in defining the value-proposition that the company aims to achieve at the very first stage of the project.

Initially, as startups often produce software for a growing target market [31,19], customers and final users are usually not well known and requirements are therefore market-driven [90] rather than customer-specific. In such a context, Mater and Subramanian [68] attest to severe difficulties in eliciting and detailing the specifications of both functional and non-functional requirements. Moreover, in unexplored and innovative markets, the already poorly defined requirements tend to change very rapidly. This makes it hard for the development team to maintain requirements and keep them consistent over time.

Several authors acknowledge the importance of involving the customer/user in the process of eliciting and prioritizing requirements according to their primary needs [92,90,74,8,95]. However, the market-driven nature of the requirements demands alternatives. For example, startups can use scenarios in order to identify requirements in the form of user stories [67] and estimate the effort for each story [82]. In scenarios and in similar product-usage simulations, an imaginary customer can be represented by an internal member of the company [80]. An example of a stricter customer-development process [105] that drives the identification of requirements can be found in an experience report by Taipale [93].

7.2.1.1 Discussion

Polishing requirements that address an unsolicited need is waste. To demonstrate problem/solution fit, it is necessary to discover the real needs of the first customers, testing business assumptions by defining only a minimal set of functional requirements. In the future, deeper customer collaboration, such as the customer development process [21], will change requirements elicitation methods, moving towards testing the problem and understanding whether the solution fits real needs even before the product goes to market.

7.2.2 Design and architecture practices

Deias and Mugheddu [63] observed a general lack of written architecture and design specifications in startups. Development in startups is usually guided by simple principles, trying to avoid architectural constraints that are difficult to overcome as the product and user base grow. Tingling's [80] results suggest that a poorly analyzed architecture causes problems in the long run. However, a good-enough architecture analysis should at least identify the decisions that might cause problems before obtaining product revenue, and which can be fixed at a later point in time, accounting for increased resources after revenue starts to flow [92].

One common way of analyzing requirements is the planning game, where developers can arbitrate the cost of desired features and delivered functionality. However, business people can “steer” the process and influence the adopted architectural decisions, which could present obstacles for refactoring activities, especially as software complexity grows [91]. The use of design patterns [106], e.g. the model-view-controller [107], can then provide advantages to startups that need flexibility in refactoring the product. Moreover, formulating initial architectural strategies with high-level models and code reuse from industry standards represents a viable trade-off between big up-front and ad hoc design [73].

Jansen et al. [77] suggest that startups should take advantage of existing components, thereby reducing time-to-market. Leveraging frameworks and reusing third-party components reinforces the architectural structure of the product and the ability to scale with the company size. As reported by Yoffie [76], scalability represents the most important architectural aspect in startups and should be addressed as soon as possible. Thus, startups can benefit from reusing components and shared architectures across projects as well as existing high-level frameworks and modeling practices.

Summarizing, design and architectural practices reported to be useful in startups are:

The use of well-known frameworks that enable fast changes to the product during refactoring activities.

The use of existing components, leveraging third-party code to reinforce the product's ability to scale.

7.2.2.1 Discussion

Despite the general lack of written architecture specifications in startups, the difficulties that appear when the user base and product complexity grow can be overcome with a little upfront effort on framework selection and on analyzing the decisions that might cause problems before obtaining product revenue. As the product evolves, using architecture and design to make features modular and independent is crucial for removing or changing functionality. Therefore, employing architectural practices and frameworks that enable easy extension of the design (e.g. a pluggable architecture where features can be added and removed as plugins [108]) can better align the product with the uncertainty of market needs.
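A pluggable architecture of the kind referenced above can be sketched with a minimal plugin registry: features register themselves against a name and can be added or removed without touching the core. The example below is a generic, hypothetical illustration, not code from any of the cited studies.

```python
# Minimal sketch of a pluggable architecture: features are registered as
# plugins and can be added or removed without changing the core product.

class ProductCore:
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def unregister(self, name):
        self._plugins.pop(name, None)

    def handle(self, request):
        # The core delegates feature-specific work to whatever plugins are present.
        return {name: plugin(request) for name, plugin in self._plugins.items()}

# Hypothetical feature plugins
def recommendations(request):
    return f"recommendations for {request['user']}"

def analytics(request):
    return f"event logged: {request['action']}"

core = ProductCore()
core.register("recommendations", recommendations)
core.register("analytics", analytics)
print(core.handle({"user": "alice", "action": "checkout"}))

core.unregister("analytics")   # feature removed without touching the core
print(core.handle({"user": "alice", "action": "checkout"}))
```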

7.2.3 Implementation, maintenance and deployment practices

Silva and Kon [67] report on positive results from pairing up senior and junior developers in pair-programming sessions. During these sessions, developers also made use of coding standards to help cross-team code readability and reduce the risks related to the high rate of developer turnover. In a different study, Tingling [80] attested an initial high resistance to the introduction of coding standards and pair-programming. These practices were then adopted only in later stages, when the complexity of the project required them. Zettel et al. [82] report that the practice of tracking traditional code metrics has been labeled as “obsolete and irrelevant” and that many companies use their own ad hoc methods of assessing processes and metrics. The software team studied by Zettel et al. [82] had a bug-fix process centered around their issue-tracking tool and relied on a release system. Several authors reported on advantages of constant code refactoring: ensuring readability and understandability of the code [82], improving modularity [80] and providing discipline to developers [93]. On the other hand, introducing refactoring may cause problems when developers have little or no experience [63] and do not see immediate value in introducing high-level abstractions [67]. In the case study described by Ambler [91], an initial lack of refactoring led to the need to re-implement the whole system after the number of users had grown drastically. Finally, some authors reported work practices related to deployment, noting that some software teams deploy the code manually on the infrastructure [67] while others rely on continuous integration tools [93].

7.2.3.1 Discussion

Startups tend to start the code implementation with an informal style, introducing standards only when the project size requires them. This is in line with the observations made by Thorpe et al. [49] on knowledge management and growth in SMEs. The process is often driven by lightweight ad hoc systems: bug-tracking, simple code metrics and pair-programming sessions. In this regard, startups in the early stage keep the code base small and simple, developing only what is currently needed and maintaining focus on core functionalities to validate with first customers. As the business goal drives the need for effort in refactoring and implementation, more studies will be needed to align business with the execution of concrete development practices in startup contexts (e.g. GQM Strategies [109]).

7.2.4 Quality assurance practices

Testing software is costly in terms of calendar time and often compromised in startups [19,82]. Quality assurance, in the broader sense, is largely absent because of the weak methodological management, discussed in Section 7.1. The complex task of implementing test practices is further hindered by the lack of team experience [67,68].

However, usability tests are important to achieve product/market fit [83]. Ongoing customer acceptance ensures that features are provided with tests, which can effectively attest the fitness of the product to the market [80,92,31]. Mater and Subramanian [68] suggest using a small group of early adopters or their proxies as a quality assurance team. Furthermore, users can be an important means to judge whether the system needs more tests [82]. Outsourcing quality assurance to external software testing experts, who handle the independent validation work if resources are not available [68], can also be an alternative.

Summarizing, quality assurance practices reported to be useful in startups are:

The use of ongoing customer acceptance with a focus group of early adopters, which aims to attest the fitness of the product to the market.

Outsourcing tests if necessary, to maintain the focus on the development of core functionalities.

7.2.4.1 Discussion

Even though testing software is costly, acceptance tests are the only practice to validate the effectiveness of the product in uncertain market conditions [31]. Therefore, providing time-efficient means to develop, execute and maintain acceptance tests is the key to improving quality assurance in startups [68]. In our opinion, startups will start making use of different automatic testing strategies when these are easily accessible (e.g. creating a test from each fixed bug [110]). For example, UI testing remains a complex but important task considering how fast startups change their products through frequent validation tests on the market. In this regard, more research is needed to develop practical UI testing approaches that can be commercialized [111].

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0950584914000950

Review of cybersecurity assessment methods: Applicability perspective

Rafał Leszczyna, in Computers & Security, 2021

In this section the related work, i.e. the existing studies that aim at reviewing cybersecurity assessment methods, is presented. To identify the studies in a comprehensive way, the research method described in Section 4 was applied. A combination of the keyphrases “security assessment”, “review” and “survey” was used in the search. When possible, the results were refined to the computer science and related domains, or to research and review papers. The outcome of the process is summarised in Table 1. Two reviews related to different areas of cybersecurity assessment (methods, standards and tools), two reviews that combine risk and security assessment methods, and 13 overviews that present mixed selections of cybersecurity assessment tools, techniques or approaches have been identified. In this section, these results are described. Also, five notable reviews of risk assessment techniques that provided an important reference for this study, as far as the research method or criteria are concerned, are briefly presented.

Table 1. Reviews’ search summary. Abbreviations: RA – risk assessment, SA – security assessment.

Source | All metadata | Title | Abstract | Keywords | Manual search | Relevant | RA-related | SA-related | SA-related surveys
ACM DL | 120 | 3 | 64 | 1 | n.a. | 3 | 2 | 1 | 0
Elsevier SD | n.a. | 1 | 32 | n.a. | n.a. | 8 | 3 | 5 | 2
Emerald | 136 | 1 | 50 | n.a. | n.a. | 2 | 0 | 2 | 1
IEEE Xplore | 384 | 7 | 143 | 3 | n.a. | 4 | 0 | 4 | 1
Springer | 2853* | 121 | n.a. | n.a. | 200 | 3 | 4 | 4 | 1
Wiley | n.a. | 0 | n.a. | 0 | n.a. | 8 | 3 | 8 | 0
EBSCOhost | n.a. | 6 | 41 | 0 | n.a. | 2 | 1 | 1 | 0
Scopus | 6242 | 10 | 92 | 22 | 92 | 4 | 1 | 2 | 0
WoS | 120 | 7 | n.a. | 89 | n.a. | 0 | 0 | 0 | 0
Total | 9855 | 156 | 422 | 115 | 292 | 34 | 14 | 27 | 5

*The search embraced entire document contents but was restricted to computer science domain. Search results partially repeated findings from searches in other databases. Search results included three surveys focused on cybersecurity assessments and two reviews that combined security assessment with risk assessment methods.

3.1 Cybersecurity assessment methods reviews

Khan and Parkinson (2018) review artificial intelligence approaches applied to vulnerability assessments. Solutions based on machine learning, automated planning and expert systems are described. The authors identify knowledge gaps and make recommendations for further research. Although the study focuses on artificial intelligence, a substantial part is devoted to manual, computer-aided and automated vulnerability assessment techniques. This includes the discussion of their drawbacks and associated challenges.

Leszczyna (2018) surveyed standards that could be applied to cybersecurity assessments in the modern electricity sector. The study evidenced the lack of a dedicated, sector-specific standard that addresses this topic. At the same time, more than 30 standards related to the field were identified, among them seven general-applicability standards that provide comprehensive security assessment guidance (with NIST SP 800-115 standing out (Scarfone et al., 2008)). The study was based on a systematic research method and transparent standards’ selection and evaluation criteria.

3.2 Reviews and overviews that combine risk and security assessment methods

In the review paper of Qassim et al. (2019), both cybersecurity and risk assessment methods for industrial control systems are studied. The authors elicit 11 standards and guidelines based on clear selection criteria. Each of the documents is briefly described and evaluated based on criteria that concern the coverage of cybersecurity management processes (including those unrelated to cybersecurity assessments). Also, Cherdantseva et al. (2016) studied both cybersecurity and risk assessment techniques for industrial control systems. The research was based on a systematic approach that employed predefined selection and evaluation criteria and embraced 24 methods. Fabisiak et al. (2012) briefly described and compared 8 methods that are more or less tightly related to cybersecurity management (including risk and security assessments). Although neither the methods’ elicitation procedure nor the selection criteria have been described, the set of comparison criteria introduced is notable and can be used as a reference. It consists of as many as 35 elements.

3.3 Overviews of cybersecurity assessment tools, techniques and approaches

Overviews are studies that do not aim to be comprehensive. They briefly present certain selections of cybersecurity assessment tools, techniques or general approaches, usually without a justification of the particular choice of solutions. Also, they do not describe a research method for the identification and analysis of the solutions, nor indicate whether such a method was applied.

A comparative study of 20 cybersecurity testing methods based on seven analysis criteria was performed by Shahriar and Zulkernine (2009). The criteria include vulnerability coverage, source of test cases, test generation method or granularity of test cases. Besides, the automation aspect of the methods is investigated. This interesting study locates itself between an overview and a systematic review. It presents a large number of techniques, but a structured method of their identification is not described. The paper would certainly benefit from being updated and presented in a full-length article, as the six-page format of a conference paper allowed solely for demonstration of the criteria and a brief comparative discussion of methods without descriptions of individual assessment methods and their detailed analyses.

Shah and Mehtre (2015) provide an overview of vulnerability assessment and penetration testing techniques. After a detailed introduction to the subject area, the authors indicate four frameworks that are commonly adopted worldwide as far as cybersecurity assessments are concerned: Open Source Security Testing Methodology Manual (OSSTMM), Payment Card Industry Data Security Standards (PCI-DSS), Open Web Application Security Project (OWASP) and ISO/IEC 27001. Besides, compilations of several freely available tools for automated static analysis and penetration testing, including RATS, Flawfinder, Metasploit or Nessus, are introduced. Also, Coffey et al. (2018) present an overview of a selection of vulnerability identification tools for SCADA systems. The tools include Nmap, Zmap, Nessus, Shodan and Passive Vulnerability Scanner (PVS).

Alternatively, a more structured approach is applied by Li et al. (2019), who evaluated five open-source vulnerability identification tools, namely ASIDE, ESVD, LAPSE+, SpotBugs and Find Security Bugs (FindSecBugs). This useful study focuses on analysing the effectiveness (the number of detected vulnerabilities, recall, precision, and discrimination rates) and the usability of the applications based on clearly defined metrics and research questions. Analogous research was conducted by Holm et al. (2011). Using a well-specified set of properties and a comparison framework, seven network scanners, i.e. AVDS, Patchlink scan, Nessus, NeXpose, QualysGuard, SAINT and McAfee VM, were analysed. Also, Lykou et al. (2019) analyse cybersecurity assessment tools in a more systematic manner. The tools comprise the Control System Cyber Security Self-Assessment Tool (CS2SAT), Cyber Security Evaluation Tool (CSET), SCADA Security Assessment Tool (SSAT) and Cyber Resilience Review Self-Assessment Package (CRR). They contain features that make them particularly suitable for the evaluation of industrial control systems and critical infrastructures. In another study dedicated to industrial control systems (Hahn and Govindarasu, 2011), the authors evaluated tools that support cybersecurity assessments with respect to their compliance with the NERC cybersecurity requirements that are obligatory for critical infrastructures in the U.S. (NERC, 2017).

The overview by Dalalana Bertoglio and Zorzo (2017), focused on penetration testing, points out the OSSTMM, Information Systems Security Assessment Framework (ISSAF), Penetration Testing Execution Standard (PTES), NIST SP 800-115 guidelines and OWASP. The authors identified 72 tools that support penetration testing. Among them, Acunetix, WebInspect, AppScan, Metasploit, Nessus, NeXpose, Nikto, Nmap, Paros, QualysGuard, WebScarab and Wireshark are indicated as the most common. An overview dedicated to vulnerability assessments in autonomic systems is presented by Barrere et al. (2014). A substantial part of the article is devoted to vulnerability description languages such as Common Vulnerabilities and Exposures (CVE), the Intrusion Detection Message Exchange Format (IDMEF) or the Open Vulnerability and Assessment Language (OVAL). This is followed by the presentation of several tools for vulnerability identification, modelling and assessment, including Nessus, OpenVAS or SAINT, but also MulVAL, DOVAL and ACML. The paper concludes by demarcating the directions of future research in the field. The overview of Li et al. (2011) is focused on model-based security assessments. Several techniques that employ tree structures, graphs and Petri nets are briefly described.

An overview that concentrates on approach types rather than individual techniques of security assessment is given in Weiss (2008). Nath (2011) also focuses on approaches rather than individual techniques; however, that study concentrates on vulnerability assessments. The approaches include model-based vulnerability analysis, vulnerability assessments using honeynets, or combined black-box and white-box testing.

3.4 Notable reviews of risk assessment techniques

During the search for reviews of cybersecurity assessment methods, several studies that concentrate exclusively on risk assessment (Giannopoulos et al., 2012; Gritzalis et al., 2018; Ionita and Hartel, 2013; Meriah and Rabai, 2018; Wangen et al., 2018) were identified. Although they do not provide information on cybersecurity assessment methods, it is important to note them as they apply systematic research approaches that are worth analysing and following. For instance, Gritzalis et al. (2018) applied a structured research method with predefined selection and evaluation criteria to comprehensively evaluate 10 risk assessment methods commonly applied by the industry. Wangen et al. (2018) systematically developed a framework for evaluating the completeness of risk assessment methods and applied it to evaluate 11 popular approaches chosen based on transparent selection criteria. An alternative structured study of risk assessment techniques was conducted by Ionita and Hartel (2013), who analysed 14 methods based on transparent inclusion and analysis criteria.

3.5 Reviews’ search summary

As has been described earlier in this section, only two systematic reviews dedicated to cybersecurity assessment methods have been identified. These studies aim at the comprehensiveness of results by applying a transparent (described in the paper) research method. However, the scope of the papers was narrowed to artificial intelligence approaches applied to vulnerability assessments (Khan and Parkinson, 2018) or the standards for security assessments in the electricity sector (Leszczyna, 2018).

Overviews, on the other hand, only briefly present mostly small selections of solutions without describing the method of their identification or justifying the choice of the particular set of solutions. Thus, the completeness or comprehensiveness of these studies cannot be evaluated. As a result, they do not provide confidence that no important contributions have been omitted. Besides, a substantial part of the results was related to tools or risk assessments. The latter, although connected to security assessment, should be properly distinguished from it (see Section 2).

It has become evident that, despite the importance of the subject, studies that systematically identify, compile and evaluate cybersecurity assessment methods are missing. This finding created the main incentive for performing the extensive and systematic literature analysis presented in the remainder of this paper.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0167404821002005

What counts as an elicitation technique?

Elicitation techniques include interviews, observation of either naturally occurring behavior (including as part of participant observation) or behavior in a laboratory setting, or the analysis of assigned tasks.

What are the most commonly used elicitation techniques?

Brainstorming, document analysis, interviews, prototyping and workshops are the most widely used requirements elicitation techniques.

How many types of elicitation techniques are there?

One common framing identifies nine elicitation techniques. In its simplest form, elicitation is the process of discovering requirements or business needs.

What are examples of elicitation?

Simple elicitation techniques include the use of visual items such as pictures, photographs, freehand drawing and real objects to draw vocabulary from the class. Other examples include using mime, dialogue and example sentences on the whiteboard to encourage the student's input.