ResearchPad - decision-theory https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[Choosing what we like vs liking what we choose: How choice-induced preference change might actually be instrumental to decision-making]]> https://www.researchpad.co/article/elastic_article_15722 For more than 60 years, it has been known that people report higher (lower) subjective values for items after having selected (rejected) them during a choice task. This phenomenon has been termed “choice-induced preference change” or CIPC, and its established interpretation is that of “cognitive dissonance” theory. In brief, if people feel uneasy about their choice, they later convince themselves, albeit not always consciously, that the chosen (rejected) item was actually better (worse) than they had originally estimated. While this might make sense from an intuitive psychological standpoint, it is challenging from a theoretical evolutionary perspective, because such a cognitive mechanism might yield irrational biases whose adaptive fitness would be unclear. In this work, we consider an alternative possibility, namely that CIPC is, at least partially, due to the refinement of option value representations that occurs while people are pondering choice options. For example, contemplating competing possibilities during a choice may highlight aspects of the alternative options that were not considered before. In the context of difficult decisions, this would enable people to reassess option values until they reach a satisfactory level of confidence. This makes CIPC the epiphenomenal outcome of a cognitive process that is instrumental to the decision. Critically, our hypothesis implies novel predictions about how observed CIPC should relate to two specific metacognitive processes, namely: choice confidence and subjective certainty regarding pre-choice value judgments.
We test these predictions in a behavioral experiment where participants rate the subjective value of food items both before and after choosing between equally valued items; we augment this traditional design with reports of both choice confidence and subjective certainty about value judgments. The results confirm our predictions and provide evidence that many quantitative features of CIPC (in particular, its relationship with metacognitive judgments) may be explained without ever invoking a post-choice cognitive dissonance reduction mechanism. We then discuss the relevance of our work in the context of the existing debate regarding the putative cognitive mechanisms underlying CIPC.

]]>
<![CDATA[Variable weights theory and its application to multi-attribute group decision making with intuitionistic fuzzy numbers on determining decision maker’s weights]]> https://www.researchpad.co/article/5c89771bd5eed0c4847d2486

The determination of the weights of decision makers (DMs) is an important problem in multi-attribute group decision making. Many approaches have been presented to determine DMs’ weights. However, the computed weight vectors of DMs are usually assumed to be constant in existing studies, and this may cause irrationalities in the decision results. Therefore, this article proposes a novel method to determine DMs’ weights based on variable weights theory, in which the evaluation information is described as intuitionistic fuzzy sets (IFSs). First, DMs provide their assessments as IFSs, and the intuitionistic fuzzy weighted averaging (IFWA) operator is applied to obtain a weighted decision matrix based on the previously given DMs’ and attributes’ weights. Second, the DMs’ weights are obtained based on variable weights theory, and a decision value for each alternative can be computed. Finally, the converted value of the achieved IFS of each alternative is calculated, and the most appropriate alternative is acquired. Two illustrative examples and comparisons with existing approaches are also used to demonstrate the effectiveness of the proposed approach.
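As an illustration, the IFWA aggregation step mentioned here has a well-known closed form (Xu, 2007). The sketch below, with made-up assessments and weights, shows how one alternative's weighted intuitionistic fuzzy value could be computed; it is not the paper's full variable-weights procedure.

```python
# Sketch of the intuitionistic fuzzy weighted averaging (IFWA) operator:
# each assessment is an intuitionistic fuzzy number (mu, nu) with
# membership mu and non-membership nu, where mu + nu <= 1.
from math import prod

def ifwa(ifns, weights):
    """Aggregate intuitionistic fuzzy numbers with weights summing to 1."""
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(ifns, weights))
    nu = prod(n ** w for (_, n), w in zip(ifns, weights))
    return mu, nu

# Hypothetical assessments from three decision makers for one alternative:
assessments = [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)]
weights = [0.4, 0.35, 0.25]
mu, nu = ifwa(assessments, weights)
assert 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1
```

The aggregated pair remains a valid intuitionistic fuzzy number, which is the property the decision matrix construction relies on.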

]]>
<![CDATA[Sour grapes and sweet victories: How actions shape preferences]]> https://www.researchpad.co/article/5c3d00fad5eed0c4840373bf

Classical decision theory postulates that choices proceed from subjective values assigned to the probable outcomes of alternative actions. Some authors have argued that opposite causality should also be envisaged, with choices influencing subsequent values expressed in desirability ratings. The idea is that agents may increase their ratings of items that they have chosen in the first place, which has been typically explained by the need to reduce cognitive dissonance. However, evidence in favor of this reverse causality has been the topic of intense debates that have not reached consensus so far. Here, we take a novel approach using Bayesian techniques to compare models in which choices arise from stable (but noisy) underlying values (one-way causality) versus models in which values are in turn influenced by choices (two-way causality). Moreover, we examined whether in addition to choices, other components of previous actions, such as the effort invested and the eventual action outcome (success or failure), could also impact subsequent values. Finally, we assessed whether the putative changes in values were only expressed in explicit ratings, or whether they would also affect other value-related behaviors such as subsequent choices. Behavioral data were obtained from healthy participants in a rating-choice-rating-choice-rating paradigm, where the choice task involves deciding whether or not to exert a given physical effort to obtain a particular food item. Bayesian selection favored two-way causality models, where changes in value due to previous actions affected subsequent ratings, choices and action outcomes. Altogether, these findings may help explain how values and actions drift when several decisions are made successively, hence highlighting some shortcomings of classical decision theory.

]]>
<![CDATA[The quest for an optimal alpha]]> https://www.researchpad.co/article/5c3667f0d5eed0c4841a6a36

Researchers who analyze data within the framework of null hypothesis significance testing must choose a critical “alpha” level, α, to use as a cutoff for deciding whether a given set of data demonstrates the presence of a particular effect. In most fields, α = 0.05 has traditionally been used as the standard cutoff. Many researchers have recently argued for a change to a more stringent evidence cutoff such as α = 0.01, 0.005, or 0.001, noting that this change would tend to reduce the rate of false positives, which are of growing concern in many research areas. Other researchers oppose this proposed change, however, because it would correspondingly tend to increase the rate of false negatives. We show how a simple statistical model can be used to explore the quantitative tradeoff between reducing false positives and increasing false negatives. In particular, the model shows how the optimal α level depends on numerous characteristics of the research area, and it reveals that although α = 0.05 would indeed be approximately the optimal value in some realistic situations, the optimal α could actually be substantially larger or smaller in other situations. The importance of the model lies in making it clear what characteristics of the research area have to be specified to make a principled argument for using one α level rather than another, and the model thereby provides a blueprint for researchers seeking to justify a particular α level.
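The tradeoff such a model captures can be illustrated with a deliberately simplified sketch, not the authors' actual model: assume a one-sided z-test, a base rate of true effects, and costs for the two error types, then search for the alpha that minimizes expected error cost.

```python
# Minimal sketch: the optimal alpha balances the false-positive rate
# (alpha itself, when the null is true) against the false-negative rate
# (1 - power, when an effect exists). All parameter values are assumptions.
from statistics import NormalDist

norm = NormalDist()

def expected_error_cost(alpha, p_h1=0.5, effect=0.5, n=50,
                        cost_fp=1.0, cost_fn=1.0):
    """One-sided z-test: the false-negative rate depends on alpha via power."""
    z_crit = norm.inv_cdf(1 - alpha)
    power = 1 - norm.cdf(z_crit - effect * n ** 0.5)
    beta = 1 - power
    return (1 - p_h1) * alpha * cost_fp + p_h1 * beta * cost_fn

# Grid search for the cost-minimizing alpha under these assumptions:
alphas = [a / 1000 for a in range(1, 200)]
best = min(alphas, key=expected_error_cost)
```

Changing the assumed effect size, sample size, base rate, or error costs moves `best`, which mirrors the paper's point that no single alpha is optimal across research areas.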

]]>
<![CDATA[Estimating everyday risk: Subjective judgments are related to objective risk, mapping of numerical magnitudes and previous experience]]> https://www.researchpad.co/article/5c117b57d5eed0c484698be0

We aimed to investigate individual differences associated with people’s acute risk perception for activities such as walking and giving birth, including objective risk and the mapping of numerical magnitudes. The Amazon Mechanical Turk platform was used, with 284 participants recruited (40% female), ranging in age from 19 to 68 years. Participants had to indicate the positions of (1) the relative death risk of activities on a horizontal line with ‘very low risk of death’ and ‘very high risk of death’ as left and right anchors respectively, and (2) numerical magnitudes on a horizontal line ranging from 0 to 1000. The MicroMort framework was used to index acute risk of death (a one-in-a-million chance of dying from an accident). Previous experience with the activities and handedness, along with risk propensity and unrealistic optimism, were also measured. Linear mixed-effects modelling was used to investigate predictors of subjective MicroMort judgments. Individuals subjectively judged activities to be riskier if the activity was objectively riskier, if they over-estimated on the numerical task (more so for low-risk activities than for high-risk ones), or if they had not experienced the activity previously. The observed relationship between the number line task and everyday risk judgments is in keeping with the idea of a common magnitude representation system. In conclusion, individuals are able to discriminate between activities varying in risk in an absolute sense; however, intuition for judging the relative differences in risk is poor. The relationship between the misjudging of both risks and numerical magnitudes warrants further investigation, as it may inform the development of risk communication strategies.

]]>
<![CDATA[Age-dependent Pavlovian biases influence motor decision-making]]> https://www.researchpad.co/article/5b4a28c5463d7e4513b89829

Motor decision-making is an essential component of everyday life which requires weighing potential rewards and punishments against the probability of successfully executing an action. To achieve this, humans rely on two key mechanisms: a flexible, instrumental, value-dependent process and a hardwired, Pavlovian, value-independent process. In economic decision-making, age-related decline in risk taking is explained by reduced Pavlovian biases that promote action toward reward. Although healthy ageing has also been associated with decreased risk-taking in motor decision-making, it is currently unknown whether this is a result of changes in Pavlovian biases, instrumental processes or a combination of both. Using a newly established approach-avoidance computational model together with a novel app-based motor decision-making task, we measured sensitivity to reward and punishment when participants (n = 26,532) made a ‘go/no-go’ motor gamble based on their perceived ability to execute a complex action. We show that motor decision-making can be better explained by a model with both instrumental and Pavlovian parameters, and reveal age-related changes across punishment- and reward-based instrumental and Pavlovian processes. However, the most striking effect of ageing was a decrease in Pavlovian attraction towards rewards, which was associated with a reduction in optimality of choice behaviour. In a subset of participants who also played an independent economic decision-making task (n = 17,220), we found similar decision-making tendencies for motor and economic domains across a majority of age groups. Pavlovian biases, therefore, play an important role in explaining not only motor decision-making behaviour but also the changes which occur through normal ageing. This provides a deeper understanding of the mechanisms which shape motor decision-making across the lifespan.

]]>
<![CDATA[Sequential cooperative spectrum sensing in the presence of dynamic Byzantine attack for mobile networks]]> https://www.researchpad.co/article/5b4a192b463d7e428027f890

Cooperative spectrum sensing (CSS) is envisaged as a powerful approach to improve the utilization of scarce radio spectrum resources, but it is threatened by Byzantine attack. Byzantine attack has become a popular research topic in both academia and industry due to the demanding requirements of security. Extensive research mainly aims at mitigating the negative effect of Byzantine attack on CSS, but under some strong assumptions, such as that attackers are in the minority or that trusted node(s) exist for data fusion, while paying little attention to mobile scenarios. This paper focuses on the issue of designing a general and reliable reference for CSS in a mobile network. Instead of the previously simplified attack, we develop a generic Byzantine attack model that captures sophisticated behaviors and supports various attack strategies, and we derive the condition under which a Byzantine attack makes the fusion center (FC) blind. Specifically, we propose a robust sequential CSS (SCSS) scheme against dynamic Byzantine attack. Our proposed method addresses the unreliability of the FC by means of delivery-based assessment to check the consistency of individual sensing reports, and innovatively reuses the sensing information from Byzantines via a novel weight allocation mechanism. Furthermore, trust value (TrV) ranking is exploited to conduct a sequential test, which generates a more accurate decision about the presence of the phenomenon with fewer samples. Lastly, we carry out simulations comparing existing data fusion technologies with SCSS under dynamic Byzantine attack, and the results verify the theoretical analysis and effectiveness of our proposed approach. We also conduct numerical analyses to demonstrate the explicit impacts of secondary user (SU) density and mobility on the performance of SCSS.

]]>
<![CDATA[The Hot (Invisible?) Hand: Can Time Sequence Patterns of Success/Failure in Sports Be Modeled as Repeated Random Independent Trials?]]> https://www.researchpad.co/article/5989db09ab0ee8fa60bc9896

The long-lasting debate initiated by Gilovich, Vallone and Tversky in 1985 is revisited: does a “hot hand” phenomenon exist in sports? Here we come back to one of the cases analyzed in the original study, but with a much larger data set: all free throws taken during five regular seasons of the National Basketball Association (NBA). Evidence supporting the existence of the “hot hand” phenomenon is provided. However, while statistical traces of this phenomenon are observed in the data, an open question remains: are these non-random patterns a result of “success breeds success” and “failure breeds failure” mechanisms, or simply of “better” and “worse” periods? Although free throw data are not adequate to answer this question in a definitive way, we speculate, based on them, that the latter is the dominant cause behind the appearance of the “hot hand” phenomenon in the data.
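Whether a make/miss sequence departs from the independent-trials null can be probed with a simple runs-based permutation test; the sketch below is only illustrative and far cruder than the paper's analysis of NBA free throws.

```python
# Compare the observed number of runs (streaks) against the distribution
# obtained by shuffling the same sequence: fewer runs than expected under
# shuffling suggests streakiness.
import random

def count_runs(seq):
    """A run ends wherever two adjacent outcomes differ."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

def runs_test_pvalue(seq, n_perm=2000, rng=random.Random(0)):
    """Fraction of shuffles with as few or fewer runs than observed."""
    observed = count_runs(seq)
    s = list(seq)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(s)
        if count_runs(s) <= observed:
            hits += 1
    return hits / n_perm

# A deliberately streaky toy sequence (1 = made free throw, 0 = missed):
streaky = [1] * 10 + [0] * 10 + [1] * 10
p = runs_test_pvalue(streaky)
```

Note that, as the abstract stresses, a low p-value here cannot by itself distinguish "success breeds success" dynamics from slowly drifting "better" and "worse" periods: both produce too few runs.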

]]>
<![CDATA[Collaborative Brain-Computer Interface for Aiding Decision-Making]]> https://www.researchpad.co/article/5989da52ab0ee8fa60b8e124

We look at the possibility of integrating the percepts of multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test these ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules which weight the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies to improve decision making.
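The contrast between a simple majority rule and confidence-weighted rules can be sketched as follows; the confidence values here are hypothetical stand-ins for the paper's neuro-behavioural feature.

```python
# A weighted rule can outvote a wrong majority when the correct observer
# is the most confident one.

def majority(decisions):
    """Simple majority over binary decisions (0/1)."""
    return 1 if sum(decisions) * 2 > len(decisions) else 0

def weighted_decision(decisions, confidences):
    """Each observer votes +conf for 1 and -conf for 0; the sign wins."""
    score = sum(c if d == 1 else -c for d, c in zip(decisions, confidences))
    return 1 if score > 0 else 0

decisions = [1, 0, 0]          # two observers are wrong...
confidences = [0.9, 0.2, 0.3]  # ...but the correct one is most confident
assert majority(decisions) == 0
assert weighted_decision(decisions, confidences) == 1
```

The benefit of weighting depends entirely on the confidence measure tracking accuracy, which is why the paper derives its weights from response times and error-predictive neural features.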

]]>
<![CDATA[A rough set approach for determining weights of decision makers in group decision making]]> https://www.researchpad.co/article/5989db50ab0ee8fa60bdbd78

This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decision in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs’ decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member’s decision to the PIS and NISs. Through comparisons with existing methods and an online business manager selection example, we show that the proposed method can provide more insight into the subjectivity and vagueness of DMs’ evaluations and selections.
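The relative closeness measure underlying such rankings is the standard TOPSIS-style ratio; the sketch below uses a single NIS and Euclidean distance, whereas the paper derives lower- and upper-limit NISs from rough group decision matrices.

```python
# Relative closeness: an option (or a decision maker's judgment) ranks
# higher the closer it lies to the positive ideal solution (PIS) and the
# farther it lies from the negative ideal solution (NIS).
from math import dist

def relative_closeness(x, pis, nis):
    d_pos, d_neg = dist(x, pis), dist(x, nis)
    return d_neg / (d_pos + d_neg)

pis = [0.9, 0.8]   # hypothetical ideal attribute scores
nis = [0.1, 0.2]   # hypothetical worst-case attribute scores
a = relative_closeness([0.8, 0.7], pis, nis)
b = relative_closeness([0.3, 0.4], pis, nis)
assert a > b  # the judgment nearer the PIS ranks higher
```

The same closeness values can then be normalized into DM weights, which is the role they play in the proposed method.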

]]>
<![CDATA[Performance Feedback Processing Is Positively Biased As Predicted by Attribution Theory]]> https://www.researchpad.co/article/5989db0dab0ee8fa60bcaf3c

A considerable literature on attribution theory has shown that healthy individuals exhibit a positivity bias when inferring the causes of evaluative feedback on their performance. They tend to attribute positive feedback internally (e.g., to their own abilities) but negative feedback externally (e.g., to environmental factors). However, all empirical demonstrations of this bias suffer from at least one of the three following drawbacks: First, participants directly judge explicit causes for their performance. Second, participants have to imagine events instead of experiencing them. Third, participants assess their performance only after receiving feedback and thus differences in baseline assessments cannot be excluded. It is therefore unclear whether the classically reported positivity bias generalizes to setups without these drawbacks. Here, we aimed at establishing the relevance of attributions for decision-making by showing an attribution-related positivity bias in a decision-making task. We developed a novel task, which allowed us to test how participants changed their evaluations in response to positive and negative feedback about performance. Specifically, we used videos of actors expressing different facial emotional expressions. Participants were first asked to evaluate the actors’ credibility in expressing a particular emotion. After this initial rating, participants performed an emotion recognition task and did—or did not—receive feedback on their veridical performance. Finally, participants re-rated the actors’ credibility, which provided a measure of how they changed their evaluations after feedback. Attribution theory predicts that participants change their evaluations of the actors’ credibility toward the positive after receiving positive performance feedback and toward the negative after negative performance feedback. Our results were in line with this prediction. 
A control condition without feedback showed that correct or incorrect performance alone could not explain the observed positivity bias. Furthermore, participants’ behavior in our task was linked to the most widely used measure of attribution style. In sum, our findings suggest that positive and negative performance feedback influences the evaluation of task-related stimuli, as predicted by attribution theory. Therefore, our study points to the relevance of attribution theory for feedback processing in decision-making and provides a novel outlook for decision-making biases.

]]>
<![CDATA[Improved Learning in U.S. History and Decision Competence with Decision-Focused Curriculum]]> https://www.researchpad.co/article/5989db14ab0ee8fa60bccdb7

Decision making is rarely taught in high school, even though improved decision skills could benefit young people facing life-shaping decisions. While decision competence has been shown to correlate with better life outcomes, few interventions designed to improve decision skills have been evaluated with rigorous quantitative measures. A randomized study showed that integrating decision making into U.S. history instruction improved students’ history knowledge and decision-making competence, compared to traditional history instruction. Thus, integrating decision training enhanced academic performance and improved an important, general life skill associated with improved life outcomes.

]]>
<![CDATA[Robustness Elasticity in Complex Networks]]> https://www.researchpad.co/article/5989da46ab0ee8fa60b8be26

Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive, or elastic, to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. The elasticity of the identified bounds relative to the magnitude of arc deletion is then assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems.
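A toy version of this kind of disruption probe can be sketched with a simple robustness proxy, the share of nodes in the largest connected component after arc deletion; the paper itself computes exact mathematical-programming bounds per epoch, which this sketch does not attempt.

```python
# Remove arcs from a small undirected graph and measure how much of the
# network stays mutually reachable.

def largest_component_share(n_nodes, edges):
    """Fraction of nodes in the largest connected component."""
    adj = {i: set() for i in range(n_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in range(n_nodes):
        if start in seen:
            continue
        stack, comp = [start], 0
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp += 1
            stack.extend(adj[node] - seen)
        best = max(best, comp)
    return best / n_nodes

ring = [(i, (i + 1) % 6) for i in range(6)]        # 6-node ring
assert largest_component_share(6, ring) == 1.0
assert largest_component_share(6, ring[2:]) < 1.0  # delete two arcs
```

Repeating such a measurement for each epoch of observed interactions, and for increasing numbers of deleted arcs, is the general pattern the elasticity analysis follows.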

]]>
<![CDATA[Empirical Confirmation of Creative Destruction from World Trade Data]]> https://www.researchpad.co/article/5989db46ab0ee8fa60bd8835

We show that world trade network datasets contain empirical evidence that the dynamics of innovation in the world economy indeed follows the concept of creative destruction, as proposed by J.A. Schumpeter more than half a century ago. National economies can be viewed as complex, evolving systems, driven by a stream of appearance and disappearance of goods and services. Products appear in bursts of creative cascades. We find that products systematically tend to co-appear, and that product appearances lead to massive disappearance events of existing products in the following years. The opposite (disappearances followed by periods of appearances) is not observed. This is an empirical validation of the dominance of cascading competitive replacement events on the scale of national economies, i.e., creative destruction. We find a tendency that more complex products drive out less complex ones, i.e., progress has a direction. Finally we show that the growth trajectory of a country’s product output diversity can be understood by a recently proposed evolutionary model of Schumpeterian economic dynamics.

]]>
<![CDATA[The Effects of Stability and Presentation Order of Rewards on Justice Evaluations]]> https://www.researchpad.co/article/5989da27ab0ee8fa60b81404

Justice research has evolved by elucidating the factors that affect justice evaluations, as well as their consequences. Unfortunately, few researchers have paid attention to the pattern of rewards over time as a predictor of justice evaluations. There are two main objectives of this research. First, it aims to test the effect of reward stability on justice evaluations. Based on justice theory and prospect theory, we assume that an under-reward at one time cannot be fully offset by an equivalent over-reward at another time. Therefore, in unstable reward systems the asymmetry of the effect of unjust rewards with opposite directions will produce a lower level of justice evaluations over time. The second objective of this research is to show the moderating effect of the presentation order (primacy vs. recency) of unstable rewards on justice evaluations. The results from a controlled experiment with five conditions, which presents the instability of rewards in different orders, confirm both the negative effect of unstable rewards and the stronger effect of primacy on justice evaluations.

]]>
<![CDATA[What Is True Halving in the Payoff Matrix of Game Theory?]]> https://www.researchpad.co/article/5989db25ab0ee8fa60bd00c6

In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player’s current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated.

]]>
<![CDATA[Action Being Character: A Promising Perspective on the Solution Concept of Game Theory]]> https://www.researchpad.co/article/5989da0dab0ee8fa60b7862e

The inconsistency of predictions from solution concepts of conventional game theory with experimental observations is an enduring question. These solution concepts are based on the canonical rationality assumption that people are exclusively self-regarding utility maximizers. In this article, we argue that this assumption is problematic and, instead, assume that rational economic agents act as if they were maximizing their implicit utilities, which turns out to be a natural extension of the canonical rationality assumption. Implicit utility is defined by a player's character to reflect his personal weighting between cooperative, individualistic, and competitive social value orientations. A player who actually faces an implicit game chooses his strategy based on the common belief about the character distribution of a general player and on the self-estimation of his own character; he is not concerned about which strategies other players will choose and never feels regret about his decision. By solving five paradigmatic games (the Dictator game, the Ultimatum game, the Prisoner's Dilemma game, the Public Goods game, and the Battle of the Sexes game), we show that the framework of the implicit game and its corresponding solution concept, implicit equilibrium, based on this alternative assumption, have the potential to better explain people's actual behaviors in social decision-making situations.

]]>
<![CDATA[Dynamic Health Policies for Controlling the Spread of Emerging Infections: Influenza as an Example]]> https://www.researchpad.co/article/5989dac2ab0ee8fa60bb136a

The recent appearance and spread of novel infectious pathogens provide motivation for using models as tools to guide public health decision-making. Here we describe a modeling approach for developing dynamic health policies that allow for adaptive decision-making as new data become available during an epidemic. In contrast to static health policies which have generally been selected by comparing the performance of a limited number of pre-determined sequences of interventions within simulation or mathematical models, dynamic health policies produce “real-time” recommendations for the choice of the best current intervention based on the observable state of the epidemic. Using cumulative real-time data for disease spread coupled with current information about resource availability, these policies provide recommendations for interventions that optimally utilize available resources to preserve the overall health of the population. We illustrate the design and implementation of a dynamic health policy for the control of a novel strain of influenza, where we assume that two types of intervention may be available during the epidemic: (1) vaccines and antiviral drugs, and (2) transmission reducing measures, such as social distancing or mask use, that may be turned “on” or “off” repeatedly during the course of epidemic. In this example, the optimal dynamic health policy maximizes the overall population's health during the epidemic by specifying at any point of time, based on observable conditions, (1) the number of individuals to vaccinate if vaccines are available, and (2) whether the transmission-reducing intervention should be either employed or removed.

]]>
<![CDATA[Path and Ridge Regression Analysis of Seed Yield and Seed Yield Components of Russian Wildrye (Psathyrostachys juncea Nevski) under Field Conditions]]> https://www.researchpad.co/article/5989dabbab0ee8fa60baed2f

The correlations among seed yield components, and their direct and indirect effects on the seed yield (Z) of Russian wildrye (Psathyrostachys juncea Nevski), were investigated. The seed yield components, fertile tillers m-2 (Y1), spikelets per fertile tiller (Y2), florets per spikelet (Y3), seed numbers per spikelet (Y4) and seed weight (Y5), were counted, and Z was determined, in field experiments from 2003 to 2006 using a large sample size. Y1 was the most important seed yield component for describing Z, and Y2 was the least important. The total direct effects of Y1, Y3 and Y5 on Z were positive, while those of Y4 and Y2 were weakly negative. According to the path analyses, the total effects (direct plus indirect) of the components contributed positively to Z. The seed yield components Y1, Y2, Y4 and Y5 were significantly (P<0.001) correlated with Z over the four years as a whole, while in individual years Y2 was not significantly correlated with Y3, Y4 and Y5 according to Pearson correlation analyses of the five components in plant seed production. Therefore, selection for high seed yield through direct selection for large Y1, Y2 and Y3 would be effective in breeding programs for grasses. Furthermore, and most importantly, a stable ridge regression model relating Z to the five yield components was established, from which seed yield can be closely estimated.
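The ridge regression step can be illustrated in its simplest, single-predictor form, where the shrinkage effect of the penalty appears in closed form; this is a didactic sketch with invented numbers, not the fitted multi-component yield model.

```python
# Why ridge regression gives a more stable model than ordinary least
# squares when predictors are noisy or collinear: the penalty lam shrinks
# the coefficient, trading a little bias for lower variance.

def ridge_slope(x, y, lam):
    """Closed-form ridge estimate for one centered predictor."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x, with noise
ols = ridge_slope(x, y, 0.0)     # lam = 0 recovers ordinary least squares
shrunk = ridge_slope(x, y, 5.0)
assert abs(shrunk) < abs(ols)    # the penalty shrinks the estimate
```

With several correlated yield components the same shrinkage acts on the whole coefficient vector, which is what makes the resulting prediction model "stable" across samples.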

]]>
<![CDATA[Between Order and Disorder: A ‘Weak Law’ on Recent Electoral Behavior among Urban Voters?]]> https://www.researchpad.co/article/5989da6dab0ee8fa60b93676

A new viewpoint on electoral involvement is proposed based on a study of the statistics of the proportions of abstentionists, blank and null ballots, and votes according to the list of choices, in a large number of national elections in different countries. Considering 11 countries without compulsory voting (Austria, Canada, Czech Republic, France, Germany, Italy, Mexico, Poland, Romania, Spain, and Switzerland), a stylized fact emerges for the most populated cities when one computes the entropy associated with the three ratios, which we call the entropy of civic involvement of the electorate. The distribution of this entropy (over all elections and countries) appears to be sharply peaked near a common value. This almost common value has typically been shared since the 1970s by the electorates of the most populated municipalities, despite the wide disparities between voting systems and types of elections. Performing different statistical analyses, we notably show that this stylized fact reveals particular correlations between the blank/null vote and abstentionist ratios. We suggest that the existence of this hidden regularity, which we propose to call a ‘weak law on recent electoral behavior among urban voters’, reveals an emerging collective behavioral norm characteristic of urban citizen voting behavior in modern democracies. Analyzing exceptions to the rule provides insights into the conditions under which this normative behavior can be expected to occur.
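Under a straightforward reading, the entropy of the three ratios is the Shannon entropy of the corresponding proportions; the numbers below are invented for illustration and are not taken from the study.

```python
# Shannon entropy of the three electoral proportions: abstentionists,
# blank/null ballots, and expressed votes. Values near the maximum log(3)
# mean the three categories are nearly balanced; values near 0 mean one
# category dominates.
from math import log

def involvement_entropy(p_abstain, p_blank_null, p_expressed):
    probs = [p_abstain, p_blank_null, p_expressed]
    assert abs(sum(probs) - 1) < 1e-9  # the three proportions must sum to 1
    return -sum(p * log(p) for p in probs if p > 0)

h = involvement_entropy(0.35, 0.03, 0.62)
assert 0 <= h <= log(3)
```

The stylized fact then amounts to the claim that, across large cities, elections, and countries, this scalar clusters tightly around one value.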

]]>