990 results for Information bias


Relevance:

30.00%

Publisher:

Abstract:

When we see a stranger's face we quickly form impressions of his or her personality, and expectations of how the stranger might behave. Might these intuitive character judgements bias source monitoring? Participants read headlines "reported" by a trustworthy- and an untrustworthy-looking reporter. Subsequently, participants recalled which reporter provided each headline. Source memory for likely-sounding headlines was most accurate when a trustworthy-looking reporter had provided the headlines. Conversely, source memory for unlikely-sounding headlines was most accurate when an untrustworthy-looking reporter had provided the headlines. This bias appeared to be driven by the use of decision criteria during retrieval rather than differences in memory encoding. Nevertheless, the bias was apparently unrelated to variations in subjective confidence. These results show for the first time that intuitive, stereotyped judgements of others' appearance can bias memory attributions analogously to the biases that occur when people receive explicit information to distinguish sources. We suggest possible real-life consequences of these stereotype-driven source-monitoring biases. © 2010 Psychology Press.

Relevance:

30.00%

Publisher:

Abstract:

Similar to classic Signal Detection Theory (SDT), the recent optimal Binary Signal Detection Theory (BSDT) and the Neural Network Assembly Memory Model (NNAMM) based on it can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (cue intensity and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, BSDT/NNAMM optimal likelihood and posterior probabilities are analytically analyzed and used to generate ROCs and modified (posterior) mROCs, as well as optimal overall likelihoods and posteriors. It is shown that, for the description of basic discrimination experiments in psychophysics within the BSDT, a ‘neural space’ can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; the BSDT’s isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and near-neutral values of bias are observers’ natural choice. The uniformity (no-priming) hypothesis, concerning the ‘in-mind’ distribution of false-alarm probabilities during ROC or overall probability estimations, is introduced. The BSDT’s and classic SDT’s sensitivity, bias, and ROC and decision spaces are compared.
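As a point of reference for the classic SDT quantities the abstract contrasts with the BSDT/NNAMM parameters, the sketch below computes sensitivity (d'), response bias (criterion c), and an isosensitivity ROC from hit and false-alarm rates. It is a minimal illustration of the standard SDT side only; the BSDT/NNAMM quantities (cue intensity, neuron threshold) are not reproduced here.

```python
# Classic SDT reference quantities: sensitivity (d') and response bias
# (criterion c) from hit and false-alarm rates, plus the isosensitivity ROC
# implied by a fixed d'. This sketches only the standard SDT side of the
# comparison, not the BSDT/NNAMM parameters (cue intensity, neuron threshold).
import numpy as np
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Return d' (perception distance) and criterion c (response bias)."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f        # separation of signal and noise distributions
    c = -0.5 * (z_h + z_f)     # 0 = neutral bias, >0 conservative, <0 liberal
    return d_prime, c

def isosensitivity_roc(d_prime, n=101):
    """ROC points (false-alarm rate, hit rate) as the criterion sweeps, d' fixed."""
    fa = np.linspace(0.001, 0.999, n)
    return fa, norm.cdf(norm.ppf(fa) + d_prime)

print(sdt_indices(0.8, 0.2))   # symmetric performance -> near-zero bias
```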

Relevance:

30.00%

Publisher:

Abstract:

It is important to help researchers find valuable papers in a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from ranking bias, which hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among the networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less biased ranking than previous methods. MutualRank provides a unified model that uses both intra- and inter-network information to rank papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from the computational linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms state-of-the-art competitors, including PageRank, HITS, CoRank, Future Rank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. The rankings of researchers and venues produced by MutualRank are also quite reasonable.
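The following is a toy sketch of the kind of mutual-reinforcement iteration the abstract describes, with made-up adjacency matrices and coupling weights; it is not the published MutualRank formulation, only an illustration of how paper, researcher, and venue scores can reinforce one another.

```python
# Toy mutual-reinforcement ranking over coupled paper/researcher/venue networks.
# The adjacency matrices and the damping/coupling weights are illustrative
# assumptions, not the published MutualRank model.
import numpy as np

def normalize_cols(M):
    s = M.sum(axis=0)
    s[s == 0] = 1.0
    return M / s

def mutual_rank(P, PA, PV, alpha=0.5, iters=200, tol=1e-10):
    """P: paper-paper citations, PA: paper-researcher links, PV: paper-venue links."""
    P, PA, PV = map(normalize_cols, (P.astype(float), PA.astype(float), PV.astype(float)))
    p = np.ones(P.shape[0]) / P.shape[0]      # paper scores
    a = np.ones(PA.shape[1]) / PA.shape[1]    # researcher scores
    v = np.ones(PV.shape[1]) / PV.shape[1]    # venue scores
    for _ in range(iters):
        p_new = alpha * P @ p + (1 - alpha) * 0.5 * (PA @ a + PV @ v)
        a_new = PA.T @ p                       # researchers inherit paper scores
        v_new = PV.T @ p                       # venues inherit paper scores
        p_new, a_new, v_new = (x / x.sum() for x in (p_new, a_new, v_new))
        if np.abs(p_new - p).sum() < tol:
            p, a, v = p_new, a_new, v_new
            break
        p, a, v = p_new, a_new, v_new
    return p, a, v
```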

Relevance:

30.00%

Publisher:

Abstract:

Auditor decisions regarding the causes of accounting misstatements can affect audit effectiveness and efficiency. Specifically, overconfidence in one's decision can lead to an ineffective audit, whereas underconfidence in one's decision can lead to an inefficient audit. This dissertation explored the implications of providing various types of information cues to decision-makers in an analytical procedures task and investigated the relationship between different types of evidence cues (confirming, disconfirming, redundant, or non-redundant) and the reduction in calibration bias. Data were collected in a laboratory experiment with 45 accounting student participants. Research questions were analyzed using a 2 x 2 x 2 between-subjects and within-subjects analysis of covariance (ANCOVA). Results indicated that presenting subjects with information cues dissimilar to the choice they made is an effective intervention for reducing the overconfidence commonly found in decision-making. In addition, other information characteristics, specifically non-redundant information, can help reduce a decision-maker's overconfidence (calibration bias) for difficult (compared to easy) decision tasks.
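Calibration bias of the kind studied here is commonly quantified as the gap between mean stated confidence and actual accuracy; the minimal sketch below uses hypothetical data to show the computation.

```python
# Over/underconfidence as the gap between mean confidence and accuracy.
# The data below are hypothetical; positive scores indicate overconfidence,
# negative scores indicate underconfidence.
import numpy as np

confidence = np.array([0.9, 0.8, 0.95, 0.7, 0.85])  # stated probability of being correct
correct    = np.array([1,   0,   1,    1,   0])      # 1 = the judgment was right

calibration_bias = confidence.mean() - correct.mean()
print(f"mean confidence = {confidence.mean():.2f}, "
      f"accuracy = {correct.mean():.2f}, bias = {calibration_bias:+.2f}")
```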

Relevance:

30.00%

Publisher:

Abstract:

Basic research on expectancy effects suggests that investigative interviewers with pre-conceived notions about a crime may negatively influence the interview process in meaningful ways, yet many interviewing protocols recommend that interviewers review all available information prior to conducting their interviews. Previous research suggests that interviewers with no pre-interview knowledge elicit more detailed and accurate accounts than their informed counterparts (Cantlon et al., 1996; Rivard et al., under review). The current study investigated (a) whether the benefit of blind versus informed interviewing is moderated by cautionary interviewer instructions to avoid suggestive questions and (b) whether any possible effects of pre-interview information extend beyond the immediate context of the forensic interview. Paired participants (N = 584) were assigned randomly either to the role of interviewer or witness. Witnesses viewed a mock crime video and were interviewed one week later by an interviewer who received either correct, incorrect, or no information about the crime event. Half of the interviewers were assigned randomly to receive additional instructions to avoid suggestive questions. All participants returned 1 week after the interview to recall the crime video (for the witness) or the information recalled by the witness during the interview (for the interviewer). All interviews and delayed recall measures were scored for the quantity and accuracy of information reported. Results replicate earlier findings that blind interviewers elicit more information from witnesses, without a decrease in accuracy rate. However, instructions to avoid suggestive questions did not moderate the effect of blind versus informed interviewing on witness recall during the interview. Results further demonstrate that the effects of blind versus non-blind interviewing may extend beyond the immediate context of the interview to a later recall attempt. With instructions to avoid suggestive questions, witnesses of blind interviewers were more accurate than witnesses of incorrectly informed interviewers when recalling the event 1 week later. In addition, blind interviewers had more accurate memories for the witnesses' account of the event during the interview compared to non-blind interviewers.

Relevance:

30.00%

Publisher:

Abstract:

Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory and rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are finally calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity across the decision-making process.
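The information-entropy half of the method can be illustrated with the standard entropy-weight calculation; the sketch below uses a hypothetical decision matrix as a stand-in for the rough-number-aggregated judgments, which are not reproduced here.

```python
# Entropy-based criterion weights for a concept-evaluation matrix
# (rows = candidate concepts, columns = criteria). The matrix is a
# hypothetical stand-in for the aggregated (rough-number) judgments.
import numpy as np

X = np.array([[7.0, 5.0, 8.0],
              [6.0, 6.5, 7.0],
              [8.0, 4.0, 6.0],
              [5.5, 7.0, 7.5]])

P = X / X.sum(axis=0)                          # share of each concept per criterion
m = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion, in [0, 1]
w = (1 - E) / (1 - E).sum()                    # low-entropy (discriminating) criteria weigh more
scores = (X / X.max(axis=0)) @ w               # simple weighted composite, used for ranking
print(np.round(w, 3), np.argsort(-scores))     # weights and concept ranking
```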

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes a manager's optimal ex-ante reporting system using a Bayesian persuasion approach (Kamenica and Gentzkow, 2011) in a setting where investors affect cash flows through their decision to finance the firm's investment opportunities, possibly assisted by the costly acquisition of additional information (inspection). I examine how the informativeness and the bias of the optimal system are determined by investors' inspection cost, the degree of incentive alignment between the manager and the investor, and the prior belief that the project is profitable. I find that a mis-aligned manager's system is informative only when the market prior is pessimistic and is always positively biased; this bias decreases as investors' inspection cost decreases. In contrast, a well-aligned manager's system is fully revealing when investors' inspection cost is high, and is counter-cyclical to the market belief when the inspection cost is low: it is positively (negatively) biased when the market belief is pessimistic (optimistic). Furthermore, I explore the extent to which the results generalize to a case with managerial manipulation and discuss the implications for investment efficiency. Overall, the analysis describes the complex interactions among determinants of firm disclosures and governance, and offers explanations for the mixed empirical results in this area.
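For intuition about why a misaligned sender's optimal system is positively biased and informative only under a pessimistic prior, the textbook binary-state Kamenica and Gentzkow example can be computed directly; the sketch below covers only that canonical case, not the paper's richer model with costly inspection.

```python
# Textbook binary-state Bayesian persuasion (Kamenica & Gentzkow 2011):
# the project is good/bad, the investor finances iff P(good) >= 1/2, and a
# misaligned manager wants financing regardless of the state. This is the
# canonical illustration only; the paper's model adds costly inspection.
def optimal_persuasion(prior_good, threshold=0.5):
    """Return P(recommend invest | good), P(recommend invest | bad),
    and the manager's payoff (probability the investor finances)."""
    if prior_good >= threshold:
        # Investor already finances on the prior: an uninformative system is optimal.
        return 1.0, 1.0, 1.0
    # Otherwise recommend 'invest' always in the good state and just often enough
    # in the bad state that the posterior after 'invest' equals the threshold.
    q_bad = prior_good * (1 - threshold) / (threshold * (1 - prior_good))
    prob_invest = prior_good + (1 - prior_good) * q_bad
    return 1.0, q_bad, prob_invest

print(optimal_persuasion(0.3))  # positively biased recommendations; financing prob = 0.6
```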

Relevance:

30.00%

Publisher:

Abstract:

The ideal conception of a judge is that of a neutral arbitrator. However, there are good reasons to believe that personal characteristics, including professional experience, bias judges. These suspicions inspired two hypotheses: (1) judges who are former prosecutors are biased in favor of the government in criminal appeals; (2) judges who are former criminal defense attorneys are biased in favor of the criminal appellant. These hypotheses were tested by gathering professional information about state supreme court judges in the South for the years 1995 through 1998, which was then matched to an existing database recording those judges' demographics and decisions in criminal appeals during that time. Logistic regressions of the data revealed that, even when other characteristics, including gender, race, and legal experience, were accounted for, criminal defense experience remained a statistically significant predictor: judges with a background in criminal defense were more likely to reverse criminal court decisions. In contrast, prosecutorial experience was not a good predictor of how a judge ruled; judges with a background in prosecution did not rule much differently from those without such a background.
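A minimal sketch of the kind of logistic regression reported here is shown below; the data file, variable names, and coding are hypothetical placeholders rather than the study's actual dataset.

```python
# Sketch of the logistic regression design described above: probability of
# reversing a criminal conviction as a function of judge background, controlling
# for demographics. The CSV file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: reversed (0/1), former_defense (0/1),
# former_prosecutor (0/1), female (0/1), minority (0/1), years_experience
df = pd.read_csv("state_supreme_court_criminal_appeals.csv")

model = smf.logit(
    "reversed ~ former_defense + former_prosecutor + female + minority + years_experience",
    data=df,
).fit()
print(model.summary())   # a significant former_defense coefficient mirrors the finding
```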

Relevance:

30.00%

Publisher:

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences---corrected for panel attrition---are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
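A minimal sketch of the augmented-records idea follows, with hypothetical variable names and prior: synthetic rows are appended so that one margin matches the prior belief while all other variables are left missing, and the number of synthetic rows controls the prior's weight.

```python
# Sketch of the prior-through-augmentation idea: append n_aug synthetic rows
# whose margin for one variable matches a prior belief, leaving the other
# variables missing. Variable names and the prior are hypothetical; n_aug
# controls how strongly the prior pulls the latent class model.
import numpy as np
import pandas as pd

def augment_with_margin(df, var, prior_probs, n_aug, rng=np.random.default_rng(0)):
    levels = list(prior_probs)
    synth = pd.DataFrame({c: pd.NA for c in df.columns}, index=range(n_aug))
    synth[var] = rng.choice(levels, size=n_aug, p=[prior_probs[l] for l in levels])
    synth["augmented"] = True
    return pd.concat([df.assign(augmented=False), synth], ignore_index=True)
    # fit the latent class model to the concatenated data, treating NA as missing

df = pd.DataFrame({"educ": ["HS", "BA", "BA", "HS"], "employed": [1, 1, 0, 1]})
aug = augment_with_margin(df, "educ", {"HS": 0.6, "BA": 0.3, "Grad": 0.1}, n_aug=50)
print(aug["educ"].value_counts(normalize=True).round(2))
```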

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
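The reporting-error idea can be sketched in simplified form: estimate the conditional distribution of the true value given the reported value from a linked gold-standard sample, then impute error-corrected values in the large survey. The data, categories, and single-draw imputation below are illustrative only, not the NSCG/ACS procedure.

```python
# Sketch of a simple reporting-error correction: estimate P(true | reported)
# from a (hypothetical) linked gold-standard sample, then impute error-corrected
# values in the large survey by drawing from that conditional distribution.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Linked records: the same respondents appear in both sources.
linked = pd.DataFrame({
    "reported": ["BA", "BA", "HS", "BA", "HS", "HS", "Grad", "Grad"],
    "true":     ["BA", "HS", "HS", "BA", "HS", "HS", "Grad", "BA"],
})
p_true_given_reported = pd.crosstab(linked["reported"], linked["true"], normalize="index")

# Large survey with reported education only.
survey = pd.DataFrame({"reported_educ": ["BA", "HS", "Grad", "BA", "BA"]})

def impute_corrected(reported):
    probs = p_true_given_reported.loc[reported]
    return rng.choice(probs.index, p=probs.to_numpy())   # single stochastic draw

survey["educ_corrected"] = survey["reported_educ"].map(impute_corrected)
print(survey)
```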

Relevance:

30.00%

Publisher:

Abstract:

The extractive industry is characterized by high levels of risk and uncertainty. These attributes create challenges when applying traditional accounting concepts (such as the revenue recognition and matching concepts) to the preparation of financial statements in the industry. The International Accounting Standards Board (2010) states that the objective of general purpose financial statements is to provide useful financial information to assist the capital allocation decisions of existing and potential providers of capital. Useful information is defined as information that is relevant and faithfully represented, so as to best aid the investment decisions of capital providers. Value relevance research utilizes adaptations of the Ohlson (1995) model to assess value relevance, one of the attributes that makes information useful. This study first examines the value relevance of the financial information disclosed in the financial reports of extractive firms. The findings reveal that the value relevance of information disclosed in the financial reports depends on the circumstances of the firm, including sector, size and profitability. Traditional accounting concepts such as the matching concept can be ineffective when applied to small firms that are primarily engaged in non-production activities involving significant levels of uncertainty, such as exploration activities or the development of sites.

Standard setting bodies such as the International Accounting Standards Board and the Financial Accounting Standards Board have addressed the financial reporting challenges in the extractive industry by allowing a significant amount of accounting flexibility in industry-specific accounting standards, particularly in relation to the accounting treatment of exploration and evaluation expenditure. Therefore, this study secondly examines whether the choice of exploration accounting policy has an effect on the value relevance of information disclosed in the financial reports. The findings show that, in general, the Successful Efforts method produces value relevant information in the financial reports of profitable extractive firms. However, specifically in the oil & gas sector, the Full Cost method produces value relevant asset disclosures if the firm is loss-making. This indicates that investors in production- and non-production-orientated firms have different information needs, and these needs cannot be simultaneously fulfilled by a single accounting policy. In the mining sector, a preference by large profitable mining companies for a policy more conservative than either the Full Cost or Successful Efforts methods does not result in more value relevant information being disclosed in the financial reports. This finding supports the view that the qualitative characteristic of prudence is a form of bias which has a downward effect on asset values.

The third aspect of this study is an examination of the effect of corporate governance on the value relevance of disclosures made in the financial reports of extractive firms. The findings show that the key factor influencing the value relevance of financial information is the ability of the directors to select accounting policies which reflect the economic substance of the particular circumstances facing the firm in an effective way. Corporate governance is found to have an effect on value relevance, particularly in the oil & gas sector. However, there is no significant difference between the exploration accounting policy choices made by directors of firms with good systems of corporate governance and those with weak systems of corporate governance.
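Value-relevance studies of this kind typically estimate a price-levels regression adapted from Ohlson (1995); the sketch below shows such a specification with hypothetical column names and an exploration-and-evaluation asset split out, not the thesis' exact estimated model.

```python
# Sketch of a price-levels value-relevance regression adapted from Ohlson (1995):
# share price regressed on book value per share and earnings per share, with a
# capitalized exploration & evaluation asset separated out. The CSV file, column
# names, and controls are hypothetical, not the thesis' estimated specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("extractive_firms.csv")   # hypothetical panel of firm-years

model = smf.ols(
    "price ~ book_value_per_share + eps + ee_asset_per_share + C(sector)",
    data=df,
).fit(cov_type="HC1")
print(model.summary())    # adjusted R^2 is the usual value-relevance metric
```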

Relevance:

30.00%

Publisher:

Abstract:

The southern coast of South America exhibits many outcrops with abundant shell beds, from the Pleistocene through the Recent. How much biological information is preserved within these shell beds? In other words, what is the actual probability that a living community will leave a fossil record corresponding to these shell deposits? Although ecological and biogeographical aspects have been pointed out, no taphonomically oriented studies at these temporal scales have been available to date. Quantitative comparisons between living (LAs), death (DAs) and fossil assemblages (FAs) are important not only in strictly taphonomic studies, but have also become a leading tool for conservation paleobiology analysis. Comparing LAs, DAs and FAs from estuaries and lagoons in the Rio Grande do Sul Coastal Plain makes it possible to quantitatively understand the nature and quantity of biological information preserved in fossil associations in Holocene lagoon facies. As already noted by several authors, spatial scale affects the analysis, but we detected that the FAs reflect the living assemblages rather than the death assemblages, which had not previously been recognized. The results obtained here illustrate that species present in the DAs are not as well preserved in the recent (Holocene) fossil record as originally thought. Strictly lagoonal species are the most prone to leave a fossil record. The authors consider the fidelity pattern observed here for estuarine mollusks to be driven by (i) high temporal and spatial variability in the LAs, (ii) spatial mixing in the DAs, and (iii) differential preservation of shells, due to long residence times in the taphonomically active zone.
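Live-death-fossil fidelity comparisons of this kind are commonly summarized with taxonomic similarity indices; the sketch below computes Jaccard similarity between hypothetical LA, DA and FA species lists purely as an illustration of the comparison.

```python
# Sketch of the kind of fidelity comparison described: taxonomic similarity
# (Jaccard index) between living (LA), death (DA) and fossil (FA) assemblages.
# The species lists are hypothetical placeholders, not the study's data.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

LA = {"Erodona mactroides", "Heleobia australis", "Tagelus plebeius"}
DA = LA | {"Mactra isabelleana", "Anomalocardia brasiliana"}   # mixing inflates the DA
FA = {"Erodona mactroides", "Tagelus plebeius"}                # strictly lagoonal taxa persist

print("LA-DA fidelity:", round(jaccard(LA, DA), 2))
print("LA-FA fidelity:", round(jaccard(LA, FA), 2))
print("DA-FA fidelity:", round(jaccard(DA, FA), 2))
```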

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses topics in behavioural economics, namely information and fairness, through five research papers. The first two contributions are concerned with extending standard auction formats with information acquisition strategies. The third paper addresses global games framed as a speculative attack and tests theoretical predictions for risk and ambiguity. The fourth contribution deals with disclosing conflicts of interest, where one player has a monetary incentive to deceive. The last paper extends a standard model of social preferences with a second fairness dimension and studies how economic agents distort fairness norms, exhibiting a self-serving bias.

Relevance:

30.00%

Publisher:

Abstract:

This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, by way of either a known constraint or the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation. This approach leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research. The advantages are realized in the form of efficiency gains, bias reduction, and flexibility of model specification. A bias arises when bounding information is present but ignored; it is mitigated within this framework. An efficiency gain arises in the sense that the statistical methods make use of conditioning information, as revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of these improvements in model specification is that more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model. This approach is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
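For context, the sketch below shows the baseline discretely-sampled diffusion MLE (an Euler-approximated likelihood for a mean-reverting model) that such a framework extends; it deliberately ignores bounding information between observations, which is precisely the source of bias the thesis addresses, and the parameter values are illustrative.

```python
# Baseline discretely-sampled diffusion MLE via an Euler approximation for a
# mean-reverting (Vasicek-type) model dX = kappa*(theta - X) dt + sigma dW.
# This is the standard estimator such a framework extends; it ignores any
# bounding information between observations, which is the bias the thesis targets.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, x, dt):
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = x[:-1] + kappa * (theta - x[:-1]) * dt   # one-step Euler conditional mean
    sd = sigma * np.sqrt(dt)                        # one-step conditional std. dev.
    return -norm.logpdf(x[1:], loc=mean, scale=sd).sum()

# Simulate an illustrative sample path and recover the parameters.
rng = np.random.default_rng(0)
dt, n, kappa, theta, sigma = 1 / 252, 2000, 2.0, 0.03, 0.01
x = np.empty(n); x[0] = 0.03
for t in range(1, n):
    x[t] = x[t-1] + kappa * (theta - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

fit = minimize(neg_loglik, x0=[1.0, 0.02, 0.02], args=(x, dt), method="Nelder-Mead")
print(fit.x)   # estimates of (kappa, theta, sigma)
```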

Relevance:

30.00%

Publisher:

Abstract:

Annual counts of migrating raptors at fixed observation points are a widespread practice, and changes in numbers counted over time, adjusted for survey effort, are commonly used as indices of trends in population size. Unmodeled year-to-year variation in detectability may introduce bias, reduce precision of trend estimates, and reduce power to detect trends. We conducted dependent double-observer surveys at the annual fall raptor migration count at Lucky Peak, Idaho, in 2009 and 2010 and applied Huggins closed-capture removal models and information-theoretic model selection to determine the relative importance of factors affecting detectability. The most parsimonious model included effects of observer team identity, distance, species, and day of the season. We then simulated 30 years of counts with heterogeneous individual detectability, a population decline (λ = 0.964), and unexplained random variation in the number of available birds. Imperfect detectability did not bias trend estimation, and increased the time required to achieve 80% power by less than 11%. Results suggested that availability is a greater source of variance in annual counts than detectability; thus, efforts to account for availability would improve the monitoring value of migration counts. According to our models, long-term trends in observer efficiency or migratory flight distance may introduce substantial bias to trend estimates. Estimating detectability with a novel count protocol like our double-observer method is just one potential means of controlling such effects. The traditional approach of modeling the effects of covariates and adjusting the index may also be effective if ancillary data is collected consistently.
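The power simulation described can be reproduced in outline: a population declining at λ = 0.964 over 30 years, binomial detection with unmodeled year-to-year variation, and a log-linear trend fit to the counts. The detection parameters below are illustrative assumptions, not the fitted values from the Lucky Peak data.

```python
# Outline of the simulation described above: 30 years of counts from a declining
# population (lambda = 0.964) with imperfect, year-varying detection, followed by
# a log-linear (Poisson) trend fit. Detection parameters are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
years = np.arange(30)
lam, n0 = 0.964, 5000
available = rng.poisson(n0 * lam ** years)                 # birds available each year
p_detect = np.clip(rng.normal(0.7, 0.1, size=30), 0, 1)    # unmodeled detectability noise
counts = rng.binomial(available, p_detect)

X = sm.add_constant(years)
trend = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(f"estimated annual rate of change: {np.exp(trend.params[1]):.3f} (true {lam})")
```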

Relevance:

20.00%

Publisher:

Abstract:

One of the great challenges for the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequences of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials (p(x)) and code generator polynomials (g(x)) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in mutational analysis and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in reduced time and laboratory costs.
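A heavily simplified sketch of the labeling-plus-cyclic-code idea follows: a nucleotide sequence is mapped to a sequence over Z4 and tested for divisibility by a generator polynomial g(x). Both the labeling and the g(x) used here are illustrative assumptions, not the labelings or generator polynomials derived in the paper.

```python
# Sketch of the labeling + cyclic-code idea: map a nucleotide sequence to a
# sequence over Z4 and test whether it is divisible by a monic generator
# polynomial g(x) over Z4. The labeling (A,C,G,T) -> (0,1,2,3) and the g(x)
# below are illustrative assumptions, not the paper's; in a true cyclic code
# of length n, g(x) must also divide x^n - 1.
LABELING = {"A": 0, "C": 1, "G": 2, "T": 3}   # one of the 24 possible labelings

def poly_mod(c, g, q=4):
    """Remainder of c(x) divided by monic g(x) over Z_q (coefficients low -> high)."""
    c = c[:]
    for i in range(len(c) - len(g), -1, -1):
        factor = c[i + len(g) - 1] % q        # coefficient to eliminate (g is monic)
        if factor:
            for j, gj in enumerate(g):
                c[i + j] = (c[i + j] - factor * gj) % q
    return [x % q for x in c[:len(g) - 1]]

def is_codeword(dna, g):
    coeffs = [LABELING[b] for b in dna]       # DNA string -> polynomial over Z4
    return all(r == 0 for r in poly_mod(coeffs, g))

g = [1, 3, 1]                 # illustrative monic generator polynomial 1 + 3x + x^2
print(is_codeword("ATGCA", g))
```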