986 results for: Communicating search. Tuberculin test
Abstract:
The effectiveness of pre-play communication in achieving efficient outcomes has long been a subject of controversy. In some environments, cheap talk may help to achieve coordination. However, Aumann conjectures that, in a variant of the Stag Hunt game, a signal for efficient play is not self-enforcing and concludes that an "agreement to play [the efficient outcome] conveys no information about what the players will do." Harsanyi and Selten (1988) cite this example as an illustration of risk-dominance vs. payoff-dominance. Farrell and Rabin (1996) agree with the logic, but suspect that cheap talk will nonetheless achieve efficiency. The conjecture is tested with one-way communication. When the sender first chooses a signal and then an action, there is impressive coordination: a 94% probability for the potentially efficient (but risky) play, given a signal for efficient play. Without communication, efforts to achieve efficiency were unsuccessful, as the proportion of B moves is only 35%. I also test a hypothesis that the order of the action and the signal affects the results, finding that the decision order is indeed important. While Aumann's conjecture is behaviorally disconfirmed when the signal is determined initially, the signal's credibility seems to be much more suspect when the sender is known to have first chosen an action, and the results are not statistically distinguishable from those when there is no signal. Some applications and issues in communication and coordination are discussed.
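The risk-dominance versus payoff-dominance tension that Aumann's conjecture turns on can be made concrete with a small worked example. The payoff numbers below are hypothetical (the abstract does not report the experimental payoff matrix); the sketch only computes the belief threshold above which the risky action B is a best response.

    # Illustrative Stag Hunt payoffs; hypothetical numbers, not the cited experiment's.
    # payoff[(own_action, other_action)] is the row player's payoff;
    # "B" is the risky/efficient action, "A" the safe one.
    payoff = {
        ("B", "B"): 9.0,  # both coordinate on the efficient outcome
        ("B", "A"): 0.0,  # risky action against a safe opponent
        ("A", "B"): 8.0,  # safe action against a risky opponent
        ("A", "A"): 7.0,  # both play safe
    }

    def expected_payoff(own, p_other_plays_B):
        """Expected payoff of `own` given the belief that the opponent plays B with probability p."""
        return p_other_plays_B * payoff[(own, "B")] + (1 - p_other_plays_B) * payoff[(own, "A")]

    # Belief p* at which B and A give the same expected payoff.
    p_star = (payoff[("A", "A")] - payoff[("B", "A")]) / (
        (payoff[("B", "B")] - payoff[("A", "B")]) + (payoff[("A", "A")] - payoff[("B", "A")])
    )
    assert abs(expected_payoff("B", p_star) - expected_payoff("A", p_star)) < 1e-9
    print(f"B is a best response only if P(opponent plays B) >= {p_star:.3f}")  # 0.875 here
    # (B, B) is payoff-dominant, but because p* > 1/2 the safe action A is risk-dominant:
    # exactly the tension a cheap-talk signal for efficient play is supposed to resolve.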
Abstract:
This paper argues that low-stakes test scores, available in surveys, may be partially determined by test-taking motivation, which is associated with personality traits but not with cognitive ability. Therefore, such test score distributions may not be informative regarding cognitive ability distributions. Moreover, correlations, found in survey data, between high test scores and economic success may be partially caused by favorable personality traits. To demonstrate these points, I use the coding speed test that was administered without incentives to National Longitudinal Survey of Youth 1979 (NLSY) participants. I suggest that, due to its simplicity, its scores may especially depend on individuals' test-taking motivation. I show that, controlling for conventional measures of cognitive skills, the coding speed scores are correlated with future earnings of male NLSY participants. Moreover, the coding speed scores of a highly motivated, though less educated, population (potential enlistees in the armed forces) are higher than NLSY participants' scores. I then use controlled experiments to show that when no performance-based incentives are provided, participants' characteristics, but not their cognitive skills, affect effort invested in the coding speed test. Thus, participants with the same ability (measured by their scores on an incentivized test) have significantly different scores on tests without performance-based incentives.
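As a rough illustration of the kind of conditional correlation described here, the sketch below regresses simulated log earnings on a coding-speed score while controlling for a conventional cognitive measure. All data, variable names, and coefficients are invented for the example and are not taken from the NLSY or from the paper.

    # Simulated illustration: a low-stakes score that loads on unobserved motivation still predicts
    # earnings after conditioning on a cognitive measure (labelled "afqt" here as an assumption).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    afqt = rng.standard_normal(n)            # conventional cognitive-skill measure
    motivation = rng.standard_normal(n)      # unobserved test-taking motivation
    coding_speed = 0.6 * afqt + 0.7 * motivation + rng.standard_normal(n)
    log_earnings = 0.30 * afqt + 0.20 * motivation + rng.standard_normal(n)

    # OLS of log earnings on an intercept, the coding-speed score, and the cognitive control.
    X = np.column_stack([np.ones(n), coding_speed, afqt])
    beta, *_ = np.linalg.lstsq(X, log_earnings, rcond=None)
    print(f"coefficient on coding speed, holding the cognitive measure fixed: {beta[1]:.3f}")
    # The coefficient stays positive because the low-stakes score proxies for motivation,
    # which independently predicts earnings.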
Abstract:
A test-chamber (K&L-Chamber) made of cardboard and acrylic plastic, and consisting of four sections (A, B, C and D), was developed by Klowden & Lea (1978) for Aedes aegypti host-seeking behavior studies. Later, Foster & Lutes (1985) also used an identical chamber to successfully evaluate the efficacy of electronic repellers. A modified K&L-Chamber for behavioral studies of Ae. aegypti adults is described here. The chamber was made of polystyrene, consists of three sections (A, B and C), and uses a human hand and a fluorescent lamp as stimuli to attract the mosquitoes. The suitability of the present test-chamber was validated by assaying 80 replicates, releasing 10 Ae. aegypti females in each replicate. The females were released into section A and allowed to fly to section C. A mean of 96.0% (s.e. 0.213) of the Ae. aegypti females successfully reached section C. The present test-chamber is cheaper, easier to handle, and as efficient as the K&L-Chamber, compared with Foster & Lutes (1985), who observed 93.8% of Ae. aegypti reaching the trap section.
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), this paper seeks to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that requirements (i) and (ii) should not be a cause of concern.
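The contested correction can be summarized with simple arithmetic. In the sketch below the numbers are purely illustrative: gamma stands for the random-match probability, and the "np rule" divides the likelihood ratio by the database size, which is the adjustment the paper argues is unnecessary.

    # Illustrative arithmetic only; the probabilities and database size are hypothetical.
    gamma = 1e-6          # random-match probability of the corresponding characteristics
    n_database = 10_000   # size of the searched database (all other members excluded)

    lr_match = 1 / gamma                 # LR for "the suspect is the source" vs "some other person is"
    lr_np_rule = lr_match / n_database   # LR reduced by the contested database-size correction

    print(f"LR without the correction factor: {lr_match:,.0f}")
    print(f"LR under the 'np rule'          : {lr_np_rule:,.0f}")
    # The Bayesian-network analysis referred to in the abstract supports reporting the
    # uncorrected LR for the suspect-level propositions.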
Abstract:
Firms compete by choosing both a price and a design from a family of designs that can be represented as demand rotations. Consumers engage in costly sequential search among firms. Each time a consumer pays a search cost he observes a new offering. An offering consists of a price quote and a new good, where goods might vary in the extent to which they are good matches for the consumer. In equilibrium, only two design-styles arise: either the most niche, where consumers are likely to either love or loathe the product, or the broadest, where consumers are likely to have similar valuations. In equilibrium, different firms may simultaneously offer both design-styles. We perform comparative statics on the equilibrium and show that a fall in search costs can lead to higher industry prices and profits and lower consumer surplus. Our analysis is related to discussions of how the internet has led to the prevalence of niche goods and the "long tail" phenomenon.
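A toy sequential-search calculation helps fix intuition for why niche ("love it or loathe it") designs interact with search costs. The sketch uses the standard reservation-value rule from search theory with two illustrative match-value distributions that share the same mean; it is not the paper's model.

    # Reservation value r solves  search_cost = E[(v - r)+]  (Pandora / Weitzman rule).
    # "niche" = high-variance match values, "broad" = similar valuations; same mean by construction.
    import numpy as np

    rng = np.random.default_rng(1)
    draws = 200_000
    niche = rng.choice([0.0, 1.0], size=draws)   # love-or-loathe valuations, mean 0.5
    broad = rng.uniform(0.4, 0.6, size=draws)    # tightly clustered valuations, mean 0.5

    def reservation_value(values, search_cost):
        """Largest r with E[(v - r)+] >= search_cost, found by bisection on the Monte Carlo draws."""
        lo, hi = float(values.min()), float(values.max())
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if np.maximum(values - mid, 0.0).mean() >= search_cost else (lo, mid)
        return lo

    for c in (0.05, 0.01):
        print(f"search cost {c}: r_niche={reservation_value(niche, c):.3f}, "
              f"r_broad={reservation_value(broad, c):.3f}")
    # With equal means, the high-variance (niche) design has the higher reservation value,
    # and the gap widens as search costs fall.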
Abstract:
This paper studies two important reasons why people violate procedure invariance: loss aversion and scale compatibility. The paper extends previous research on loss aversion and scale compatibility by studying loss aversion and scale compatibility simultaneously, by looking at a new decision domain, medical decision analysis, and by examining the effect of loss aversion and scale compatibility on "well-contemplated preferences." We find significant evidence both of loss aversion and scale compatibility. However, the sizes of the biases due to loss aversion and scale compatibility vary over trade-offs, and most participants do not behave consistently according to loss aversion or scale compatibility. In particular, the effect of loss aversion in medical trade-offs decreases with duration. These findings are encouraging for utility measurement and prescriptive decision analysis. There appear to exist decision contexts in which the effects of loss aversion and scale compatibility can be minimized and utilities can be measured that do not suffer from these distorting factors.
Abstract:
This paper tests the internal consistency of time trade-off utilities. We find significant violations of consistency in the direction predicted by loss aversion. The violations disappear for higher gauge durations. We show that loss aversion can also explain that, for short gauge durations, time trade-off utilities exceed standard gamble utilities. Our results suggest that time trade-off measurements that use relatively short gauge durations, like the widely used EuroQol algorithm (Dolan 1997), are affected by loss aversion and lead to utilities that are too high.
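For readers unfamiliar with the elicitation, a time trade-off question asks for the number of years x in full health that the respondent finds equivalent to the gauge duration t in the impaired health state, and the utility is taken as x/t. The numbers below are invented solely to illustrate how a shorter gauge duration can yield a higher elicited utility.

    # Purely illustrative TTO arithmetic; the responses are hypothetical, not data from the paper.
    def tto_utility(x_full_health_years, gauge_duration_years):
        """Utility of the impaired state under the linear QALY assumption: U = x / t."""
        return x_full_health_years / gauge_duration_years

    print(tto_utility(9.0, 10.0))   # short gauge: few years given up -> utility 0.90
    print(tto_utility(32.0, 40.0))  # long gauge: proportionally more given up -> utility 0.80
    # If loss aversion makes the years given up loom large, responses to short gauge durations
    # are pushed toward the first pattern, i.e. utilities that are too high.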
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
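The three model classes can be sketched, for a constant health state with utility u held for t years, as a linear, a power, and an exponential duration weighting. The power coefficient 0.65 is the value reported in the abstract; the exponential parameterization and the other numbers are assumptions for illustration.

    # Duration weightings for a constant health state held for t years; illustrative sketch only.
    import math

    def qaly_linear(u, t):
        return u * t

    def qaly_power(u, t, alpha=0.65):          # alpha = 0.65 is the abstract's optimal coefficient
        return u * t ** alpha

    def qaly_exponential(u, t, gamma=0.05):    # one common constant-discounting form (assumed)
        return u * (1 - math.exp(-gamma * t)) / gamma

    u, t = 0.7, 20.0
    for name, fn in [("linear", qaly_linear), ("power", qaly_power), ("exponential", qaly_exponential)]:
        print(f"{name:12s}: {fn(u, t):.2f}")
    # Under the power model later years add less value than earlier ones, which is why
    # TTO-based QALY calculations that assume linearity can be biased.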
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a mere informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality. A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least with no consensus on their validity. How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model. In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms control a set of flights during adjacent weeks, and their behavior and results are observed over a relatively long period of time (9 months). In parallel, a group of control flights were managed using the traditional mix of manual and algorithmic control (incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
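One simple way to picture the design described here is an alternating-week assignment of test flights to the competing algorithms, with a separate control group left on the incumbent process. The sketch below is only a schematic reading of that setup; the flight numbers, the two-arm structure, and the eight-week horizon are invented.

    # Schematic assignment for a live RM test: test flights alternate between competing algorithms
    # in adjacent weeks, control flights stay with the incumbent process. All identifiers are made up.
    from itertools import cycle

    test_flights = ["IB3100", "IB3102", "IB3104"]   # hypothetical experimental flights
    control_flights = ["IB3106", "IB3108"]          # managed by the incumbent system throughout
    arms = ["algorithm_A", "algorithm_B"]

    schedule = {}
    for i, flight in enumerate(test_flights):
        arm_cycle = cycle(arms[i % 2:] + arms[:i % 2])   # offset so both arms run every week
        schedule[flight] = {week: next(arm_cycle) for week in range(1, 9)}

    for flight, by_week in schedule.items():
        print(flight, by_week)
    print("control:", {flight: "incumbent" for flight in control_flights})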
Abstract:
BACKGROUND: Tuberculosis (TB) screening in prisons is recommended, but the appropriate methods remain controversial. Studies evaluating screening in remand prisons are scarce. METHOD: Between 1997 and 2001, voluntary screening based on the tuberculin skin test (TST) was offered to all prisoners on entry into the largest remand prison in Switzerland. Prisoners with positive results underwent chest X-rays. We analysed this information collected in an anonymous database. RESULTS: A total of 4890 prisoners entered the prison and were eligible for screening; 3779 (77.3%) had a TST performed, on average 9 days after arrival: 46.9% were positive (induration >= 10 mm). Positive TST rates were similar over the 5 years. Women were more likely to have a negative TST (60.4%) than men (47.7%; P < 0.001, Pearson's chi(2) 16.5). Positive TSTs varied according to the prisoner's country of origin (64% for sub-Saharan Africa, 57% for Eastern Europe, 56% for North Africa, 51% for Asia and 34% for North and West Europe). CONCLUSION: The percentage of TST-positive subjects was high, and most did not receive preventive treatment for latent TB. The usefulness of systematic TST for all prisoners on entry is limited, as diagnosis of TB disease usually remains the priority in prisons.
Abstract:
A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
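A minimal sketch of the computation the abstract refers to, using the scaled-difference formula associated with this approach: the scaling correction of each model is recovered as the ratio of its regular to its scaled chi-square, and the difference statistic is rescaled by a weighted combination of the two. The input numbers below are hypothetical.

    # T0, T1: regular (normal-theory) chi-square statistics of the nested models M0 (restricted)
    # and M1; Tbar0, Tbar1: their Satorra-Bentler scaled counterparts; d0 > d1: degrees of freedom.
    def sb_scaled_difference(T0, Tbar0, d0, T1, Tbar1, d1):
        c0 = T0 / Tbar0                        # scaling correction of M0
        c1 = T1 / Tbar1                        # scaling correction of M1
        cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # scaling correction of the difference test
        return (T0 - T1) / cd, d0 - d1

    # Hypothetical software output:
    Tbar_d, df_d = sb_scaled_difference(T0=95.0, Tbar0=80.0, d0=40,
                                        T1=60.0, Tbar1=52.0, d1=35)
    print(f"scaled difference chi-square = {Tbar_d:.2f} on {df_d} df")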
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e., star-like or "centralized"), whereas it is largely homogeneous (or "decentralized") for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
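The notion of a collapse threshold can be illustrated with the most centralized case: if every problem must pass through the hub of a star network, the hub behaves like a single-server queue, and the stock of floating problems stays bounded only while the arrival rate is below the hub's service rate. The sketch below uses the textbook M/M/1 mean queue length as a stand-in; it illustrates the general idea, not the paper's model.

    # Toy congestion threshold for a star ("centralized") network: the hub as an M/M/1 queue.
    def mean_floating_problems(arrival_rate, hub_service_rate):
        rho = arrival_rate / hub_service_rate
        if rho >= 1.0:
            return float("inf")       # past the threshold the backlog grows without bound
        return rho / (1.0 - rho)      # stationary mean number in system for an M/M/1 queue

    for lam in (0.3, 0.6, 0.9, 0.99, 1.0):
        print(f"arrival rate {lam:4.2f}: mean backlog = {mean_floating_problems(lam, 1.0)}")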
Abstract:
Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
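To make the null class concrete, a fractionally integrated FI(d) series can be simulated from the truncated moving-average representation of (1 - L)^(-d). The sketch below only illustrates the kind of process the abstract adds to the I(1)-versus-I(0)+breaks framework; it is not the proposed test, and the parameter values are arbitrary.

    # Type-II FI(d) simulation: x_t = sum_{j=0}^{t} psi_j * eps_{t-j}, with psi_0 = 1 and
    # psi_j = psi_{j-1} * (j - 1 + d) / j, the coefficients of (1 - L)^(-d).
    import numpy as np

    def simulate_fi(d, n, seed=0):
        rng = np.random.default_rng(seed)
        eps = rng.standard_normal(n)
        psi = np.empty(n)
        psi[0] = 1.0
        for j in range(1, n):
            psi[j] = psi[j - 1] * (j - 1 + d) / j
        return np.convolve(eps, psi)[:n]     # truncated MA(infinity) filter

    # d = 0 gives white noise (I(0)), d = 1 a random walk (I(1)); values of d in (0.5, 1)
    # give the non-stationary fractionally integrated processes considered under the null.
    series = simulate_fi(d=0.7, n=500)
    print(series[:5])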
Abstract:
BACKGROUND: The internet is commonly used by the general population, notably for health information-seeking. There has been little research into its use by patients treated for a psychiatric disorder. AIM: To evaluate the use of the internet by patients with psychiatric disorders in searching for general and medical information. METHODS: In 2007, 319 patients followed in a university hospital psychiatric out-patient clinic completed a 28-item self-administered questionnaire. RESULTS: Two hundred of the patients surveyed were internet users. Most of them (68.5%) used the internet to find health-related information. Only a small proportion of the patients knew and used criteria reflecting the quality of the content of the websites consulted. Knowledge of English and private internet access were the factors significantly associated with searching for health information on the internet. CONCLUSIONS: The internet is currently used by patients treated for psychiatric disorders, especially for seeking medical information.