150 results for Probability of choice

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

In Australia there is growing interest in a national curriculum to replace the variety of matriculation credentials managed by State Education departments, ostensibly to address increasing population mobility. Meanwhile, the International Baccalaureate (IB) is attracting increasing interest and enrolments in State and private schools in Australia, and has been considered as one possible model for a proposed Australian Certificate of Education. This paper reviews the construction of this curriculum in Australian public discourse as an alternative frame for producing citizens, and asks why this design appeals now, to whom, and how the phenomenon of its growing appeal might inform national curricular debates. The IB’s emergence is understood with reference to the larger context of neo-liberal marketization policies, neo-conservative claims on the curriculum, and middle-class strategy. The paper draws on public domain documents from the IB Organisation and newspaper reportage to demonstrate how the IB is constructed for public consumption in Australia.

Relevance:

100.00%

Publisher:

Abstract:

A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem. A wide range of algorithms have been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. The method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
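
The abstract does not reproduce the enumeration-based derivation, so the sketch below only illustrates the rank-transform matching that the analysis builds on, assuming Python, a 5x5 window, and a toy disparity search over a synthetic image pair; the function names `rank_transform` and `match_scanline` are hypothetical.

```python
# Minimal sketch of rank-transform stereo matching (illustrative, not the paper's method):
# each pixel is replaced by the number of window neighbours darker than it, and matching
# then minimises the sum of absolute rank differences along a scanline.
import numpy as np

def rank_transform(img, win=5):
    """Replace each pixel by the count of window neighbours with smaller intensity."""
    h, w = img.shape
    r = win // 2
    out = np.zeros_like(img, dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(patch < img[y, x])
    return out

def match_scanline(rank_l, rank_r, y, x, max_disp=16, win=5):
    """Return the disparity minimising the sum of absolute rank differences."""
    r = win // 2
    costs = []
    for d in range(max_disp + 1):
        if x - d - r < 0:
            break
        pl = rank_l[y - r:y + r + 1, x - r:x + r + 1]
        pr = rank_r[y - r:y + r + 1, x - d - r:x - d + r + 1]
        costs.append(np.sum(np.abs(pl - pr)))
    return int(np.argmin(costs))

# Toy usage: a random left image and a right image shifted horizontally by 3 pixels.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (40, 60)).astype(np.uint8)
right = np.roll(left, -3, axis=1)
rl, rr = rank_transform(left), rank_transform(right)
print(match_scanline(rl, rr, y=20, x=30))   # expected disparity of 3
```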

Relevance:

100.00%

Publisher:

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
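
As an illustration of what a probability-of-detection model computes, the sketch below uses the textbook Poisson and negative binomial detection formulas to find the number of sample units needed to reach a target detection probability. The densities, dispersion parameter and function names are assumptions for illustration; the exact parameterisations in the cited papers may differ.

```python
# For mean density m insects per sample unit and n sample units:
#   Poisson (random spatial pattern):  P(detect) = 1 - exp(-n * m)
#   negative binomial (clumped, k):    P(detect) = 1 - (1 + m / k) ** (-n * k)
import math

def p_detect_poisson(m, n):
    return 1.0 - math.exp(-n * m)

def p_detect_negbin(m, n, k):
    return 1.0 - (1.0 + m / k) ** (-n * k)

def samples_needed(p_model, target=0.95, max_n=10_000, **kwargs):
    """Smallest number of sample units giving at least the target detection probability."""
    for n in range(1, max_n + 1):
        if p_model(n=n, **kwargs) >= target:
            return n
    return None

# With these illustrative values, the clumped (negative binomial) case needs
# noticeably more sample units than the random (Poisson) case.
print(samples_needed(p_detect_poisson, m=0.05))
print(samples_needed(p_detect_negbin, m=0.05, k=0.1))
```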

Relevance:

100.00%

Publisher:

Abstract:

In Australia, the decision to home educate is becoming increasingly popular (cf. Harding & Farrell, 2003; Townsend, 2012). In spite of its increasing popularity, the reasons home education is chosen by Australian families are under-researched (cf. Jackson & Allan, 2010). This paper reports on a case study that set out to explore the links between families that unschool and the parenting philosophies they follow. In-depth, qualitative interviews were conducted with a group of home education families in one of Australia’s most populated cities. Data were analysed using Critical Discourse Analysis. The analysis revealed that there were links between the parents’ beliefs about home education and their adherence to Attachment Parenting.

Relevance:

100.00%

Publisher:

Abstract:

The present study investigated the behavioral and neuropsychological characteristics of decision-making behavior during a gambling task, as well as how these characteristics may relate to the Somatic Marker Hypothesis and the Frequency of Gain model. The applicability to intertemporal choice was also discussed. Patterns of card selection during a computerized interpretation of the Iowa Gambling Task were assessed for 10 men and 10 women. Steady State Topography was employed to assess cortical processing throughout this task. Results supported the hypothesis that patterns of card selection were in line with both theories. As hypothesized, these two patterns of card selection were also associated with distinct patterns of cortical activity, suggesting that intertemporal choice may involve the recruitment of the right dorsolateral prefrontal cortex for somatic labeling, the left fusiform gyrus for object representations, and the left dorsolateral prefrontal cortex for an analysis of the associated frequency of gain or loss. It is suggested that processes contributing to intertemporal choice may include inhibition of negatively valenced options, guiding decisions away from those options, as well as computations favoring frequently rewarded options.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer. Methods and Materials: Using the CMS XiO Version 4.6 (CMS Inc., St Louis, MO) radiotherapy planning system's NTCP algorithm and the Lyman-Kutcher-Burman (LKB) model, we calculated the DVH indices for the ipsilateral lung and heart and the resultant normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients. Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, heart V25, and mean and maximum doses to the lung and the heart. Dose volume histogram (DVH) results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4% with a mean value of 1.7%. The maximum reported NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in all patients (n=12) before and after set-up error simulations. Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
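
For readers unfamiliar with the LKB model, the sketch below shows the standard calculation from a differential DVH: a generalised equivalent uniform dose (gEUD) is formed and passed through a probit function. The DVH bins and the TD50, m and n values are illustrative placeholders, not the data or parameters of this study.

```python
# Minimal sketch of the Lyman-Kutcher-Burman (LKB) NTCP calculation from a
# differential DVH: gEUD = (sum v_i * D_i^(1/n))^n, NTCP = Phi((gEUD - TD50) / (m * TD50)).
import math

def lkb_ntcp(doses, volumes, td50, m, n):
    """Return the LKB normal tissue complication probability for a differential DVH."""
    total = sum(volumes)
    geud = sum((v / total) * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # standard normal CDF

# Toy differential DVH for an ipsilateral lung: dose bins (Gy) and relative volumes.
doses   = [2.0, 5.0, 10.0, 20.0, 30.0, 45.0]
volumes = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]
print(lkb_ntcp(doses, volumes, td50=24.5, m=0.18, n=0.87))  # illustrative parameter values
```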

Relevance:

100.00%

Publisher:

Abstract:

Anticipating the number and identity of bidders has a significant influence on many theoretical results of the auction itself and on bidders’ bidding behaviour. This is because when a bidder knows in advance which specific bidders are likely competitors, this knowledge gives the company a head start when setting the bid price. However, despite these competitive implications, most previous studies have focused almost entirely on forecasting the number of bidders, and only a few authors have dealt with the identity dimension, and then only qualitatively. Using a case study with immediate real-life applications, this paper develops a method for estimating every potential bidder’s probability of participating in a future auction as a function of the tender’s economic size, removing the bias caused by the distribution of contract size opportunities. This way, a bidder or auctioneer will be able to estimate the likelihood that a specific group of key, previously identified bidders will participate in a future tender.
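
The abstract does not spell out the estimator, so the following is only a generic sketch of the underlying idea: model one bidder's participation probability as a function of the (log) economic size of the tender, fitted from that bidder's record in past auctions. A plain logistic regression stands in for the paper's bias-corrected method, and all data are hypothetical.

```python
# Hedged sketch: estimate a single bidder's probability of entering a tender of a given size.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: tender sizes (millions) and whether bidder A took part (1) or not (0).
sizes        = np.array([0.5, 0.8, 1.2, 2.0, 3.5, 5.0, 8.0, 12.0, 20.0, 35.0])
participated = np.array([0,   0,   0,   1,   1,   0,   1,   1,    1,    1])

model = LogisticRegression().fit(np.log(sizes).reshape(-1, 1), participated)

# Estimated probability that bidder A enters a future 10-million tender.
print(model.predict_proba(np.log([[10.0]]))[0, 1])
```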

Relevance:

100.00%

Publisher:

Abstract:

Accurate determination of same-sex twin zygosity is important for medical, scientific and personal reasons. Determination may be based upon questionnaire data, blood group, enzyme isoforms and fetal membrane examination, but assignment of zygosity must ultimately be confirmed by genotypic data. Here, methods are reviewed for calculating average probabilities of correctly concluding a twin pair is monozygotic, given that they share the same genotype at every locus in commonly utilized multiplex short tandem repeat (STR) kits.
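
One common way to frame such a calculation is Bayes' rule: a monozygotic (MZ) pair matches at every locus with certainty, while the chance that a dizygotic (DZ) pair matches must be computed per locus from allele frequencies. The sketch below takes those per-locus sharing probabilities as given inputs and assumes a 1/3 prior for monozygosity among same-sex pairs; it is an illustration of the idea, not the reviewed methods themselves.

```python
# P(MZ | identical genotypes at all loci) via Bayes' rule, assuming unlinked loci.
from math import prod

def prob_mz_given_concordance(p_share_dz, prior_mz=1/3):
    """p_share_dz: per-locus probabilities that a DZ pair shares the same genotype by chance.

    prior_mz: assumed prior probability of monozygosity among same-sex twin pairs.
    """
    p_match_dz = prod(p_share_dz)          # independence across unlinked STR loci
    return prior_mz / (prior_mz + (1.0 - prior_mz) * p_match_dz)

# Ten hypothetical STR loci, each with a moderate chance that full siblings share a genotype.
p_share = [0.62, 0.55, 0.48, 0.70, 0.66, 0.51, 0.58, 0.44, 0.60, 0.53]
print(prob_mz_given_concordance(p_share))   # posterior probability close to 1
```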

Relevance:

100.00%

Publisher:

Abstract:

This study explored kindergarten students’ intuitive strategies and understandings in probability. The paper aims to provide an in-depth insight into the levels of probability understanding across four constructs, as proposed by Jones (1997), for kindergarten students. Qualitative evidence from two students revealed that, even before instruction, pupils have a good capacity for predicting most and least likely events, for distinguishing fair probability situations from unfair ones, for comparing the probability of an event in two sample spaces, and for recognizing conditional probability events. These results contribute to the growing evidence on kindergarten students’ intuitive probabilistic reasoning. The potential of this study for improving the learning of probability, as well as suggestions for further research, are discussed.

Relevance:

100.00%

Publisher:

Abstract:

Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Despite the fact that hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, and ignored, by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic, compare it to a standard distribution, estimate the probability of rejecting the null hypothesis, and then make claims about the importance of the finding. This is an oversimplification of the process of hypothesis testing. Hypothesis testing as applied in empirical research is examined here. The reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Then, five precautions surrounding the use and interpretation of hypothesis tests are developed; examples of each are provided to demonstrate how errors are made, and solutions are identified so that similar errors can be avoided. Remedies are provided for common errors, and conclusions are drawn on how to use the results of this paper to improve the conduct of empirical research in transportation.
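
The mechanics summarised above can be made concrete with a small, hypothetical two-sample example (Welch's t-test on simulated speed observations at two sites); the data, site names and the 5% significance level are assumptions for illustration only.

```python
# State H0 (equal mean speeds) and H1, compute the test statistic and p-value,
# then compare the p-value with a pre-chosen significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(loc=60.0, scale=5.0, size=40)   # hypothetical speed observations
site_b = rng.normal(loc=63.0, scale=5.0, size=40)

t_stat, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)  # Welch's t-test

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the mean speeds differ at the 5% level.")
else:
    print("Fail to reject H0: no evidence of a difference at the 5% level.")
```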

Relevance:

100.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales rather than from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
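
A minimal sketch of the kind of simulation described, under assumed parameter values: crash counts drawn from an ordinary Poisson process with heterogeneous site-specific rates produce a large share of zero counts when the observation window is short, with no dual-state mechanism anywhere in the data-generating process.

```python
# Low exposure alone produces "excess" zeros from a single-state Poisson process.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 2000
annual_rate = rng.gamma(shape=2.0, scale=0.5, size=n_sites)  # heterogeneous crash rates per year

for years in (0.25, 1.0, 5.0):                # time scale chosen for the analysis
    counts = rng.poisson(annual_rate * years)
    print(f"{years:>4} years: {np.mean(counts == 0):.0%} of sites record zero crashes")
```

Shortening the observation window (or, equivalently, shrinking the spatial unit) drives the proportion of zero-count sites up sharply even though every site has a strictly positive crash rate.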

Relevance:

100.00%

Publisher:

Abstract:

Prior studies linking performance management systems (PMS) and organisational justice have examined how PMS influence procedural fairness. Our investigation differs from these studies. First, it examines fairness as an antecedent (instead of as a consequence) of the choice of PMS. Second, instead of conceptualising organisational fairness as procedural fairness, it relies on the impression management interpretation of organisational fairness. Hence, the study investigates how the need of senior managers to cultivate an impression of being fair is related to the choice of PMS and to employee outcomes. Based on a sample of 276 employees, the results indicate that the need of senior management to cultivate an impression of being fair is associated with employee performance. They also indicate that a substantial component of these effects is indirect, through the choice of comprehensive performance measures (CPM) and employee job satisfaction. These findings highlight the importance of organisational concern for workplace fairness as an antecedent of the choice of CPM. From a theoretical perspective, the adoption of the impression management interpretation of organisational fairness contributes by providing new insights into the relationship between fairness and the choice of PMS from a perspective that is different from those used in prior management accounting research.