36 results for Discrete choice analysis
Abstract:
Since 1994, Canada, the United Kingdom and Australia have adopted new choice of law rules for cross-border torts that, in different ways, centre on the application of the law of the place where the tort occurred (the lex loci delicti). All three countries abandoned some species of the rule in Phillips v Eyre, which required some reference to the law of the forum (the lex fori) as well as the lex loci delicti. However, predictions were made that, where possible, courts in these countries would continue to show a strong inclination to apply the lex fori in cross-border tort cases - and would use a range of homing devices to do so. A comprehensive survey and analysis of the cases decided under the Australian, British and Canadian lex loci delicti regimes suggests that courts in these countries do betray a homing instinct, but one that has been tightly restrained by appeal courts. Where application of the lex fori was formally allowed by use of a 'flexible exception' in Canada and the United Kingdom, this has been contained by courts of first appeal. Indeed, only the continuing characterization of the assessment of damages as a procedural question in Canada and the United Kingdom seems to remain as a significant homing device for courts in these countries. © 2006 Oxford University Press.
Abstract:
Objective: Data collected from routine clinical care are typically sparse and unable to support the more complex pharmacokinetic (PK) models that may have been reported in previous rich-data studies. Informative priors may be a prerequisite for model development. The aim of this study was to estimate the population PK parameters of sirolimus using a fully Bayesian approach with informative priors. Methods: Informative priors, including the prior mean and the precision of the prior mean, were elicited from previously published studies using a meta-analytic technique. The precision of the between-subject variability was determined by simulation from a Wishart distribution using MATLAB (version 6.5). Concentration-time data for sirolimus, collected retrospectively from kidney transplant patients, were analysed using WinBUGS (version 1.3). The candidate models were either one- or two-compartment, with first-order absorption and first-order elimination. Model discrimination was based on computation of the posterior odds supporting the model. Results: A total of 315 concentration-time points were obtained from 25 patients. Most data were clustered at trough concentrations, with a range of 1.6 to 77 hours post-dose. Using informative priors, either a one- or two-compartment model could be used to describe the data. When a one-compartment model was applied, information was gained from the data about the apparent clearance (CL/F = 18.5 L/h) and the apparent volume of distribution (V/F = 1406 L), but no information was gained about the absorption rate constant (ka). When a two-compartment model was fitted to the data, the data were informative about CL/F, the apparent inter-compartmental clearance, and the apparent volume of distribution of the peripheral compartment (13.2 L/h, 20.8 L/h and 579 L, respectively). The posterior distributions of the volume of distribution of the central compartment and of ka were the same as the priors. The posterior odds for the two-compartment model were 8.1, indicating that the data supported the two-compartment model. Conclusion: The use of informative priors supported the choice of a more complex and informative model that would otherwise not have been supported by the sparse data.
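A minimal Python sketch of the one-compartment structural model and the Wishart simulation described above, using the reported point estimates (CL/F = 18.5 L/h, V/F = 1406 L); the dose, the ka value and the Wishart hyperparameters are illustrative assumptions, not values from the study, and this is not the study's WinBUGS code:

```python
import numpy as np
from scipy import stats

def conc_one_cmt_oral(t, dose, cl_f, v_f, ka):
    """Concentration for a one-compartment model with first-order
    absorption and first-order elimination after a single oral dose."""
    ke = cl_f / v_f  # elimination rate constant (1/h)
    return (dose * ka / (v_f * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

# Reported point estimates: CL/F = 18.5 L/h, V/F = 1406 L.
# Dose (mg) and ka are illustrative; the data were uninformative about ka.
t = np.linspace(1.6, 77, 100)  # hours post-dose, matching the sampled range
c = conc_one_cmt_oral(t, dose=2.0, cl_f=18.5, v_f=1406.0, ka=1.0)

# Precision of the between-subject variability simulated from a Wishart
# distribution, as in the Methods; df and scale here are assumptions.
omega_inv = stats.wishart(df=3, scale=np.eye(2) * 0.1).rvs(size=1000)
```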
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model, we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task; its behaviour should also interpolate when it is challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data and extrapolates to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
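A minimal sketch of the logistic-regression calibration step, assuming illustrative scenario features; the abstract does not specify the predictors, so the feature names and data here are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one aircraft pair: [convergence angle (deg),
# speed difference (kts), predicted miss distance (nm)] - all assumed features.
X = np.array([
    [30.0,  20.0, 1.5],
    [90.0,   0.0, 8.0],
    [15.0,  50.0, 0.5],
    [120.0, 10.0, 6.0],
    [45.0,  25.0, 2.0],
    [100.0,  5.0, 7.5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = subject classified the pair as a conflict

clf = LogisticRegression().fit(X, y)

# Probability that the Classify process flags a new scenario as a conflict;
# a value like this would drive the virtual operator in simulation.
p_conflict = clf.predict_proba(np.array([[60.0, 15.0, 3.0]]))[0, 1]
print(f"P(conflict) = {p_conflict:.2f}")
```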
Abstract:
The aim of this study was to examine the way Australian air traffic controllers manage their airspace. Fourteen controllers, with between 7 and 30 years' experience, were sampled from the Brisbane air traffic control centre. All had previously been endorsed for en route radar sectors. Five static pictures varying in workload level (low, medium and high) were presented to participants. Controllers were asked to work through the scenarios and describe aloud how they would resolve any potential conflicts between the aircraft. Following this, controllers were asked a set of probe questions based on the critical decision method, to extract further information about the way they manage their airspace. A content analysis was used to assess patterns in the way controllers scan, the strategies used in conflict detection and conflict resolution, and the effect of workload on strategy choice. Findings revealed that controllers use specific strategies (such as working in a left-to-right scan or prioritising levels) when managing their airspace. Further analyses are still planned; however, a model based on the processes controllers used to resolve conflicts has been developed and will be presented as a summary of the results.
Abstract:
Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture this fundamentally discrete and stochastic nature of cellular biology - most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, and of course the corresponding probabilities and numbers of each molecular species must all remain positive. While accurately serving this purpose, the SSA can be computationally inefficient due to very small time steps, so faster approximations such as the Poisson and Binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of the Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, more accurately capturing crucial properties such as the mean and variance.
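A minimal sketch of the Poisson τ-leap approximation that this work builds on, applied to a simple birth-death process; the reaction system, rate constants and leap size are illustrative assumptions, not examples from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap(x0, k1, k2, tau, t_end):
    """Advance the copy number X by fixed leaps of size tau, drawing
    Poisson firing counts for each reaction channel instead of
    simulating every SSA event (production at rate k1, degradation
    at rate k2 * X)."""
    x, t, path = x0, 0.0, [(0.0, x0)]
    while t < t_end:
        a_prod, a_deg = k1, k2 * x         # propensities of the two channels
        n_prod = rng.poisson(a_prod * tau)  # production events in [t, t + tau)
        n_deg = rng.poisson(a_deg * tau)    # degradation events in [t, t + tau)
        x = max(x + n_prod - n_deg, 0)      # keep the copy number non-negative
        t += tau
        path.append((t, x))
    return path

path = tau_leap(x0=10, k1=5.0, k2=0.1, tau=0.1, t_end=50.0)
```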
Abstract:
A program can be decomposed into a set of possible execution paths. These can be described in terms of primitives such as assignments, assumptions and coercions, and composition operators such as sequential composition and nondeterministic choice, as well as finitely or infinitely iterated sequential composition. Some of these paths cannot possibly be followed (they are dead or infeasible), and they may or may not terminate. Decomposing programs into paths provides a foundation for analyzing properties of programs. Our motivation is timing constraint analysis of real-time programs, but the same techniques can be applied in other areas such as program testing. In general, the set of execution paths for a program is infinite. For timing analysis we would like to decompose a program into a finite set of subpaths that covers all possible execution paths, in the sense that we only have to analyze the subpaths in order to determine suitable timing constraints that cover all execution paths.
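A minimal sketch of the path primitives and composition operators named above, expressed as a small algebraic data type; the class names and the example program are illustrative, not the paper's notation (iterated composition is omitted for brevity):

```python
from dataclasses import dataclass

class Path:
    """Base class for path fragments."""

@dataclass
class Assign(Path):   # assignment primitive, e.g. x := e
    var: str
    expr: str

@dataclass
class Assume(Path):   # assumption: the condition that selects a branch
    cond: str

@dataclass
class Seq(Path):      # sequential composition p ; q
    first: Path
    second: Path

@dataclass
class Choice(Path):   # nondeterministic choice between two paths
    left: Path
    right: Path

# The two execution paths of "if x > 0 then y := x else y := -x":
then_path = Seq(Assume("x > 0"), Assign("y", "x"))
else_path = Seq(Assume("x <= 0"), Assign("y", "-x"))
program = Choice(then_path, else_path)
```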