921 results for Uncertainty Avoidance
Abstract:
We examine the role of politico-economic influences on macroeconomic performance within the framework of an endogenous growth model with costly technology adoption and uncertainty. The model is aimed at understanding the diversity in growth and inequality experiences across countries. Agents adopt one of two risky technologies, one of which is available only through financial intermediaries, who are able to alleviate some of this risk. The entry cost of financial intermediation depends on the proportion of government revenue allocated towards cost-reducing financial development expenditure, and agents vote on this proportion. The results show that agents at the top and bottom ends of the distribution prefer alternative means of redistribution, thereby effectively blocking the allocation of resources towards cost-reducing financial development expenditure. Thus, political factors play a role in delaying financial and capital deepening and economic development. Furthermore, the model provides a political-economy perspective on the Kuznets curve: uncertainty interacts with the political-economy mechanism to produce transitional inequality patterns that, depending on initial conditions, can reproduce the Kuznets-curve experience. Finally, the political outcomes are inefficient relative to policies aimed at maximizing the collective welfare of agents in the economy.
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls, and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was to describe lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival. Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
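As a minimal sketch of the averaging step, posterior model probabilities used as BMA weights can be approximated from BIC values via P(M_i | data) ∝ exp(-BIC_i / 2), assuming equal model priors; the BIC values below are hypothetical, not those from the lymphoma case study.

```python
import numpy as np

def bma_weights(bic_values):
    """Approximate posterior model probabilities from BIC values,
    using P(M_i | data) proportional to exp(-BIC_i / 2) and assuming
    equal prior probability for each candidate model."""
    bic = np.asarray(bic_values, dtype=float)
    delta = bic - bic.min()          # rescale for numerical stability
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC values for the single Weibull, Weibull mixture,
# and cure models fitted to the same survival data.
weights = bma_weights([412.3, 409.8, 410.5])
print(weights)  # the mixture carries most, but not all, of the weight
```

Predictions from the three fitted models would then be combined using these weights, rather than discarding all but the single best-fitting model.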
Abstract:
Every motorised jurisdiction mandates legal driving behaviour, facilitating driver mobility and road user safety through explicit road rules that are enforced by regulatory authorities such as the police. In road safety, traffic law enforcement has been applied very successfully to modify road user behaviour, and technology is increasingly fundamental in detecting illegal road user behaviour. There is also sound evidence that highly visible and/or intensive enforcement programs achieve long-term deterrent effects. To illustrate, in Australia random breath testing has considerably reduced the incidence and prevalence of driving whilst under the influence of alcohol. There is, however, evidence that many road rules continue to be broken, including speeding and using a mobile phone whilst driving, and there are many instances where drivers are not detected or sufficiently sanctioned for these transgressions. Furthermore, there is a growing body of evidence suggesting that experiences of punishment avoidance – that is, successful attempts at avoiding punishment, such as drivers talking themselves out of a ticket or changing driving routes to evade detection – are associated with, and predictive of, the extent of illegal driving behaviour and future illegal driving intentions. There is therefore a need to better understand the phenomenon of punishment avoidance in order to enhance traffic law enforcement procedures and thereby the safety of all road users. This chapter begins with a review of the young driver road safety problem, followed by an examination of contemporary deterrence theory to enhance our understanding of both the experiences and implications of punishment avoidance in the road environment. It is noteworthy that in situations where detection and punishment remain relatively rare, such as on extensive road networks, the research evidence suggests that experiences of punishment avoidance may have a stronger influence upon risky driving behaviour than experiences of punishment. Finally, data from a case study examining the risky behaviour of young drivers will be presented, and the implications of ‘getting away with it’ will be discussed.
Abstract:
In this paper we focus specifically on explaining variation in core human values, and suggest that individual differences in values can be partially explained by personality traits and the perceived ability to manage emotions in the self and others (i.e. trait emotional intelligence). A sample of 209 university students was used to test hypotheses regarding several proposed direct and indirect relationships between personality traits, trait emotional intelligence and values. Consistent with the hypotheses, Harm Avoidance and Novelty Seeking were found to directly predict Hedonism, Conformity, and Stimulation. Harm Avoidance was also found to indirectly predict these values through the mediating effects of key subscales of trait emotional intelligence. Novelty Seeking was not found to be an indirect predictor of values. Results have implications for our understanding of the relationship between personality, trait emotional intelligence and values, and suggest a common basis in terms of approach and avoidance pathways.
Abstract:
Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies relating to epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed mostly unquantifiable. This paper presents a statistical modelling framework for ascertaining the epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly used in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, which is primarily influenced by surface characteristics and rainfall intensity.
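To illustrate the contrast between the two regression treatments, the minimal sketch below fits the same hypothetical wash-off data by OLS and by WLS with inverse-variance weights; the model form, data, and weights are illustrative and do not reproduce the paper's Gibbs-sampling framework.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical wash-off observations: response variance grows with
# rainfall intensity, which is what motivates WLSR over OLSR.
intensity = rng.uniform(5, 80, size=40)             # mm/h (illustrative)
washoff = 0.3 * intensity + rng.normal(0, 0.05 * intensity)

X = sm.add_constant(intensity)
ols = sm.OLS(washoff, X).fit()
# Weight each observation by the inverse of its assumed variance.
wls = sm.WLS(washoff, X, weights=1.0 / intensity**2).fit()

# WLS confidence intervals reflect the non-constant error variance,
# giving more realistic uncertainty bounds than OLS.
print(ols.conf_int(), wls.conf_int(), sep="\n")
```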
Abstract:
This paper presents a new algorithm based on a Modified Particle Swarm Optimization (MPSO) to estimate the harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The proposed algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main features of the proposed MPSO algorithm are the use of primary and secondary PSO loops and the application of a mutation function. Simulation results on an IEEE 34-bus radial test network and a realistic 70-bus radial test network are presented. The results demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is markedly faster and more accurate than algorithms such as Weighted Least Squares (WLS), Genetic Algorithm (GA), original PSO, and Honey Bees Mating Optimization (HBMO).
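For readers unfamiliar with the underlying optimizer, the sketch below shows a generic PSO loop with a mutation step minimizing a measurement-residual cost; the particle encoding, the secondary loop, and the cost function of the actual DHSE algorithm are not reproduced here, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(cost, dim, n=30, iters=200, mutate_p=0.1):
    """Minimal PSO with a simple mutation step (illustrative only; the
    paper's MPSO adds a secondary PSO loop and a problem-specific
    encoding of harmonic amplitudes and phases)."""
    x = rng.uniform(-1, 1, (n, dim))      # particle positions
    v = np.zeros((n, dim))                # particle velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        # Mutation: randomly perturb a few particles to escape local minima.
        mask = rng.random(n) < mutate_p
        x[mask] += rng.normal(0, 0.5, (mask.sum(), dim))
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()]
    return gbest, pcost.min()

# Hypothetical residual: squared error between "measured" PMU values and
# values computed from a candidate harmonic state estimate.
measured = np.array([0.8, -0.2, 0.5])
best, err = pso_minimize(lambda s: np.sum((s - measured) ** 2), dim=3)
```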
Abstract:
This paper presents a new algorithm based on a hybrid of Particle Swarm Optimization (PSO) and Simulated Annealing (SA), called PSO-SA, to estimate harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each harmonic current injection by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The proposed algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main feature of the proposed PSO-SA algorithm is that PSO, with a mutation function enabled, quickly reaches the neighbourhood of the global optimum, and the SA search then locates the optimum itself. Simulation results on an IEEE 34-bus radial test network and a realistic 70-bus radial test network demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is considerably more effective and efficient than conventional algorithms such as Weighted Least Squares (WLS), Genetic Algorithm (GA), original PSO, and the Honey Bees Mating Optimization (HBMO) algorithm.
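The second stage of such a hybrid can be sketched as a simulated-annealing refinement of the PSO candidate, as below; the Metropolis acceptance rule and geometric cooling schedule are standard, while the starting point, cost function, and parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sa_refine(cost, x0, temp=1.0, cooling=0.95, iters=500):
    """Simulated-annealing refinement of a PSO candidate solution
    (a sketch of the hybrid's second stage, not the paper's exact scheme)."""
    x = np.asarray(x0, dtype=float)
    c = cost(x)
    best, best_c = x.copy(), c
    for _ in range(iters):
        cand = x + rng.normal(0, temp, x.shape)   # local perturbation
        cc = cost(cand)
        # Accept improvements always; accept worse moves with probability
        # exp(-(cc - c) / temp) (Metropolis criterion).
        if cc < c or rng.random() < np.exp(-(cc - c) / temp):
            x, c = cand, cc
            if c < best_c:
                best, best_c = x.copy(), c
        temp *= cooling                            # geometric cooling schedule
    return best, best_c

# Typical hybrid usage: PSO supplies x0 near the global optimum,
# SA then fine-tunes it.
x0 = [0.75, -0.18, 0.52]
refined, err = sa_refine(
    lambda s: np.sum((np.asarray(s) - [0.8, -0.2, 0.5]) ** 2), x0)
```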
Abstract:
This paper presents the Mossman Mill District Practices Framework. It was developed in the Wet Tropics region within the Great Barrier Reef in north-eastern Australia to describe the environmental benefits of agricultural management practices for the sugar cane industry. The framework translates complex, unclear and overlapping environmental plans, policy and legal arrangements into a simple framework of management practices that landholders can use to improve their management actions. Practices range from those that are old or outdated through to aspirational practices that have the potential to achieve desired resource condition targets. The framework has been applied by stakeholders at multiple scales to better coordinate and integrate a range of policy arrangements to improve natural resource management. It has been used to structure monitoring and evaluation in order to underpin a more adaptive approach to planning at mill district and property scale. Potentially, the framework and approach can be applied across fields of planning where adaptive management is needed. It has the potential to overcome many of the criticisms of property-scale and regional Natural Resource Management.
Abstract:
Jackson (2005) developed a hybrid model of personality and learning, known as the Learning Styles Profiler (LSP), which was designed to span the biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can be best understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies with part-time workers undertaking tertiary education (N = 137 and 58), established models of approach and avoidance from each of the three research foci were compared with Jackson's hybrid model in their prediction of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation mediated the effect of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes, as well as of dysfunctional outcomes.
Abstract:
The Australian Taxation Office (ATO) attempted to challenge both the private equity fund reliance on double tax agreements and the assertion that profits were capital in nature in its dispute with private equity group TPG. Failure to resolve the dispute resulted in the ATO issuing two taxation determinations: TD 2010/20, which states that the general anti-avoidance provisions can apply to arrangements designed to alter the intended effect of Australia's international tax agreements network; and TD 2010/21, which states that the profits on the sale of shares in a company group acquired in a leveraged buyout are assessable income. The purpose of this article is to determine the effectiveness of the administrative rulings regime as a regulatory strategy. This article, using the TPG-Myer scenario and the subsequent tax determinations as a case study, collects qualitative data which is then analysed (and triangulated) using tonal and thematic analysis. Contemporaneous commentary from private equity stakeholders and tax professionals, together with media observations, is analysed and evaluated within a framework of responsive regulation, utilising the current ATO compliance model. Contrary to the stated purpose of the ATO rulings regime of alleviating complexities in Australian taxation law and providing certainty to taxpayers, and despite the de facto law status afforded these rulings, this study found that the majority of private equity stakeholders and their advisors perceived that greater uncertainty was created by the two determinations. Thus, this study found that, in the context of private equity fund investors, a responsive regulation measure in the form of taxation determinations was not effective.
Abstract:
This paper provides a preliminary analysis of an autonomous uncooperative collision avoidance strategy for unmanned aircraft using image-based visual control. Assuming target detection, the approach consists of three parts. First, a novel decision strategy is used to determine appropriate reference image features to track for safe avoidance. This is achieved by considering the current rules of the air (regulations), the properties of spiral motion and the expected visual tracking errors. Second, a spherical visual predictive control (VPC) scheme is used to guide the aircraft along a safe spiral-like trajectory about the object. Lastly, a stopping decision based on thresholding a cost function is used to determine when to stop the avoidance behaviour. The approach does not require estimation of range or time to collision, and instead relies on tuning two mutually exclusive decision thresholds to ensure satisfactory performance.
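As a rough illustration of the final step, the engagement and stopping logic can be sketched as two mutually exclusive thresholds on an image-feature cost; the cost form, threshold semantics, and function below are hypothetical rather than the paper's VPC formulation.

```python
def avoidance_state(cost: float, engaged: bool,
                    t_engage: float, t_stop: float) -> bool:
    """Two-threshold decision sketch (illustrative): engage avoidance when
    the image-feature cost rises above t_engage, and stop once it falls
    below t_stop, with t_stop < t_engage so the two thresholds are
    mutually exclusive and cannot both trigger at once."""
    if not engaged and cost > t_engage:
        return True       # begin spiral-like avoidance behaviour
    if engaged and cost < t_stop:
        return False      # stop avoiding; resume nominal flight
    return engaged        # otherwise keep the current mode
```

Because the rule depends only on a cost over tracked image features, it needs no estimate of range or time to collision, consistent with the approach described above; performance then hinges on tuning the two thresholds.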
Abstract:
Stormwater pollution is linked to stream ecosystem degradation. In predicting stormwater pollution, various types of modelling techniques are adopted. The accuracy of predictions provided by these models depends on the data quality, appropriate estimation of model parameters, and the validation undertaken. It is well understood that available water quality datasets in urban areas span only relatively short time scales, unlike water quantity data, which limits the applicability of the developed models in engineering and ecological assessment of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of uncertainty associated with pollutant wash-off when models are developed using a limited dataset. It was found that the application of MCCV is likely to result in a more realistic measure of model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective in model validation when dealing with a small sample size, which otherwise hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies.
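The two validation schemes differ only in how the data are split, as the minimal sketch below shows on a hypothetical small wash-off dataset; the model, features, and sample size are illustrative, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

rng = np.random.default_rng(3)

# Hypothetical small dataset (n = 20), mimicking the limited
# water-quality records discussed above.
X = rng.uniform(0, 1, (20, 2))
y = X @ np.array([1.5, -0.7]) + rng.normal(0, 0.1, 20)

model = LinearRegression()
# LOO: each observation is held out exactly once (20 fits here).
loo = cross_val_score(model, X, y, cv=LeaveOneOut(),
                      scoring="neg_mean_squared_error")
# MCCV: repeated random train/test splits of the same data.
mccv = cross_val_score(model, X, y,
                       cv=ShuffleSplit(n_splits=100, test_size=0.25,
                                       random_state=0),
                       scoring="neg_mean_squared_error")
print(loo.mean(), mccv.mean())
```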
Learned stochastic mobility prediction for planning with control uncertainty on unstructured terrain
Abstract:
Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This mobility prediction model is trained using sample executions of motion primitives on representative terrain, and predicts the future outcome of control actions on similar terrain. Using Gaussian process regression allows us to exploit its inherent measure of prediction uncertainty in planning. We integrate mobility prediction into a Markov decision process framework and use dynamic programming to construct a control policy for navigation to a goal region in a terrain map built using an on-board depth sensor. We consider both rigid terrain, consisting of uneven ground, small rocks, and non-traversable rocks, and also deformable terrain. We introduce two methods for training the mobility prediction model from either proprioceptive or exteroceptive observations, and report results from nearly 300 experimental trials using a planetary rover platform in a Mars-analogue environment. Our results validate the approach and demonstrate the value of planning under uncertainty for safe and reliable navigation.
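A minimal sketch of the learning step follows: a Gaussian process regressor maps hypothetical terrain and control features of a motion primitive to its outcome deviation, and its predictive standard deviation supplies the uncertainty measure that the planner can exploit. The feature choices, kernel, and data are illustrative, not the paper's trained model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Hypothetical training data: features of executed motion primitives
# (e.g. slope, roughness, commanded turn) versus the observed
# deviation of the outcome on similar terrain.
X_train = rng.uniform(0, 1, (50, 3))
y_train = (0.2 * X_train[:, 0] + 0.1 * X_train[:, 1] ** 2
           + rng.normal(0, 0.02, 50))

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True)
gp.fit(X_train, y_train)

# The predictive standard deviation is the uncertainty measure a
# planner can penalize when evaluating candidate control actions.
mean, std = gp.predict(rng.uniform(0, 1, (5, 3)), return_std=True)
```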
Abstract:
While existing multi-biometric Dempster-Shafer theory fusion approaches have demonstrated promising performance, they do not model the uncertainty appropriately, suggesting that further improvement can be achieved. This research seeks to develop a unified framework for multimodal biometric fusion that takes advantage of the uncertainty concept of Dempster-Shafer theory, improving the performance of multi-biometric authentication systems. Modeling uncertainty as a function of the uncertainty factors affecting the recognition performance of the biometric systems helps to address the uncertainty of the data and the confidence of the fusion outcome. A weighted combination of quality measures and classifier performance (Equal Error Rate) is proposed to encode the uncertainty concept and improve the fusion. We also found that quality measures contribute unequally to the recognition performance; thus, selecting only significant factors and fusing them with a Dempster-Shafer approach to generate an overall quality score plays an important role in the success of uncertainty modeling. The proposed approach achieved a competitive performance (approximately 1% EER) in comparison with other Dempster-Shafer based approaches and other conventional fusion approaches.
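The fusion step rests on Dempster's rule of combination, sketched below for two matchers over a two-hypothesis frame with an explicit ignorance mass; the mass assignments are hypothetical stand-ins for values that would, in the approach above, be derived from quality measures and classifier EER.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    frame {"genuine", "impostor"} plus "theta", the whole frame,
    which carries the ignorance/uncertainty mass (illustrative of the
    fusion step, not the paper's full framework)."""
    theta = "theta"
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                key = a            # agreement (or both fully uncertain)
            elif a == theta:
                key = b            # theta intersected with b is b
            elif b == theta:
                key = a
            else:
                conflict += ma * mb  # contradictory evidence
                continue
            combined[key] = combined.get(key, 0.0) + ma * mb
    norm = 1.0 - conflict            # renormalize by total non-conflict mass
    return {k: v / norm for k, v in combined.items()}

# Two matchers, each assigning mass to its decision and to uncertainty
# (hypothetical values).
m_face = {"genuine": 0.7, "impostor": 0.1, "theta": 0.2}
m_finger = {"genuine": 0.6, "impostor": 0.2, "theta": 0.2}
print(dempster_combine(m_face, m_finger))
```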