958 results for Standard models
Abstract:
Physical access control systems play a central role in the protection of critical infrastructures, where both the provision of timely access and the preservation of the security of sensitive areas are paramount. In this paper we discuss the shortcomings of existing approaches to the administration of physical access control in complex environments. At the heart of the problem is the current dependency on human administrators to reason about the implications of granting or revoking staff access to an area within these facilities. We demonstrate how utilising Building Information Models (BIMs) and the capabilities they provide, including 3D representation of a facility and path-finding, can reduce intentional or accidental errors made by security administrators.
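As a concrete illustration of the path-finding point, the sketch below re-runs reachability over a hypothetical room-adjacency graph after a credential change; the graph, door names and credential sets are illustrative assumptions, not artifacts from the paper.

```python
# A minimal sketch, assuming a hypothetical room-adjacency graph derived
# from a BIM. Revoking a door permission and re-running reachability (BFS)
# surfaces areas an administrator may have cut off unintentionally.
from collections import deque

# door id -> the pair of rooms it connects (illustrative data)
DOORS = {
    "d1": ("lobby", "corridor"),
    "d2": ("corridor", "server_room"),
    "d3": ("corridor", "office"),
}

def reachable(start, credentials):
    """Rooms reachable from `start` using only doors in `credentials`."""
    seen, queue = {start}, deque([start])
    while queue:
        room = queue.popleft()
        for door, (a, b) in DOORS.items():
            if door in credentials and room in (a, b):
                nxt = b if room == a else a
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

before = reachable("lobby", {"d1", "d2", "d3"})
after = reachable("lobby", {"d1", "d3"})   # revoke door d2
print(before - after)                      # e.g. {'server_room'} is now cut off
```

A BIM-backed administration tool could derive such a graph automatically and flag the difference set before a revocation is committed.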
Abstract:
Recent efforts in mission planning for underwater vehicles have utilised predictive models to aid navigation and optimal path planning and to drive opportunistic sampling. Although these models provide information at unprecedented resolutions and have proven to increase accuracy and effectiveness in multiple campaigns, most are deterministic in nature. Thus, predictions cannot be incorporated into probabilistic planning frameworks, nor do they provide any metric on the variance or confidence of the output variables. In this paper, we provide an initial investigation into determining the confidence of ocean model predictions based on the results of multiple field deployments of two autonomous underwater vehicles. For multiple missions conducted over a two-month period in 2011, we compare actual vehicle executions to simulations of the same missions through the Regional Ocean Modeling System in an ocean region off the coast of southern California. This comparison provides a qualitative analysis of the current velocity predictions for areas within the selected deployment region. Ultimately, we present a spatial heat-map of the correlation between the ocean model predictions and the actual mission executions. Knowledge of where the model provides unreliable predictions can be incorporated into planners to increase the utility and application of the deterministic estimations.
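The heat-map construction can be pictured as a per-cell correlation computed across missions. The following is a minimal sketch with synthetic stand-in arrays; the paper's actual ROMS and AUV data processing is not reproduced here.

```python
# Illustrative only: per-cell Pearson correlation between model-predicted and
# vehicle-observed current velocities, computed across missions. Array shapes
# and the synthetic data are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n_missions, ny, nx = 20, 8, 10
predicted = rng.normal(size=(n_missions, ny, nx))
observed = 0.7 * predicted + 0.3 * rng.normal(size=(n_missions, ny, nx))

def cellwise_correlation(pred, obs):
    """Pearson correlation across the mission axis, one value per grid cell."""
    p = pred - pred.mean(axis=0)
    o = obs - obs.mean(axis=0)
    return (p * o).mean(axis=0) / (p.std(axis=0) * o.std(axis=0))

heatmap = cellwise_correlation(predicted, observed)   # shape (ny, nx)
print(np.round(heatmap, 2))
# cells with low correlation mark regions where a planner should distrust
# the deterministic prediction
```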
Abstract:
Here we present a sequential Monte Carlo approach to Bayesian sequential design for the incorporation of model uncertainty. The methodology is demonstrated through the development and implementation of two model discrimination utilities, mutual information and total separation, but it can also be applied more generally if one has different experimental aims. A sequential Monte Carlo algorithm is run for each rival model (in parallel) and provides a convenient estimate of the marginal likelihood of each model given the data, which can be used for model comparison and in the evaluation of utility functions. A major benefit of this approach is that it requires very little problem-specific tuning and is also computationally efficient when compared to full Markov chain Monte Carlo approaches. This research is motivated by applications in drug development and chemical engineering.
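A bare-bones version of the marginal likelihood estimate can be written down in a few lines. The sketch below uses data annealing (one observation at a time) on two toy rival models of our own choosing; it omits the resample-move steps a full SMC algorithm would include.

```python
# Minimal data-annealing SMC sketch for estimating log marginal likelihoods
# of two rival models; the models, priors and data are toy assumptions.
import numpy as np
from scipy.special import logsumexp
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(1.0, 1.0, size=50)               # synthetic data

def smc_log_evidence(log_lik, prior_draws, n=5000):
    """Add one observation at a time; accumulate log p(y_t | y_1:t-1).
    (A full implementation would also resample and move particles.)"""
    theta = prior_draws(n)
    logw = np.zeros(n)                           # unnormalised log-weights
    log_z = 0.0
    for yt in y:
        inc = log_lik(yt, theta)
        log_z += logsumexp(logw + inc) - logsumexp(logw)
        logw += inc
    return log_z

# rival model 1: y ~ N(theta, 1) with theta ~ N(0, 2^2)
m1 = smc_log_evidence(lambda yt, th: stats.norm.logpdf(yt, loc=th, scale=1.0),
                      lambda n: rng.normal(0.0, 2.0, size=n))
# rival model 2: y ~ N(0, theta^2) with theta ~ U(0.5, 3)
m2 = smc_log_evidence(lambda yt, th: stats.norm.logpdf(yt, loc=0.0, scale=th),
                      lambda n: rng.uniform(0.5, 3.0, size=n))
print("estimated log Bayes factor (M1 vs M2):", m1 - m2)
```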
Abstract:
Australian higher education institutions (HEIs) have entered a new phase of regulation and accreditation which includes performance-based funding relating to the participation and retention of students from social and cultural groups previously underrepresented in higher education. However, in addressing these priorities, it is critical that HEIs do not further disadvantage students from certain groups by singling them out for attention because of their social or cultural backgrounds, circumstances which are largely beyond the control of students. In response, many HEIs are focusing effort on university-wide approaches to enhancing the student experience, because such approaches enhance the engagement, success and retention of all students and, in doing so, particularly benefit those students who come from underrepresented groups. Measuring and benchmarking the student experiences and engagement that arise from these efforts is well supported by extensive collections of student experience survey data. However, no comparable instrument exists that measures the capability of institutions to influence and/or enhance student experiences, where capability is an indication of how well an organisational process does what it is designed to do (Rosemann & de Bruin, 2005). This paper proposes that the concept of a maturity model (Marshall, 2010; Paulk, 1999) may be useful as a way of assessing the capability of HEIs to provide and implement student engagement, success and retention activities. We describe the Student Engagement, Success and Retention Maturity Model (SESR-MM) (Clarke, Nelson & Stoodley, 2012; Nelson, Clarke & Stoodley, 2012) we are currently investigating. We discuss whether our research may address the current gap by facilitating the development of an SESR-MM instrument that aims (i) to enable institutions to assess the capability of their current student engagement and retention programs and strategies to influence and respond to student experiences within the institution; and (ii) to provide institutions with the opportunity to understand various practices across the sector with a view to further improving programs and practices relevant to their context. The first aim of our research is to extend the generational approach, which has been useful in considering the evolutionary nature of the first year experience (FYE) (Wilson, 2009). Three generations have been identified and explored: first generation approaches that focus on co-curricular strategies (e.g. orientation and peer programs); second generation approaches that focus on curriculum (e.g. pedagogy, curriculum design, and learning and teaching practice); and third generation approaches, also referred to as transition pedagogy, that focus on the production of an institution-wide, integrated, holistic and intentional blend of curricular and co-curricular activities (Kift, Nelson & Clarke, 2010). The second aim of this research is to move beyond assessments of students' experiences to focus on assessing institutional processes and their capability to influence student engagement. In essence, we propose to develop and use the maturity model concept to produce an instrument that will indicate the capability of HEIs to manage and improve student engagement, success and retention programs and strategies.
References
Australian Council for Educational Research. (n.d.). Australasian Survey of Student Engagement. Retrieved from http://www.acer.edu.au/research/ausse/background
Clarke, J., Nelson, K., & Stoodley, I. (2012, July). The Maturity Model concept as framework for assessing the capability of higher education institutions to address student engagement, success and retention: New horizon or false dawn? A Nuts & Bolts presentation at the 15th International Conference on the First Year in Higher Education, “New Horizons,” Brisbane, Australia.
Department of Education, Employment and Workplace Relations. (n.d.). The University Experience Survey. Advancing quality in higher education information sheet. Retrieved from http://www.deewr.gov.au/HigherEducation/Policy/Documents/University_Experience_Survey.pdf
Kift, S., Nelson, K., & Clarke, J. (2010). Transition pedagogy: A third generation approach to FYE. A case study of policy and practice for the higher education sector. The International Journal of the First Year in Higher Education, 1(1), 1-20.
Marshall, S. (2010). A quality framework for continuous improvement of e-Learning: The e-Learning Maturity Model. Journal of Distance Education, 24(1), 143-166.
Nelson, K., Clarke, J., & Stoodley, I. (2012). An exploration of the Maturity Model concept as a vehicle for higher education institutions to assess their capability to address student engagement: A work in progress. Submitted for publication.
Paulk, M. (1999). Using the Software CMM with good judgment. ASQ Software Quality Professional, 1(3), 19-29.
Wilson, K. (2009, June–July). The impact of institutional, programmatic and personal interventions on an effective and sustainable first-year student experience. Keynote address presented at the 12th Pacific Rim First Year in Higher Education Conference, “Preparing for Tomorrow Today: The First Year as Foundation,” Townsville, Australia. Retrieved from http://www.fyhe.com.au/past_papers/papers09/ppts/Keithia_Wilson_paper.pdf
Abstract:
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
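The rejection step on pre-computed simulations is simple to sketch. Everything below (the toy Poisson model, the mean as summary statistic, the tolerance) is an assumption for illustration, not the paper's epidemic or macroparasite setting.

```python
# ABC rejection on pre-computed simulations: draw parameters from the prior,
# simulate once per draw, then accept draws whose simulated summary lies
# within a tolerance of the observed summary. Toy model for illustration.
import numpy as np

rng = np.random.default_rng(2)
prior_draws = rng.uniform(0.1, 10.0, size=20000)
# pre-computed simulations (done once, reusable across candidate designs)
sims = np.array([rng.poisson(theta, size=30).mean() for theta in prior_draws])

def abc_posterior(observed_summary, tol=0.1):
    """Accepted parameter values approximate the posterior."""
    return prior_draws[np.abs(sims - observed_summary) < tol]

post = abc_posterior(observed_summary=4.0)
# a precision-based design utility: tighter ABC posterior = better design
print("accepted draws:", post.size, " utility ~ 1/var:", 1.0 / post.var())
```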
Abstract:
Recently, ‘business model’ and ‘business model innovation’ have gained substantial attention in management literature and practice. However, many firms lack the capability to develop a novel business model to capture the value from new technologies. Existing literature on business model innovation highlights the central role of ‘customer value’. Further, it suggests that firms need to experiment with different business models and engage in ‘trial-and-error’ learning when participating in business model innovation. Trial-and-error processes and prototyping with tangible artifacts are a fundamental characteristic of design. This conceptual paper explores the role of design-led innovation in facilitating firms to conceive and prototype novel and meaningful business models. It provides a brief review of the conceptual discussion on business model innovation and highlights the opportunities for linking it with the research stream of design-led innovation. We propose design-led business model innovation as a future research area and highlight the role that design-led prototyping and new types of artifacts and prototypes play within it. We present six propositions to outline future research avenues.
Abstract:
The identification of the primary drivers of stock returns has been of great interest to both financial practitioners and academics alike for many decades. Influenced by classical financial theories such as the CAPM (Sharpe, 1964; Lintner, 1965) and the APT (Ross, 1976), a linear relationship is conventionally assumed between company characteristics, as derived from their financial accounts, and forward returns. Whilst this assumption may be a fair approximation to the underlying structural relationship, it is often adopted for the purpose of convenience. It is actually quite rare that the assumptions of distributional normality and a linear relationship are explicitly assessed in advance, even though this information would help to inform the appropriate choice of modelling technique. Non-linear models have nevertheless been applied successfully to the task of stock selection in the past (Sorensen et al., 2000). However, their take-up by the investment community has been limited despite the fact that researchers in other fields have found them to be a useful way to express knowledge and aid decision-making...
Abstract:
One of the fundamental econometric models in finance is the predictive regression. The standard least squares method produces biased coefficient estimates when the regressor is persistent and its innovations are correlated with those of the dependent variable. This article proposes a general and convenient method based on the jackknife technique to tackle the estimation problem. The proposed method reduces the bias for both single- and multiple-regressor models and for both short- and long-horizon regressions. The effectiveness of the proposed method is demonstrated by simulations. An empirical application to equity premium prediction using the dividend yield and the short rate highlights the differences between the results of the standard approach and those of the bias-reduced estimator. Predictive variables that are significant under ordinary least squares become insignificant after adjusting for the finite-sample bias. These discrepancies suggest that bias reduction in predictive regressions is important in practical applications.
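To fix ideas, here is the generic delete-one jackknife bias-correction identity applied to an OLS slope in a simulated predictive regression with a persistent regressor and correlated innovations. The article's jackknife construction for this setting differs in detail; this sketch only illustrates the underlying idea, and the DGP parameters are our own assumptions.

```python
# Generic jackknife bias correction, beta_J = n*beta_hat - (n-1)*mean(leave-
# one-out estimates), on a simulated Stambaugh-type predictive regression.
import numpy as np

rng = np.random.default_rng(3)
T, rho, beta = 200, 0.95, 0.5
v = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + v[t]          # persistent regressor (e.g. yield)
u = -0.9 * v + 0.4 * rng.normal(size=T)   # innovations correlated with v
X, Y = x[:-1], beta * x[:-1] + u[1:]      # regress y_{t+1} on x_t

def ols_slope(x, y):
    xc = x - x.mean()
    return np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)

n = X.size
beta_hat = ols_slope(X, Y)
loo = np.array([ols_slope(np.delete(X, i), np.delete(Y, i)) for i in range(n)])
beta_jack = n * beta_hat - (n - 1) * loo.mean()
print(f"OLS: {beta_hat:.3f}   jackknife: {beta_jack:.3f}   true: {beta}")
```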
Abstract:
We consider quantile regression models and investigate the induced smoothing method for obtaining the covariance matrix of the regression parameter estimates. We show that the difference between the smoothed and unsmoothed estimating functions in quantile regression is negligible. Detailed and simple computational algorithms for calculating the asymptotic covariance are provided. Intensive simulation studies indicate that the proposed method performs very well. We also illustrate the algorithm by analyzing rainfall–runoff data from Murray Upland, Australia.
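In generic notation (our own, not necessarily the authors'), the induced smoothing idea replaces the non-differentiable indicator in the quantile estimating function with a normal c.d.f. at a data-driven bandwidth:

```latex
% Generic sketch of induced smoothing for quantile regression.
S(\beta) = \sum_{i=1}^{n} x_i \bigl\{ \tau - I(y_i \le x_i^{\top}\beta) \bigr\}
\quad\longrightarrow\quad
\tilde{S}(\beta) = \sum_{i=1}^{n} x_i \Bigl\{ \tau -
    \Phi\Bigl( \tfrac{x_i^{\top}\beta - y_i}{r_i} \Bigr) \Bigr\},
\qquad r_i = \sqrt{x_i^{\top} H x_i}.
```

Because the smoothed function is differentiable, the asymptotic covariance of the estimator follows from a standard sandwich formula \(A^{-1} V A^{-\top}\) with \(A = \partial\tilde{S}/\partial\beta\), which is what makes the covariance computation routine.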
Abstract:
The standard Exeter stem has a length of 150mm, with offsets of 37.5mm to 56mm. Shorter stems of lengths 95mm, 115mm and 125mm, with offsets of 35.5mm or less, are available for patients with smaller femurs. Concern has been raised regarding the behaviour of these smaller implants. This paper analysed data from the Australian Orthopaedic Association National Joint Replacement Registry, comparing the survivorship of stems with an offset of 35.5mm or less against standard stems with an offset of 37.5mm or greater. At seven years there was no significant difference in the cumulative percent revision rate between the short stems (3.4%, 95% CI 2.4-4.8%) and the standard-length stems (3.5%, 95% CI 3.3-3.8%), despite the short stems' use in a greater proportion of potentially more difficult developmental dysplasia of the hip cases.
Abstract:
Animal models typically require a known genetic pedigree to estimate quantitative genetic parameters. Here we test whether animal models can alternatively be based on estimates of relatedness derived entirely from molecular marker data. Our case study is the morphology of a wild bird population, for which we report estimates of the genetic variance-covariance matrices (G) of six morphological traits using three methods: the traditional animal model; a molecular marker-based approach to estimating heritability based on Ritland's pairwise regression method; and a new approach using a molecular genealogy arranged in a relatedness matrix (R) to replace the pedigree in an animal model. Using the traditional animal model, we found significant genetic variance for all six traits and positive genetic covariance among traits. The pairwise regression method did not return reliable estimates of quantitative genetic parameters in this population, with estimates of genetic variance and covariance typically being very small or negative. In contrast, we found mixed evidence for the use of the pedigree-free animal model. Similar to the pairwise regression method, the pedigree-free approach performed poorly when the full-rank R matrix based on the molecular genealogy was employed. However, performance improved substantially when we reduced the dimensionality of the R matrix in order to maximize the signal-to-noise ratio. Reduced-rank R matrices generated estimates of genetic variance that were much closer to those from the traditional model. Nevertheless, this method was less reliable at estimating covariances, which were often estimated to be negative. Taken together, these results suggest that pedigree-free animal models can recover quantitative genetic information, although the signal remains relatively weak. It remains to be determined whether this problem can be overcome by the use of a more powerful battery of molecular markers and improved methods for reconstructing genealogies.
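The dimensionality-reduction step can be illustrated with a truncated eigendecomposition of a symmetric relatedness matrix; the toy R, the retained rank k and the estimator of R itself are assumptions here, not the authors' exact procedure.

```python
# Reduce the rank of a (symmetric) marker-based relatedness matrix R by
# keeping only its k largest eigenvalues, discarding the noisier axes of
# genetic similarity. Toy matrix for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 200
noise = rng.normal(scale=0.05, size=(n, n))
R = np.eye(n) + noise + noise.T              # toy noisy relatedness matrix

def reduced_rank(R, k):
    vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    vals[:-k] = 0.0                          # zero out all but the k largest
    return (vecs * vals) @ vecs.T

R_k = reduced_rank(R, k=20)                  # would replace the pedigree-based
print(np.linalg.matrix_rank(R_k))            # matrix in the animal model
```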
Abstract:
Traffic safety studies demand more than current micro-simulation models can provide, as those models presume that all drivers of motor vehicles exhibit safe behaviours. Several car-following models are used in various micro-simulation packages. This research compares the mainstream car-following models' capabilities of emulating precise driver behaviour parameters such as headway and Time to Collision (TTC). The comparison first illustrates which model is more robust in reproducing these metrics. Secondly, the study conducted a series of sensitivity tests to further explore the behaviour of each model. Based on the outcomes of these two exploratory steps, a modified structure and parameter adjustments are proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and TTCs below a certain critical threshold. NGSIM vehicle trajectory data is used to evaluate the modified models' performance in assessing critical safety events within traffic flow. The simulation test outcomes indicate that the proposed modified models reproduce the frequency of critical TTC events better than the generic models, while the improvement in headways is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
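Time to Collision itself is a one-line computation from trajectory samples of the kind NGSIM provides; the numbers and the 4-second critical threshold below are illustrative assumptions.

```python
# TTC = spacing / closing speed, defined only while the follower is closing
# in on the leader. Sample values are made up for illustration.
import numpy as np

def time_to_collision(gap, v_follower, v_leader):
    closing = v_follower - v_leader
    return np.where(closing > 0, gap / np.maximum(closing, 1e-9), np.inf)

gap = np.array([20.0, 12.0, 6.0])    # metres to the leader
v_f = np.array([15.0, 15.0, 15.0])   # follower speed, m/s
v_l = np.array([15.5, 12.0, 13.0])   # leader speed, m/s

ttc = time_to_collision(gap, v_f, v_l)
print(ttc)                           # [inf, 4.0, 3.0] seconds
print(ttc < 4.0)                     # flag events below an assumed
                                     # 4-second critical threshold
```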
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion impacts on the consumption smoothing ability of the agent by creating two states of nature, in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the models' ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
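The ‘evade or not’ comparison at the heart of the thesis reduces to checking whether the best achievable expected utility from under-reporting beats the certain utility of full compliance. The sketch below uses a CRRA utility and a penalty proportional to evaded tax; all functional forms and parameter values are illustrative assumptions, not the thesis calibration.

```python
# Evade-or-not choice: compare the certain utility of compliance with the
# maximal expected utility over declared-income levels. Illustrative only.
import numpy as np

def crra(c, sigma=2.0):
    return c ** (1.0 - sigma) / (1.0 - sigma)

def choice(income=1.0, tax=0.3, p_detect=0.05, penalty=2.5):
    u_honest = crra(income * (1.0 - tax))        # certain consumption
    declared = np.linspace(0.0, income, 101)     # candidate reports
    evaded_tax = tax * (income - declared)
    c_caught = income - tax * declared - penalty * evaded_tax
    c_free = income - tax * declared
    eu = p_detect * crra(c_caught) + (1.0 - p_detect) * crra(c_free)
    if eu.max() > u_honest:
        return "evade", declared[eu.argmax()]
    return "comply", income

print(choice(p_detect=0.05))   # low detection risk: evasion pays
print(choice(p_detect=0.50))   # high detection risk: comply
```

The two-state structure (‘caught’ versus ‘not caught’) is exactly what can make the certain compliance payoff dominate once detection risk or risk aversion is high enough.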
Abstract:
Cell invasion involves a population of cells that migrate along a substrate and proliferate to a carrying capacity density. These two processes, combined, lead to invasion fronts that move into unoccupied tissues. Traditional modelling approaches based on reaction-diffusion equations cannot incorporate individual-level observations of cell velocity, as information propagates with infinite velocity according to these parabolic models. In contrast, velocity-jump processes allow us to explicitly incorporate individual-level observations of cell velocity, thus providing an alternative framework for modelling cell invasion. Here, we introduce proliferation into a standard velocity-jump process and show that the standard model does not support invasion fronts. Instead, we find that crowding effects must be explicitly incorporated into a proliferative velocity-jump process before invasion fronts can be observed. Our observations are supported by numerical and analytical solutions of a novel coupled system of partial differential equations, including travelling wave solutions, and by associated random walk simulations.
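For intuition, the following is a deliberately crude discrete caricature of a proliferative velocity-jump process with crowding: agents carry a direction, reverse it at a turning rate, and may place a daughter only into an empty neighbouring site. It is our own toy construction, not the paper's model or its PDE limit.

```python
# 1D random-sequential-update simulation: motile agents with velocity +/-1
# site per attempt, direction reversal ('velocity jump') with prob `turn`,
# and proliferation with prob `prolif` into an empty neighbour (crowding).
import numpy as np

rng = np.random.default_rng(5)
L, steps, turn, prolif = 400, 200, 0.1, 0.05
occ = np.zeros(L, dtype=bool)
vel = np.zeros(L, dtype=int)
occ[:20] = True                                  # initial block of cells
vel[:20] = rng.choice([-1, 1], size=20)

for _ in range(steps):
    for _ in range(occ.sum()):                   # one sweep of update attempts
        i = rng.choice(np.flatnonzero(occ))      # pick a random agent
        if rng.random() < turn:
            vel[i] = -vel[i]                     # velocity jump
        j = (i + vel[i]) % L                     # target site
        if occ[j]:
            continue                             # crowded: attempt blocked
        if rng.random() < prolif:
            occ[j], vel[j] = True, rng.choice([-1, 1])   # daughter cell
        else:
            occ[j], vel[j] = True, vel[i]        # move
            occ[i] = False

print("front position ~", np.flatnonzero(occ).max())
```

The crowding check on the target site is the ingredient the abstract argues is essential before a front can form and advance.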
Abstract:
In various industrial and scientific fields, conceptual models are derived from real-world problem spaces in order to understand and communicate the entities they contain and the coherencies between them. Abstracted models mirror the common understanding and information demand of engineers, who apply conceptual models in performing their daily tasks. However, most standardized models in Process Management, Product Lifecycle Management and Enterprise Resource Planning lack a scientific foundation for their notation. In collaboration scenarios with stakeholders from several disciplines, tailored conceptual models complicate communication processes, as a common understanding is not shared or implemented in the discipline-specific models. To support direct communication between experts from several disciplines, a visual language is developed which allows a common visualization of discipline-specific conceptual models. For visual discrimination and to overcome visual complexity issues, the conceptual models are arranged in a three-dimensional space. The visual language introduced here follows and extends established principles of Visual Language science.