824 results for Native Vegetation Condition, Benchmarking, Bayesian Decision Framework, Regression, Indicators
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, in which we develop simulation-based optimal design methods to search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
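The search described above – scoring candidate combinations of subject numbers and samples per subject, plus sampling times, by a simulated expected utility under a cost constraint – can be illustrated with a minimal sketch. Everything here is hypothetical (a one-parameter exponential elimination model, a log-normal prior on the rate k, and made-up costs), not the paper's pharmacokinetic model; the utility is the log of the total Fisher information averaged over prior draws.

```python
import itertools
import math
import random

random.seed(1)

# Hypothetical one-parameter elimination model: concentration C(t) = exp(-k*t),
# with a log-normal prior on the elimination rate k (values are made up).
def draw_k():
    return math.exp(random.gauss(math.log(0.5), 0.3))

# Fisher information about k from one subject sampled at times ts, assuming
# additive Gaussian error with sd sigma: I(k) = sum_t (t*exp(-k*t))^2 / sigma^2.
def fisher_info(k, ts, sigma=0.1):
    return sum((t * math.exp(-k * t)) ** 2 for t in ts) / sigma ** 2

# Simulation-based utility: average log total information over prior draws.
def expected_utility(n_subjects, ts, n_prior=200):
    return sum(math.log(n_subjects * fisher_info(draw_k(), ts))
               for _ in range(n_prior)) / n_prior

# Cost constraint: per-subject cost 10, per-sample cost 1, total budget 60.
budget, c_subj, c_samp = 60, 10, 1
candidate_times = [0.5, 1.0, 2.0, 4.0, 8.0]

best = None
for n_subj in range(1, 7):
    for n_samp in range(1, len(candidate_times) + 1):
        if n_subj * (c_subj + n_samp * c_samp) > budget:
            continue  # design exceeds the budget
        for ts in itertools.combinations(candidate_times, n_samp):
            u = expected_utility(n_subj, ts)
            if best is None or u > best[0]:
                best = (u, n_subj, ts)

print(f"best design: {best[1]} subjects sampled at t = {best[2]}")
```

A real application would replace the Fisher-information surrogate with a utility computed from the full nonlinear mixed effects posterior, and the exhaustive enumeration with a stochastic search over the mixed discrete/continuous design space.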
Abstract:
This paper elaborates on the Cybercars-2 Wireless Communication Framework for driverless city vehicles, which is used for Vehicle-to-Vehicle and Vehicle-to-Infrastructure communication. The developed framework improves the safety and efficiency of driverless city vehicles. Furthermore, this paper also elaborates on the vehicle control software architecture. On-road tests of both the communication framework and its application for real-time decision making show that the communication framework is reliable and useful for improving the safe operation of driverless city vehicles.
Abstract:
A commitment in 2010 by the Australian Federal Government to spend $466.7 million on the implementation of personally controlled electronic health records (PCEHR) heralded a shift to a more effective and safer patient-centric eHealth system. However, deployment of the PCEHR has met with much criticism, emphasised by poor adoption rates over the first 12 months of operation. An indifferent response by the public and healthcare providers, largely sceptical of its utility and safety, speaks to the complex sociotechnical drivers and obstacles inherent in the embedding of large (national) scale eHealth projects. With government efforts to inflate consumer and practitioner engagement numbers giving rise to further consumer disillusionment, broader utilitarian opportunities available with the PCEHR are at risk. This paper discusses the implications of establishing the PCEHR as the cornerstone of a holistic eHealth strategy for the aggregation of longitudinal patient information. A viewpoint is offered that the real value in patient data lies not just in the collection of data but in the integration of this information into clinical processes within the framework of a commoditised data-driven approach. Consideration is given to the eHealth-as-a-Service (eHaaS) construct as a disruptive next step for co-ordinated individualised healthcare in the Australian context.
Abstract:
Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded. It is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for, in order to quantify human disturbance-induced change. A common approach used in the Index of Biotic Integrity (IBI) is to examine scatter plots of variation in a given metric along a single stream size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of a given metric at a site with a given environmental characteristic (termed the 'maximum species richness line', or MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (Simple Linear Regression, Generalised Linear Modelling and Regression Tree modelling) that use, either singly or in combination, a set of landscape and local scale environmental variables as predictors of species richness. We compare the frequency of classification errors from each method against set biocriteria and contrast the ability of each method to accurately reflect human disturbance gradients at a large set of test sites.
The results of this study suggest that the MSRL, based upon variation in a single environmental descriptor, could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors explained natural variation in species richness more accurately than did simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classing a site as disturbed). We suggest that the problems with the MSRL arise from its inherent scoring procedure and from the fact that it is limited to predicting variation in the dependent variable along a single environmental gradient.
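The contrast between an MSRL-style upper-bound reference and a least-squares regression reference can be sketched on simulated data. The data, threshold, and scoring rule below are illustrative stand-ins (not the study's fish data or exact IBI scoring); the point is only that a reference line drawn through the topmost points flags more undisturbed sites as degraded than a regression through the middle of the data.

```python
import random

random.seed(0)

# Simulated minimally disturbed reference sites: species richness rising
# with a stream-size surrogate plus natural variation (values illustrative).
xs = [random.uniform(0.0, 4.0) for _ in range(100)]        # log catchment area
ys = [5.0 + 3.0 * x + random.gauss(0.0, 1.5) for x in xs]  # species richness

# Simple linear regression (least squares) for the expected richness.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

# MSRL-style reference: a line parallel to the trend drawn through the
# topmost point, i.e. the "maximum species richness" upper bound.
c = max(y - b * x for x, y in zip(xs, ys))

# Score an undisturbed site as degraded (a Type I error) when its richness
# falls below 2/3 of the reference expectation.
type1_msrl = sum(y < (2 / 3) * (c + b * x) for x, y in zip(xs, ys))
type1_slr = sum(y < (2 / 3) * (a + b * x) for x, y in zip(xs, ys))
print(f"Type I errors at undisturbed sites -- MSRL: {type1_msrl}, SLR: {type1_slr}")
```

Because the MSRL sits above the regression line everywhere, any fixed fraction of it is a stricter cut-off, so the upper-bound reference necessarily misclassifies at least as many undisturbed sites.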
Abstract:
Natural resource management planning in the Northern Gulf region of Queensland is concerned with ‘how [natural assets] and community aspirations can be protected and enhanced to provide the Northern Gulf community with the economic, social and environmental means to meet the continuing growth of the region in an ecological and economically sustainable way’ (McDonald & Dawson 2004). In the Etheridge Shire, located in the tropical savanna of the Northern Gulf region, two of the activities that influence the balance between economic growth and long-term sustainable development are: 1. the land-use decisions people in the Shire make with regard to their own enterprises; 2. their decisions to engage in civically-minded activities aimed at improving conditions in the region. Land-use decisions and engagement in community development activities were chosen for detailed analysis because they are activities for which policies can be devised to improve economic and sustainable development outcomes. Changing the formal and informal rules that guide and govern these two different kinds of decisions that people can make in the Etheridge Shire – the decision to improve one’s own situation and the decision to improve the situation for others in the community – may expand the set of available options for people in the Shire to achieve their goals and aspirations. Identifying appropriate and effective changes in rules requires, first, an understanding of the ‘action arena’, in this case comprising a diversity of ‘participants’ from both within and outside the Etheridge Shire, and second, knowledge of the ‘action situations’ (land-use decisions and engagement in community development activities) in which stakeholders are involved and/or have a stake. These discussions are presented in sections 4.1.1.1 and 4.1.1.2.
Abstract:
Purpose: To study quality in higher education in Cambodia and explore the potential factors leading to quality in Cambodian higher education. Design/methodology/approach: Five main factors deemed relevant to quality in Cambodian higher education were proposed: academic curriculum and extra-curricular activities, teachers' qualifications and methods, funding and tuition, school facilities, and interactive network. These five propositions were used to compare Shu-Te University, Taiwan with the top five universities in Cambodia. The data came from questionnaires and desk research, and a descriptive analytical approach was used to examine these five factors. Findings: Only 6 per cent of lecturers hold a PhD degree and about 85 per cent have never published any papers; some private universities charge as little as USD200 per academic year, there is almost no donation from international organizations, and annual government funding for the higher education sector nationwide in 2005 was only about USD3.67 million; even though there is a library at each university, books, study materials, etc. are out of date and inadequate; 90 per cent of the lecturers never hold technical discussions or meetings, and about 60 per cent of students felt that their teachers did not have time to consult with them. Originality/value: A useful insight was gained into the perceived importance of quality in higher education that can stimulate debate and discussion on the role of government in building standard quality in higher education. The findings from this research can also assist in the development of a framework for developing human resources.
Abstract:
Effective response by government and individuals to the risk of land degradation requires an understanding of regional climate variations and the impacts of climate and management on condition and productivity of land and vegetation resources. Analysis of past land degradation and climate variability provides some understanding of vulnerability to current and future climate changes and the information needs for more sustainable management. We describe experience in providing climate risk assessment information for managing for the risk of land degradation in north-eastern Australian arid and semi-arid regions used for extensive grazing. However, we note that information based on historical climate variability, which has been relied on in the past, will now also have to factor in the influence of human-induced climate change. Examples illustrate trends in climate for Australia over the past decade and the impacts on indicators of resource condition. The analysis highlights the benefits of insights into past trends and variability in rainfall and other climate variables based on extended historic databases. This understanding in turn supports more reliable regional climate projections and decision support information for governments and land managers to better manage the risk of land degradation now and in the future.
Abstract:
Age-related macular degeneration (AMD) affects the central vision and may subsequently lead to visual loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and timely treatment may improve visual acuity. AMD is mainly classified into dry and wet types; dry AMD is the more common in the aging population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists, which is laborious, time-consuming, and resource-intensive. Hence, in this study, we propose an automated AMD detection system using the discrete wavelet transform (DWT) and feature ranking strategies. The first four statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from the DWT coefficients. We used five feature ranking strategies (t-test, Kullback–Leibler divergence (KLD), Chernoff bound and Bhattacharyya distance, receiver operating characteristic curve-based, and Wilcoxon) to identify the optimal feature set. A set of supervised classifiers, namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), naive Bayes, and probabilistic neural network, was used to obtain the highest performance using the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70%, sensitivity of 91.11%, and specificity of 96.30% using KLD ranking and the SVM classifier. We also formulated an AMD Risk Index using the selected features to classify the normal and dry AMD classes with a single number. The proposed system can be used to assist clinicians and in mass AMD screening programs.
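The feature-extraction step – statistical moments, energy, and entropy computed from DWT coefficients – can be sketched with a one-level Haar transform on a toy intensity profile. This is an assumed minimal stand-in: the study works on 2-D fundus images and also uses Gini index features, feature ranking, and classifiers, all omitted here.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def features(coeffs):
    """First four statistical moments plus energy and entropy of coefficients."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    sd = math.sqrt(var) or 1.0                 # guard against zero variance
    skew = sum(((c - mean) / sd) ** 3 for c in coeffs) / n
    kurt = sum(((c - mean) / sd) ** 4 for c in coeffs) / n
    energy = sum(c * c for c in coeffs)
    p = [c * c / energy for c in coeffs if c]  # normalised coefficient energies
    entropy = -sum(q * math.log(q) for q in p if q > 0)
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}

# Toy intensity profile standing in for one row of a fundus image.
row = [10.0, 12.0, 9.0, 14.0, 30.0, 28.0, 31.0, 29.0]
approx, detail = haar_dwt(row)
print(features(detail))
```

The Haar transform is orthonormal, so the energy of the approximation and detail coefficients together equals the energy of the input – a quick sanity check on any implementation.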
Abstract:
This thesis provided a definition and conceptual framework for hospital disaster resilience; it used a mixed-methods approach, including an empirical study in tertiary hospitals of Shandong Province in China, to devise an assessment instrument for measuring hospital resilience. The instrument is the first of its type and will allow hospitals to measure their resilience levels. The concept of disaster resilience has gained prominence in the light of the increased impact of various disasters. The notion of resilience encompasses the qualities that enable an organisation or community to resist, respond to, and recover from the impact of disasters. Hospital resilience is essential as hospitals provide 'lifeline' services which minimize disaster impact. This thesis has provided a framework and instrument to evaluate the level of hospital resilience. Such an instrument could be used to better understand hospital resilience, and also as a decision-support tool for strategies and policies that promote it.
Abstract:
A major challenge in studying coupled groundwater and surface-water interactions arises from the considerable difference in the response time scales of groundwater and surface-water systems affected by external forcings. Although coupled models representing the interaction of groundwater and surface-water systems have been studied for over a century, most have focused on groundwater quantity or quality issues rather than response time. In this study, we present an analytical framework, based on the concept of mean action time (MAT), to estimate the time scale required for groundwater systems to respond to changes in surface-water conditions. MAT can be used to estimate the transient response time scale by analyzing the governing mathematical model. This framework does not require any form of transient solution (either numerical or analytical) to the governing equation, yet it provides a closed form mathematical relationship for the response time as a function of the aquifer geometry, boundary conditions, and flow parameters. Our analysis indicates that aquifer systems have three fundamental time scales: (i) a time scale that depends on the intrinsic properties of the aquifer; (ii) a time scale that depends on the intrinsic properties of the boundary condition, and; (iii) a time scale that depends on the properties of the entire system. We discuss two practical scenarios where MAT estimates provide useful insights and we test the MAT predictions using new laboratory-scale experimental data sets.
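As a minimal sketch of the MAT idea, consider 1-D flow governed by ∂h/∂t = D ∂²h/∂x² on [0, L], with a step change in head at the surface-water boundary x = 0 and a no-flow boundary at x = L. Under the standard MAT result, the response time T(x) solves the steady boundary value problem D T''(x) = −1 with T(0) = 0 and T'(L) = 0, giving the closed form T(x) = x(2L − x)/(2D) – a response time obtained from the governing equation without any transient solution. The geometry and parameter values below are illustrative, not taken from the paper.

```python
# Mean action time for 1-D flow dh/dt = D * d2h/dx2 on [0, L]: step change in
# head at x = 0, no-flow boundary at x = L.  T solves D*T'' = -1 with
# T(0) = 0 and T'(L) = 0, so T(x) = x*(2L - x)/(2D).
def mean_action_time(x, L, D):
    """Closed-form response time scale at position x (no transient solve)."""
    return x * (2.0 * L - x) / (2.0 * D)

L = 100.0   # aquifer length (m) -- illustrative
D = 100.0   # hydraulic diffusivity (m^2/day) -- illustrative

# The response time grows with distance from the surface-water boundary and
# peaks at the far (no-flow) boundary, where T(L) = L^2 / (2D).
for x in (10.0, 50.0, 100.0):
    print(f"T({x:5.1f} m) = {mean_action_time(x, L, D):5.1f} days")
```

The closed form makes the three time scales visible: the L²/D factor reflects the intrinsic aquifer properties, while the shape of T(x) reflects the boundary configuration and the position within the whole system.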
Learned stochastic mobility prediction for planning with control uncertainty on unstructured terrain
Abstract:
Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This mobility prediction model is trained using sample executions of motion primitives on representative terrain, and predicts the future outcome of control actions on similar terrain. Using Gaussian process regression allows us to exploit its inherent measure of prediction uncertainty in planning. We integrate mobility prediction into a Markov decision process framework and use dynamic programming to construct a control policy for navigation to a goal region in a terrain map built using an on-board depth sensor. We consider both rigid terrain, consisting of uneven ground, small rocks, and non-traversable rocks, and also deformable terrain. We introduce two methods for training the mobility prediction model from either proprioceptive or exteroceptive observations, and report results from nearly 300 experimental trials using a planetary rover platform in a Mars-analogue environment. Our results validate the approach and demonstrate the value of planning under uncertainty for safe and reliable navigation.
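The planning step – dynamic programming over a terrain map with stochastic action outcomes – can be sketched on a toy 1-D strip. Here hand-set per-cell advance probabilities stand in for the Gaussian-process mobility predictions (everything below is an assumed illustration, not the paper's rover model): value iteration yields the expected cost-to-goal, which is highest where the terrain makes slip likely.

```python
# Toy 1-D terrain strip: cells 0..4, with the goal at cell 4.  Per-cell
# probabilities that a "drive forward" command actually advances the rover
# (rather than slipping in place) stand in for learned mobility predictions.
p_advance = {0: 0.95, 1: 0.95, 2: 0.60, 3: 0.90}   # cell 2 is rough terrain
step_cost = 1.0

# Value iteration: V[s] = expected number of commands to reach the goal,
# i.e. the fixed point of V[s] = cost + p*V[s+1] + (1-p)*V[s].
V = {s: 0.0 for s in range(5)}                      # V[4] stays 0 (goal)
for _ in range(200):
    for s in range(4):
        p = p_advance[s]
        V[s] = step_cost + p * V[s + 1] + (1 - p) * V[s]

for s in range(4):
    print(f"cell {s}: expected cost to goal = {V[s]:.2f}")
```

With several actions per cell the same update takes a minimum over actions, and the GP predictive variance would feed into the outcome probabilities, so cautious actions win on terrain where the learned model is uncertain.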
Abstract:
The noble idea of studying seminal works to ‘see what we can learn’ has turned in the 1990s into ‘let’s see what we can take’ and in the last decade a more toxic derivative ‘what else can’t we take’. That is my observation as a student of architecture in the 1990s, and as a practitioner in the 2000s. In 2010, the sense that something is ending is clear. The next generation is rising and their gaze has shifted. The idea of classification (as a means of separation) was previously rejected by a generation of Postmodernists; the usefulness of difference declined. It’s there in the presence of plurality in the resulting architecture, a decision to mine history and seize in a willful manner. This is a process of looking back but never forward. It has been a mono-culture of absorption. The mono-culture rejected the pursuit of the realistic. It is a blanket suffocating all practice of architecture in this country from the mercantile to the intellectual. Independent reviews of Australia’s recent contributions to the Venice Architecture Biennales confirm the malaise. The next generation is beginning to reconsider classification as a means of unification. By acknowledging the characteristics of competing forces it is possible to bring them into a state of tension. Seeking a beautiful contrast is a means to a new end. In the political setting, this is described by Noel Pearson as the radical centre[1]. The concept transcends the political and in its most essential form is a cultural phenomenon. It resists the compromised position and suggests that we can look back while looking forward. The radical centre is the only demonstrated opportunity where it is possible to pursue a realistic architecture. A realistic architecture in Australia may be partially resolved by addressing our anxiety of permanence. Farrelly’s built desires[2] and Markham’s ritual demonstrations[3] are two ways into understanding the broader spectrum of permanence. 
But I think they are downstream of our core problem. Our problem, as architects, is that we are yet to come to terms with this place. Some call it landscape others call it country. Australian cities were laid out on what was mistaken for a blank canvas. On some occasions there was the consideration of the landscape when it presented insurmountable physical obstacles. The architecture since has continued to work on its piece of a constantly blank canvas. Even more ironic is the commercial awards programs that represent a claim within this framework but at best can only establish a dialogue within itself. This is a closed system unable to look forward. It is said that Melbourne is the most European city in the southern hemisphere but what is really being described there is the limitation of a senseless grid. After all, if Dutch landscape informs Dutch architecture why can’t the Australian landscape inform Australian architecture? To do that, we would have to acknowledge our moribund grasp of the meaning of the Australian landscape. Or more precisely what Indigenes call Country[4]. This is a complex notion and there are different ways into it. Country is experienced and understood through the senses and seared into memory. If one begins design at that starting point it is not unreasonable to think we can arrive at an end point that is a counter trajectory to where we have taken ourselves. A recent studio with Masters students confirmed this. Start by finding Country and it would be impossible to end up with a building looking like an Aboriginal man’s face. To date architecture in Australia has overwhelmingly ignored Country on the back of terra nullius. It can’t seem to get past the picturesque. Why is it so hard? The art world came to terms with this challenge, so too did the legal establishment, even the political scene headed into new waters. It would be easy to blame the budgets of commerce or the constraints of program or even the pressure of success. 
But that is too easy. Those factors are in fact the kind of limitations that opportunities grow out of. The past decade of economic plenty has, for the most part, smothered the idea that our capitals might enable civic settings or an architecture that is able to look past lot-line boundaries in a dignified manner. That these settings are denied the opportunity to be prompted by the Country they occupy is criminal. The public realm is arrested in its development because we refuse to accept Country as a spatial condition. What we seem able to embrace is literal and symbolic gestures, usually taking the form of trumped-up art installations. All talk – no action. To continue to leave the public realm to the stewardship of mercantile interests is like embracing derivative lending after the global financial crisis. Herein rests an argument for why we need a resourced Government Architect’s office operating not as an isolated lobbyist for business but as a steward of the public realm for both the past and the future. New South Wales is the leading model, with Queensland close behind. That is not to say both do not have flaws, but current calls for their cessation on the grounds of design parity poorly mask commercial self-interest. In Queensland, lobbyists are now heavily regulated with an aim to ensure integrity and accountability. In essence, what I am speaking of will not be found in Reconciliation Action Plans that double as business plans, or the mining of Aboriginal culture for the next marketing gimmick, or even discussions around how to make buildings more ‘Aboriginal’. It will come from the next generation who reject the noxious mono-culture of absorption and embrace a counter trajectory to pursue an architecture of realism.
Abstract:
‘Approximate Bayesian Computation’ (ABC) represents a powerful methodology for the analysis of complex stochastic systems for which the likelihood of the observed data under an arbitrary set of input parameters may be entirely intractable – the latter condition rendering useless the standard machinery of tractable likelihood-based, Bayesian statistical inference [e.g. conventional Markov chain Monte Carlo (MCMC) simulation]. In this paper, we demonstrate the potential of ABC for astronomical model analysis by application to a case study in the morphological transformation of high-redshift galaxies. To this end, we develop, first, a stochastic model for the competing processes of merging and secular evolution in the early Universe, and secondly, through an ABC-based comparison against the observed demographics of massive (M_gal > 10^11 M⊙) galaxies (at 1.5 < z < 3) in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS)/Extended Groth Strip (EGS) data set, we derive posterior probability densities for the key parameters of this model. The ‘Sequential Monte Carlo’ implementation of ABC exhibited herein, featuring both a self-generating target sequence and a self-refining MCMC kernel, is amongst the most efficient of contemporary approaches to this important statistical algorithm. We also highlight through our chosen case study the value of careful summary statistic selection, and demonstrate two modern strategies for assessment and optimization in this regard. Ultimately, our ABC analysis of the high-redshift morphological mix returns tight constraints on the evolving merger rate in the early Universe and favours major merging (with disc survival or rapid reformation) over secular evolution as the mechanism most responsible for building up the first generation of bulges in early-type discs.
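Stripped of the Sequential Monte Carlo machinery, the core ABC idea the paper builds on can be shown in a few lines: draw a parameter from the prior, simulate data under it, and keep the draw when a summary statistic of the simulation is close to the observed one. The Gaussian toy model, tolerance, and summary below are all assumed for illustration.

```python
import random
import statistics

random.seed(42)

# "Observed" data generated from an unknown location parameter (true value 3.0).
observed = [random.gauss(3.0, 1.0) for _ in range(50)]
s_obs = statistics.mean(observed)            # summary statistic

def simulate(theta):
    """Forward-simulate a same-size data set under parameter theta."""
    return statistics.mean(random.gauss(theta, 1.0) for _ in range(50))

# ABC rejection: draw from the prior, simulate, accept when the simulated
# summary lands within a tolerance of the observed one.
accepted = []
while len(accepted) < 200:
    theta = random.uniform(-10.0, 10.0)      # vague uniform prior
    if abs(simulate(theta) - s_obs) < 0.3:   # tolerance (assumed)
        accepted.append(theta)

print(f"ABC posterior mean ~ {statistics.mean(accepted):.2f}")
```

Rejection ABC wastes most simulations on prior draws far from the data; the paper's Sequential Monte Carlo variant concentrates simulation effort by shrinking the tolerance over a sequence of reweighted particle populations.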
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inferential problems making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such inferential problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been predominantly applied to parameter estimation problems and less to model choice problems due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28) where the posterior mean of the model parameters estimated through regression formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. This algorithm was applied to a validating example to demonstrate the robustness of the algorithm across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring preference of particular transmission models for the pathogens.
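The model choice side of ABC can be sketched in its simplest rejection form: sample a model index from its prior, simulate under that model, and accept when the summary is close; accepted indicator frequencies then estimate posterior model probabilities. The two Gaussian toy models and the tolerance below are illustrative – the paper's algorithm adds regression-based summaries, a stepwise multinomial logistic regression on the model indicator, and a reversible jump MCMC step, none of which appear here.

```python
import random
import statistics

random.seed(7)

# Two hypothetical candidate models: M0 = Normal(0, 1) and M1 = Normal(2, 1);
# the observed data actually come from M1.
observed = [random.gauss(2.0, 1.0) for _ in range(40)]
s_obs = statistics.mean(observed)

def simulate(model):
    """Forward-simulate the summary statistic under the chosen model."""
    mu = 0.0 if model == 0 else 2.0
    return statistics.mean(random.gauss(mu, 1.0) for _ in range(40))

# ABC model choice by rejection: sample the model indicator from its prior
# (1/2 each), simulate, and accept when the summary is close; accepted
# indicator frequencies estimate the posterior model probabilities.
accepted = []
while len(accepted) < 300:
    m = random.randrange(2)
    if abs(simulate(m) - s_obs) < 0.25:      # tolerance (assumed)
        accepted.append(m)

p1 = sum(accepted) / len(accepted)
print(f"estimated P(M1 | data) ~ {p1:.2f}")
```

In this toy setting M0 is essentially never accepted, so the estimated probability of M1 is close to one; with genuinely competing models the quality of the summary statistics becomes the deciding factor, which is what the regression-based summaries address.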
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
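The "estimate the likelihood unbiasedly" step can be illustrated for a toy random-intercept model, where a plain Monte Carlo average over simulated random effects gives an unbiased estimate of the observed-data (marginal) likelihood of one block. The model and parameter values are assumed for illustration; the paper's estimators (quasi-Monte Carlo, and Laplace approximation with importance sampling) are more efficient versions of the same idea.

```python
import math
import random

random.seed(3)

# Hypothetical random-intercept model: y_ij = b_i + e_ij with b_i ~ N(0, tau^2)
# and e_ij ~ N(0, sigma^2).  The observed-data likelihood of a block integrates
# out b_i; averaging the conditional likelihood over simulated b_i draws gives
# an unbiased Monte Carlo estimate of that integral.
tau, sigma = 1.0, 0.5

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def mc_likelihood(y_block, n_draws=2000):
    """Unbiased Monte Carlo estimate of the block's marginal likelihood."""
    total = 0.0
    for _ in range(n_draws):
        b = random.gauss(0.0, tau)            # simulate the random effect
        lik = 1.0
        for y in y_block:
            lik *= normal_pdf(y, b, sigma)    # conditional likelihood given b
        total += lik
    return total / n_draws

# Check against the exact marginal for one observation: y ~ N(0, tau^2 + sigma^2).
y_block = [0.3]
est = mc_likelihood(y_block)
exact = normal_pdf(0.3, 0.0, math.sqrt(tau ** 2 + sigma ** 2))
print(f"MC estimate {est:.4f} vs exact {exact:.4f}")
```

Because the estimate is unbiased, plugging it into an otherwise exact algorithm (the "exact-approximate" or pseudo-marginal construction) still targets the correct posterior, at the price of extra Monte Carlo variance.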