846 results for Input-Output Modelling
Abstract:
This article explores the use of probabilistic classification, namely finite mixture modelling, for the identification of complex disease phenotypes, given cross-sectional data. In particular, it focuses on posterior probabilities of subgroup membership, a standard output of finite mixture modelling, and how the quantification of uncertainty in these probabilities can lead to more detailed analyses. Using a Bayesian approach, we describe two practical uses of this uncertainty: (i) as a means of describing a person's membership of a single subgroup or of multiple latent subgroups and (ii) as a means of describing identified subgroups by patient-centred covariates not included in model estimation. These proposed uses are demonstrated on a case study in Parkinson's disease (PD), where latent subgroups are identified using multiple symptoms from the Unified Parkinson's Disease Rating Scale (UPDRS).
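As a minimal illustration of the quantity this abstract centres on, the sketch below fits a finite mixture with scikit-learn's EM-based GaussianMixture and reads off the posterior subgroup-membership probabilities; the synthetic data, the 0.8 ambiguity cutoff and all variable names are illustrative stand-ins, not the authors' Bayesian implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for cross-sectional symptom scores (e.g. UPDRS items).
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
posterior = gmm.predict_proba(X)  # row i: P(subgroup k | person i); rows sum to 1

# Persons with no dominant posterior probability are ambiguous cases,
# i.e. plausibly members of multiple latent subgroups (use (i) above).
ambiguous = posterior.max(axis=1) < 0.8
print(f"{ambiguous.mean():.0%} of persons lack a clear single-subgroup assignment")
```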
Abstract:
Traffic simulation models tend to have their own data input and output formats. In an effort to standardise the input for traffic simulations, we introduce in this paper a set of data marts that aim to serve as a common interface between the necessary data, stored in dedicated databases, and the software packages that require the input in a certain format. The data marts are developed based on real-world objects (e.g. roads, traffic lights, controllers) rather than abstract models and hence contain all the necessary information, which each importing software package can transform to its own needs. The paper contains a full description of the data marts for network coding, simulation results, and scenario management, which have been discussed with industry partners to ensure sustainability.
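As a hypothetical sketch of the "real-world objects" principle, the data mart below stores roads, traffic lights and controllers directly and leaves format conversion to the importing package; all class and field names are invented for illustration and are not the schema from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Road:
    road_id: str
    length_m: float
    lanes: int
    speed_limit_kmh: float

@dataclass
class TrafficLight:
    light_id: str
    road_id: str          # the road the signal controls
    controller_id: str    # the controller that drives its phases

@dataclass
class NetworkDataMart:
    roads: list[Road] = field(default_factory=list)
    lights: list[TrafficLight] = field(default_factory=list)

    def export_for(self, simulator: str) -> dict:
        # Each importing package transforms the common, object-based
        # representation into its own simulator-specific input format.
        return {"simulator": simulator,
                "roads": [vars(r) for r in self.roads],
                "lights": [vars(l) for l in self.lights]}
```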
Abstract:
Digital human modeling (DHM), as a convenient and cost-effective tool, is increasingly incorporated into product and workplace design. In product design, it is predominantly used for the development of driver-vehicle systems. Most digital human modeling software tools, such as JACK, RAMSIS and DELMIA HUMANBUILDER, provide functions to predict postures and positions for drivers with selected anthropometry according to SAE (Society of Automotive Engineers) Recommended Practices and other ergonomics guidelines. However, few studies have presented postural information for 2nd-row passengers, and digital human modeling of these passenger postures cannot be performed directly using the existing driver posture prediction functions. In this paper, the significant studies related to occupant posture and modeling were reviewed and a framework of determinants of driver vs. 2nd-row occupant posture modeling was extracted. The determinants, which serve as input factors for posture modeling, include target population anthropometry, vehicle package geometry and seat design variables, as well as task definitions. The differences between the determinants of driver and 2nd-row occupant posture models are significant, as driver posture modeling is primarily based on the position of the foot on the accelerator pedal (accelerator actuation point AAP, accelerator heel point AHP) and of the hands on the steering wheel (steering wheel centre point A-Point). This paper aims to investigate those differences between driver and passenger posture and to supplement the existing parametric model for occupant posture prediction. Guided by the framework, the associated input parameters of occupant digital human models for both the driver and the second-row occupant will be identified. Beyond the existing occupant posture models, a driver posture model could, for example, be modified to predict second-row occupant posture by adjusting the associated input parameters introduced in this paper. This study combines results from a literature review with the theoretical modeling stage of a second-row passenger posture prediction model project.
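A hedged sketch of how the framework's determinants might enter a parametric posture model: the same predictor serves driver and 2nd-row occupant prediction once the task-related inputs are changed. The fields, the linear form and the coefficient names are hypothetical placeholders, not equations from the reviewed studies.

```python
from dataclasses import dataclass

@dataclass
class PostureInputs:
    stature_mm: float          # target population anthropometry
    seat_height_mm: float      # vehicle package geometry
    cushion_angle_deg: float   # seat design variable
    hands_on_wheel: bool       # task definition: True for drivers only

def predict_hip_angle(p: PostureInputs, coef: dict[str, float]) -> float:
    # Generic linear predictor; the driver case adds a steering-task term.
    angle = (coef["intercept"]
             + coef["stature"] * p.stature_mm
             + coef["seat_height"] * p.seat_height_mm
             + coef["cushion"] * p.cushion_angle_deg)
    if p.hands_on_wheel:
        angle += coef["steering_task"]
    return angle
```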
Abstract:
A critical step in the dissemination of ovarian cancer is the formation of multicellular spheroids from cells shed from the primary tumour. The objectives of this study were to apply bioengineered three-dimensional (3D) microenvironments for culturing ovarian cancer spheroids in vitro and, simultaneously, to build on a mathematical model describing the growth of multicellular spheroids in these biomimetic matrices. Cancer cells derived from human epithelial ovarian carcinoma were embedded within biomimetic hydrogels of varying stiffness and grown for up to 4 weeks. Immunohistochemistry, imaging and growth analyses were used to quantify the dependence of cell proliferation and apoptosis on matrix stiffness, long-term culture and treatment with the anti-cancer drug paclitaxel. The mathematical model was formulated as a free boundary problem in which each spheroid was treated as an incompressible porous medium. The functional forms used to describe the rates of cell proliferation and apoptosis were motivated by the experimental work, and the predictions of the mathematical model were compared with the experimental output. This work aimed to establish whether it is possible to simulate solid tumour growth on the basis of data on spheroid size, cell proliferation and cell death within these spheroids. The mathematical model predictions were in agreement with the experimental data set and simulated how the growth of cancer spheroids was influenced by mechanical and biochemical stimuli, including matrix stiffness, culture duration and administration of a chemotherapeutic drug. Our computational model provides new perspectives on experimental results and has informed the design of new 3D studies of chemoresistance of multicellular cancer spheroids.
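For orientation, a generic Greenspan-type free boundary formulation of the kind described (spheroid as an incompressible porous medium with outer radius R(t)); the paper's specific functional forms for the proliferation rate k_p and apoptosis rate k_a were motivated by the experiments and are not reproduced here.

```latex
\begin{aligned}
  \nabla^2 c &= \Gamma(c)
    && \text{quasi-steady nutrient/drug concentration } c \\
  \nabla \cdot \mathbf{u} &= k_p(c) - k_a(c)
    && \text{incompressibility: net growth = proliferation $-$ apoptosis} \\
  \frac{\mathrm{d}R}{\mathrm{d}t} &= u_r\bigl(R(t), t\bigr)
    && \text{the free boundary moves with the radial cell velocity}
\end{aligned}
```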
Abstract:
This paper presents a formal methodology for attack modeling and detection in networks. Our approach has three phases. First, we extend the basic attack tree approach [1] to capture (i) the temporal dependencies between components and (ii) the expiration of an attack. Second, using the enhanced attack trees (EAT) we build a tree automaton that accepts a sequence of actions from an input stream if there is a traversal of an attack tree from the leaves to the root node. Finally, we show how to construct an enhanced parallel automaton (EPA) that has each tree automaton as a subroutine and can process the input stream by considering multiple trees simultaneously. As a case study, we show how to represent attacks on IEEE 802.11 and construct an EPA for them.
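The toy recogniser below illustrates the two EAT extensions named above, temporal ordering of child goals and expiration of a partial attack, on a flat event stream; it is an illustrative sketch, not the paper's tree-automaton or EPA construction, and the Wi-Fi attack example is invented.

```python
def match(node, stream, start=0):
    """Try to match `node` against `stream`, a list of (time, action)
    pairs, beginning at index `start`. Returns (next_index, start_time)
    on success, or None on failure."""
    if not node.get("children"):               # leaf: find the action
        for i in range(start, len(stream)):
            if stream[i][1] == node["action"]:
                return i + 1, stream[i][0]
        return None
    i, t_first = start, None                   # internal node: children in order
    for child in node["children"]:
        res = match(child, stream, i)
        if res is None:
            return None
        i, t = res
        t_first = t if t_first is None else t_first
    ttl = node.get("ttl")
    if ttl is not None and stream[i - 1][0] - t_first > ttl:
        return None                            # partial attack expired
    return i, t_first

# Root matches iff a leaves-to-root traversal is witnessed in the stream.
deauth_attack = {"children": [{"action": "probe"},
                              {"action": "deauth_flood"}], "ttl": 30.0}
events = [(0.0, "probe"), (12.5, "deauth_flood")]
print(match(deauth_attack, events) is not None)  # True: attack detected
```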
Abstract:
Speaker diarization is the process of annotating an input audio stream with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of 'who spoke when'. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data.

The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and the systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling.

The use of heuristic approaches for the speaker segmentation task was investigated first, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate.

Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
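For context, the sketch below shows the classic BIC-style merge test commonly used in diarization clustering, as a stand-in for the thesis's Bayes factor criterion; the Bayes factor additionally models the uncertainty in the Gaussian speaker-model estimates and admits prior information, which this simpler test does not.

```python
import numpy as np

def delta_bic(x1, x2, lam=1.0):
    """BIC-based same/different-speaker test for two segments of feature
    frames (n_frames x n_dims). A positive value suggests two distinct
    speakers (keep the boundary); a negative value favours merging."""
    x = np.vstack([x1, x2])
    n1, n2, n = len(x1), len(x2), len(x1) + len(x2)
    d = x.shape[1]
    logdet = lambda a: np.linalg.slogdet(np.cov(a, rowvar=False))[1]
    gain = 0.5 * (n * logdet(x) - n1 * logdet(x1) - n2 * logdet(x2))
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)  # model-size term
    return gain - lam * penalty
```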
Abstract:
Deterministic computer simulation of physical experiments is now a common technique in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct. The use of complex computer models or codes in place of physical experiments leads to the study of computer experiments, which are used to investigate many scientific phenomena. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing area of statistical experimental design. This paper discusses some practical issues in designing computer simulations and/or experiments for manufacturing systems, and a case study approach is reviewed and presented.
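As a small concrete example of choosing the input runs for a computer experiment, the sketch below draws a space-filling Latin hypercube design with SciPy, one common choice in this field; the three factors and their ranges are illustrative, not taken from the case study.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=1)   # 3 input factors
unit_design = sampler.random(n=20)          # 20 simulation runs in [0, 1]^3

# Scale to the engineering ranges of each input, e.g. machine speed,
# buffer size, mean interarrival time (illustrative values).
design = qmc.scale(unit_design, l_bounds=[0.5, 1, 10], u_bounds=[2.0, 50, 60])
print(design.shape)  # (20, 3): one row of input choices per code run
```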
Abstract:
A demo video showing the BPMVM prototype using several natural user interfaces, such as multi-touch input, full-body tracking and virtual reality.
Abstract:
Keeping exotic plant pests out of our country relies on good border control or quarantine. However, with increasing globalization and mobilization, some things slip through. Then the back-up systems become important. These can include an expensive form of surveillance that purposively targets particular pests. A much wider net is provided by general surveillance, which is assimilated into everyday activities, like farmers checking the health of their crops. In fact, farmers and even home gardeners have provided a front-line warning system for some pests (e.g. the European wasp) that could otherwise have wreaked havoc. Mathematics is used to model how surveillance works in various situations. Within this virtual world we can play with various surveillance and management strategies to "see" how they would work, or how to make them work better. One of our greatest challenges is estimating some of the input parameters: because the pest hasn't been here before, it's hard to predict how readily it might establish and spread, and what types of symptoms it might express. So we rely on experts to help us with this. This talk will look at the mathematical, psychological and logical challenges of helping experts to quantify what they think. We show how the subjective Bayesian approach is useful for capturing expert uncertainty, ultimately providing a more complete picture of what they think... and what they don't!
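A minimal sketch of the subjective Bayesian mechanics behind the talk: an expert's best guess and uncertainty about, say, an establishment probability are encoded as a Beta prior and updated as surveillance data arrive; all numbers are invented for illustration, not elicited values.

```python
from scipy import stats

# Expert: "establishment chance around 20%, but I'm quite unsure."
prior = stats.beta(a=2, b=8)               # mean 0.2, deliberately wide

# Surveillance data: 50 inspected sites, 4 with the pest established.
posterior = stats.beta(a=2 + 4, b=8 + 46)  # conjugate Beta-Binomial update

print(prior.mean(), posterior.mean())      # expert belief shifts with data
print(posterior.interval(0.9))             # 90% credible interval
```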
Abstract:
The double-pass counter-flow v-groove collector is considered one of the most efficient solar air collectors. In this design, the inlet air initially flows along the top part of the collector, changes direction once it reaches the end of the collector, and flows below the collector to the outlet. A mathematical model is developed for this type of collector and simulation is carried out in MATLAB. The simulation results were verified against three published research results, and it was found that the simulation can accurately predict the performance of the air collector, as shown by the comparison of experimental data with simulation. The difference between the predicted and experimental results is at most approximately 7%, which is within acceptable limits given the uncertainties in the input parameter values used for the comparison. A parametric study was performed and it was found that solar radiation, inlet air temperature, flow rate and collector length have a significant effect on the efficiency of the air collector. Additionally, the results are compared with those of a single-flow v-groove collector.
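For reference, the standard air-collector relations that any such model must reproduce, where Q_u is the useful heat gain, \dot{m} the air mass flow rate, c_p the specific heat of air, I the incident solar radiation and A_c the collector area; the paper's MATLAB model additionally solves coupled energy balances for both passes of the v-groove absorber, which these summary equations do not capture.

```latex
Q_u = \dot{m}\, c_p \left( T_{\mathrm{out}} - T_{\mathrm{in}} \right),
\qquad
\eta = \frac{Q_u}{I \, A_c}
```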
Abstract:
Biodiesel, produced from renewable feedstocks, represents a more sustainable source of energy and will therefore play a significant role in providing the energy requirements for transportation in the near future. Chemically, all biodiesels are fatty acid methyl esters (FAME), produced from raw vegetable oils and animal fats. However, clear differences in chemical structure are apparent from one feedstock to the next in terms of chain length, degree of unsaturation, number of double bonds and double bond configuration, all of which determine the fuel properties of biodiesel. In this study, prediction models were developed to estimate the kinematic viscosity of biodiesel using an Artificial Neural Network (ANN) modelling technique. In developing the model, 27 parameters based on the chemical composition commonly found in biodiesel were used as input variables, and the kinematic viscosity of biodiesel was used as the output variable. The data needed to develop and simulate the network were collected from more than 120 published peer-reviewed papers. The Neural Networks Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture and learning algorithm were optimised by trial and error to obtain the best prediction of kinematic viscosity. The predictive performance of the model was determined by calculating the coefficient of determination (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found high predictive accuracy of the ANN in predicting the fuel properties of biodiesel and demonstrated the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties. The model developed in this study can therefore be a useful tool for accurately predicting biodiesel fuel properties, instead of undertaking costly and time-consuming experimental tests.
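An analogous sketch in Python of this kind of ANN set-up (the study itself used the MATLAB Neural Networks Toolbox): composition descriptors in, kinematic viscosity out. The synthetic data, the single 10-neuron hidden layer and all names are placeholders, not the optimised architecture or data from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 27))                          # 27 composition descriptors
y = X @ rng.random(27) + rng.normal(0, 0.05, 300)  # synthetic viscosity target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {ann.score(X_te, y_te):.3f}")
```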
Abstract:
Executive Summary

Emergency Departments (EDs) locally, nationally and internationally are becoming increasingly busy. Within this context, it can be challenging to deliver a health service that is safe, of high quality and cost-effective. Whilst various models are described within the literature that aim to measure ED 'work' or 'activity', they are often not linked to a measure of the costs of providing that activity. It is important for hospital and ED managers to understand and apply this link so that optimal staffing and financial resourcing can be justifiably sought. This research is timely given that Australia has moved towards a national Activity Based Funding (ABF) model for ED activity. ABF is believed to increase transparency of care and fairness (i.e. equal work receives equal pay). ABF involves a person-, performance- or activity-based payment system, and thus a move away from historical "block payment" models that do not incentivise efficiency and quality.

The aim of the Statewide Workforce and Activity-Based Funding Modelling Project in Queensland Emergency Departments (SWAMPED) is to identify and describe best-practice Emergency Department (ED) workforce models within the current context of ED funding under an ABF model. The study comprises five distinct phases. This monograph (Phase 1) comprises a systematic review of the literature that was completed in June 2013. The remaining phases include a detailed survey of Queensland hospital EDs' resource levels, activity and operational models of care; development of new resource models; development of a user-friendly modelling interface for ED managers; and production of a final report that identifies policy implications. The anticipated deliverable of this research is an ABF-based Emergency Workforce Modelling Tool that will enable ED managers to profile both their workforce and their operational models of care. Additionally, the tool will assist in more accurately informing the staffing numbers required in the future, inform planning of expected expenditures, and be used for standardisation and benchmarking across similar EDs.

Summary of the Findings

Within the remit of this review of the literature, the main findings include:

1. EDs are becoming busier and more congested. Rising demand, barriers to ED throughput and transitions of care all contribute to ED congestion. In addition, requests by organisational managers and the community require continued broadening of the scope of services required of the ED, further increasing demand. As the population lives longer with more lifestyle diseases, its propensity to require ED care continues to grow.

2. Various models of care within EDs exist. Models often vary to account for site-specific characteristics such as staffing profile, ED geographical location (e.g. metropolitan or rural site), and patient demographic profile (e.g. paediatrics, older persons, ethnicity). Existing and new models implemented within EDs often depend on the target outcome requiring change; generally this is focussed on addressing issues in the input, throughput or output areas of the ED. Even for models targeting similar demographics or illnesses, the structure and process elements underpinning the model can vary, which can impact on outcomes and on variance in the patient and carer experience between and within EDs. Major models of care to manage throughput inefficiencies include:

A. Workforce Models of Care, which focus on the appropriate level of staffing for a given workload to provide prompt, timely and clinically effective patient care within an emergency care setting. The studies reviewed suggest that the early involvement of a senior medical decision-maker and/or specialised nursing roles such as Emergency Nurse Practitioners and Clinical Initiatives Nurses, or primary-contact or extended-scope Allied Health Practitioners, can facilitate patient flow and improve key indicators such as length of stay, as well as reducing the number of patients who did not wait to be seen, among others.

B. Operational Models of Care within EDs, which focus on mechanisms for streaming (e.g. fast-tracking) or otherwise grouping patient care based on acuity and complexity to assist with minimising any throughput inefficiencies. While studies support the positive impact of these models in general, it appears that they are most effective when adequately resourced.

3. Various methods of measuring ED activity exist. Measuring ED activity requires careful consideration of models of care and staffing profile, and the ability to account for factors including patient census, acuity, length of stay, intensity of intervention and department skill-mix, plus an adjustment for non-patient care time.

4. Gaps in the literature. Continued ED growth calls for new and innovative care delivery models that are safe, clinically effective and cost-effective. New roles and stand-alone service delivery models are often evaluated in isolation, without considering the global and economic impact on staffing profiles. Whilst various models for accounting for and measuring health care activity exist, costing studies and cost-effectiveness studies are lacking for EDs, making accurate and reliable assessments of care models difficult. There is a need to further understand, refine and account for measures of ED complexity that define a workload upon which resources and appropriate staffing determinations can be made into the future. There is also a need for continued monitoring and comprehensive evaluation of newly implemented workforce modelling tools.

This research acknowledges those gaps and aims to:
• Undertake a comprehensive and integrated whole-of-department workforce profiling exercise relative to resources in the context of ABF;
• Inform workforce requirements based on traditional quantitative markers (e.g. volume and acuity) combined with qualitative elements of ED models of care;
• Develop a comprehensive and validated workforce calculation tool that can be used to better inform, or at least guide, workforce requirements in a more transparent manner.
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
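A minimal sketch of one CE output-feedback step as described here: a Kalman measurement update produces the state estimate, which is then fed into the deterministic control law as if it were the true state. The input-saturated linear law below is an illustrative stand-in for the constrained receding horizon law of the chapter.

```python
import numpy as np

def ce_step(x_hat, P, y, u_sat, A, B, C, K, W, V):
    """One certainty-equivalence step: Kalman update of the estimate,
    then the deterministic (input-saturated) law applied to x_hat."""
    S = C @ P @ C.T + V                      # innovation covariance
    L = P @ C.T @ np.linalg.inv(S)           # Kalman gain
    x_hat = x_hat + L @ (y - C @ x_hat)      # measurement update
    P = P - L @ C @ P
    u = np.clip(-K @ x_hat, -u_sat, u_sat)   # CE law under input constraint
    x_hat = A @ x_hat + B @ u                # time update for the next step
    P = A @ P @ A.T + W
    return u, x_hat, P
```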
Abstract:
In this paper, we identify two types of injustice as antecedents of abusive supervision and ultimately of subordinate psychological distress and insomnia. We examine distributive justice (an individual's evaluation of their input-to-output ratio compared to relevant others) and interactional injustice (the quality of interpersonal treatment received when procedures are implemented). Using a sample of Filipinos in a variety of occupations, we identify two types of injustice experienced by supervisors as stressors that provoke them to display abusive supervision to their subordinates. We examine two consequences of abusive supervision: subordinate psychological distress and insomnia. In addition, we identify two moderators of these relationships, namely supervisor distress and subordinate self-esteem. We collected survey data from multiple sources, including subordinates, their supervisors, and their partners. Data were obtained from 175 matched supervisor-subordinate dyads over a 6-month period, with subordinates' partners providing ratings of insomnia. Results of structural equation modelling analyses provided support for an indirect effects model in which supervisors' experience of unfair treatment cascades down the organization, resulting in subordinate psychological distress and, ultimately, in their insomnia. In addition, results partially supported the proposed moderated relationships in the cascading model. © 2010 Taylor & Francis.
Abstract:
A bioeconomic model was developed to evaluate the potential performance of brown tiger prawn stock enhancement in Exmouth Gulf, Australia. This paper presents the framework for the bioeconomic model and a risk assessment for all components of a stock enhancement operation, i.e. hatchery, grow-out, release, population dynamics, fishery, and monitoring, for a commercial-scale enhancement of about 100 metric tonnes, a 25% increase in the average annual catch in Exmouth Gulf. The model incorporates uncertainty in parameter estimates by using a distribution for each parameter over a certain range, based on experiments, published data, or similar studies. Monte Carlo simulation was then used to quantify the effects of these uncertainties on the model output and on the economic potential of a particular production target. The model incorporates density-dependent effects in the nursery grounds of brown tiger prawns. The results predict that a release of 21 million 1 g prawns would produce an estimated enhanced prawn catch of about 100 t. This scale of enhancement has a 66.5% chance of making a profit. The largest contributor to the overall uncertainty of the enhanced prawn catch was the post-release mortality, followed by the density-dependent mortality caused by released prawns. These two mortality rates are the most difficult to estimate in practice and are much under-researched in stock enhancement.
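A toy sketch of the Monte Carlo risk analysis described: uncertain inputs are drawn from distributions over plausible ranges and pushed through the production and profit calculation. The distributions, the 30 g harvest weight, the price and the fixed cost below are invented for illustration, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
released = 21e6                               # 1 g prawns released
survival = rng.uniform(0.10, 0.50, n)         # post-release survival (uncertain)
dd_loss = rng.uniform(0.00, 0.30, n)          # density-dependent mortality
catchability = rng.uniform(0.30, 0.60, n)     # fraction eventually harvested
weight_kg = 0.030                             # assumed harvest weight per prawn

catch_t = released * survival * (1 - dd_loss) * catchability * weight_kg / 1000
profit = catch_t * 12_000 - 600_000           # toy price ($/t) and fixed cost
print(f"mean catch: {catch_t.mean():.0f} t, P(profit > 0): {(profit > 0).mean():.1%}")
```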