674 results for Situation models
Abstract:
Overall, computer models and simulations have a rather disappointing record within the management sciences as tools for predicting the future. Social and market environments can be influenced by an overwhelming number of variables, and it is therefore difficult to use computer models to make forecasts or to test hypotheses concerning the relationship between individual behaviours and macroscopic outcomes. At the same time, however, advocates of computer models argue that they can be used to overcome the human mind's inability to cope with several complex variables simultaneously or to understand concepts that are highly counterintuitive. This paper seeks to bridge the gap between these two perspectives by suggesting that management research can indeed benefit from computer models by using them to formulate fruitful hypotheses.
Abstract:
We consider a robust filtering problem for uncertain discrete-time, homogeneous, first-order, finite-state hidden Markov models (HMMs). The class of uncertain HMMs considered is described by a conditional relative entropy constraint on measures perturbed from a nominal regular conditional probability distribution, given the previous posterior state distribution and the latest measurement. Under this class of perturbations, a robust infinite-horizon filtering problem is first formulated as a constrained optimization problem before being transformed, via variational results, into an unconstrained optimization problem; the latter can be elegantly solved using risk-sensitive, information-state based filtering.
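For orientation, the nominal object being perturbed in this formulation is the regular conditional distribution underlying the standard forward filter for a finite-state HMM; in our notation (not necessarily the paper's), with transition probabilities $a_{ij}$ and observation likelihoods $b_j(y)$, the nominal posterior recursion is

$$ \hat{p}_{k+1}(j) = \frac{b_j(y_{k+1}) \sum_i a_{ij}\, \hat{p}_k(i)}{\sum_{j'} b_{j'}(y_{k+1}) \sum_i a_{ij'}\, \hat{p}_k(i)}. $$

The robust problem replaces this nominal conditional distribution with the worst case over a relative-entropy ball around it; the variational transformation mentioned above is what reduces that worst case to a risk-sensitive, information-state computation.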
Abstract:
A time series method for the determination of combustion chamber resonant frequencies is outlined. This technique employs Markov chain Monte Carlo (MCMC) to infer the parameters of a chosen model of the data. The development of the model is included and the resonant frequency is characterised as a function of time. Potential applications for cycle-by-cycle analysis are discussed, and the bulk temperature of the gas and the trapped mass in the combustion chamber are evaluated as functions of time from the resonant frequency information.
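As a concrete, hedged illustration of the inference step described above (a minimal sketch, not the authors' model: the decaying-sinusoid signal model, parameter values and sampler settings below are all assumptions), a random-walk Metropolis sampler can recover the resonant frequency of a noisy pressure window:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pressure window: a decaying resonance at f0 = 6 kHz plus noise
# (illustrative values only; real data would come from a pressure transducer).
fs, f0, tau, sigma = 100_000, 6000.0, 4e-3, 0.05
t = np.arange(0, 0.01, 1 / fs)
y = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t) + sigma * rng.normal(size=t.size)

def log_like(f):
    """Gaussian log-likelihood of the data under a decaying-sinusoid model."""
    model = np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return -0.5 * np.sum((y - model) ** 2) / sigma**2

# Random-walk Metropolis over the resonant frequency f,
# initialised near a coarse FFT peak estimate.
samples, f_cur, ll_cur = [], 5900.0, log_like(5900.0)
for _ in range(20_000):
    f_prop = f_cur + 10.0 * rng.normal()
    ll_prop = log_like(f_prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:   # accept/reject step
        f_cur, ll_cur = f_prop, ll_prop
    samples.append(f_cur)

post = np.array(samples[5000:])                    # discard burn-in
print(f"posterior mean f = {post.mean():.1f} Hz +/- {post.std():.1f} Hz")
```

Because the speed of sound scales with the square root of bulk temperature, a posterior over the resonant frequency translates, for a known chamber geometry, into a posterior over bulk gas temperature, which is the kind of cycle-by-cycle quantity the abstract evaluates.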
Abstract:
How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that agent-based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we discuss a recently reported case where a biased worm (an on-screen audience-response tracker) in an election debate led to significant distortions in participants' reports of who won the debate (Davis et al. 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with associated significant differences between the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses. A modelling technology that could account for such strong social contextual effects would benefit regulatory bodies that need to navigate between multiple interests and concerns, and we present one viable avenue for constructing such a technology. A geometric approach is presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states chosen with reference to the problem space.
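One minimal way to read the geometric proposal above (a sketch under our own assumptions; the vectors and bases below are illustrative, not the authors' construction): the agent's internal state is a unit vector, each social context is an orthonormal basis, and squared projections onto the basis states give response probabilities, so the same internal state yields different response statistics in different contexts:

```python
import numpy as np

# Agent's internal state as a unit vector in a 2-D "problem space".
state = np.array([0.8, 0.6])
state = state / np.linalg.norm(state)

def response_probs(state, basis):
    """Squared projections of the state onto a context's basis states."""
    return np.array([np.dot(state, b) ** 2 for b in basis])

# Two social contexts = two different orthonormal bases for the same space
# (hypothetical bases, e.g. "debate watched with a biased worm" vs without).
context_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
theta = np.pi / 6                      # rotated basis: a different context
context_b = [np.array([np.cos(theta), np.sin(theta)]),
             np.array([-np.sin(theta), np.cos(theta)])]

# The same internal state yields different response probabilities per
# context, the kind of social context effect the abstract describes.
print(response_probs(state, context_a))   # e.g. [0.64 0.36]
print(response_probs(state, context_b))
```

This projection-based reading also suggests one natural meaning for "interference" between related questions: answering one question (projecting onto one basis) changes the state relative to the basis of the next question, so question order matters.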
Abstract:
Emergency Health Services (EHS), encompassing hospital-based Emergency Departments (ED) and pre-hospital ambulance services, are a significant and high-profile component of Australia's health care system, and congestion of these services, evidenced by physical overcrowding and prolonged waiting times, is causing considerable community and professional concern. This concern relates not only to Australia's capacity to manage daily health emergencies but also to its ability to respond to major incidents and disasters. EHS congestion is a result of the combined effects of increased demand for emergency care, increased complexity of acute health care, and blocked access to ongoing care (e.g. inpatient beds). Despite this conceptual understanding, there is a lack of robust evidence to explain the factors driving increased demand, or how demand contributes to congestion, and public policy responses have therefore relied upon limited or unsound information. The Emergency Health Services Queensland (EHSQ) research program proposes to determine the factors influencing the growing demand for emergency health care and to establish options for alternative service provision that may safely meet patients' needs. The EHSQ study is funded by the Australian Research Council (ARC) through its Linkage Program and is supported financially by the Queensland Ambulance Service (QAS). This monograph is part of a suite of publications based on the research findings that examine the existing literature and the current operational context. Literature was sourced using standard search approaches and a range of databases, as well as a selection of articles cited in the reviewed literature. Public sources, including the Australian Institute of Health and Welfare (AIHW), the Council of Ambulance Authorities (CAA) Annual Reports, the Australian Bureau of Statistics (ABS) and the Department of Health and Ageing (DoHA), were examined for trend data across Australia.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation, of which five have been published and the other two are under review. This project is financially supported by a QUTPRA Grant. The twenty-first century started with the resurrection of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which enjoyed sustained economic growth during the past century, have begun to reach, or have already reached, their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced by petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which potentially enjoys a more abundant, widely distributed, and cost-effective resource base, can play an indispensable role in the paradigm transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic material because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180°C, and, most importantly, unlike PP, is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than for plastics produced from petrochemical resources (e.g. PP costs US$1 kg⁻¹, whereas PHB costs US$8 kg⁻¹), and its stiff and brittle nature makes processing difficult and impedes its ability to handle high impact. Lignin, cellulose and hemicellulose are the three main components of every lignocellulosic biomass. Lignin is a natural polymer occurring in the plant cell wall and, after cellulose, is the most abundant polymer in nature. It is extracted mainly as a by-product in the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-adding properties: it is an amorphous polymer with hydrophobic behaviour, and it remains largely unexploited to date. These properties make it a good candidate for blending with PHB; technically, blending can be a viable route to reducing cost and enhancing processing properties. Theoretically, lignin and PHB affect each other's physicochemical properties when they become miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with neat lignin and PHB, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution).
The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight, and functional group content. The thermal properties of the lignins were also determined. The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of lignin composite materials, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that the distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review of lignin applications, which includes lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results from the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted well to the Gordon-Taylor and Kwei models. Fourier transform infrared spectroscopy (FT-IR) evaluation showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between the reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature range of blend manufacturing. Of significance in the study is the drop in the apparent activation energy, Ea, from 112 kJ mol⁻¹ for pure PHB to half that value for the blends. This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤30 wt%), lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to the environmental study of soda lignin/PHB blends. The biodegradability of the lignin/PHB blends is compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR, and X-ray photoelectron spectroscopy (XPS) were used in the study.
The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To understand the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains three times fewer phenolic hydroxyl groups than the parent soda lignin used in preparing the blends reported in Chapters 3 and 4, was blended with PHB and the properties of the blends investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for the soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those of the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB. FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences existing between the two lignins did not manifest themselves in the properties of their blends.
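For reference, the two miscibility models against which the Tg data in Chapters 3 and 6 are fitted have the standard forms (with $w_i$ the component weight fractions, $T_{g1}$ and $T_{g2}$ the component glass transition temperatures, and $k$ and $q$ fitted interaction parameters):

$$ T_g = \frac{w_1 T_{g1} + k\, w_2 T_{g2}}{w_1 + k\, w_2} \quad \text{(Gordon-Taylor)}, \qquad T_g = \frac{w_1 T_{g1} + k\, w_2 T_{g2}}{w_1 + k\, w_2} + q\, w_1 w_2 \quad \text{(Kwei)}. $$

On this reading, the slightly lower interaction parameters reported for the methanol-soluble lignin blends correspond directly to the weaker lignin-PHB association inferred from the FT-IR data.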
Abstract:
Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most existing LM approaches rely only on term occurrences in documents, queries and document collections. In traditional unigram-based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in two ways: (1) term relationships are used to expand the query model instead of the document model, so that the query expansion process can be implemented naturally; (2) we exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information flow relationships are not simply pairwise term relationships as used in previous studies, but hold between a set of terms and another term. They allow for context-dependent query expansion. Our experiments conducted on TREC collections show that we can obtain large and significant improvements with our approach. This study shows that LM is an appropriate framework in which to implement effective query expansion.
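A minimal sketch of query-model expansion in the LM framework (our illustration under stated assumptions; the paper's Information Flow relationships are derived differently and map term sets to terms, whereas the toy map below is single-source): interpolate the maximum-likelihood query model with a relationship-based expansion model, then rank documents by query likelihood under Dirichlet smoothing:

```python
import math
from collections import Counter

# Toy two-document collection; the paper's experiments use TREC collections.
docs = {
    "d1": "markov models for speech and language processing".split(),
    "d2": "retrieval models rank documents for a user query".split(),
}
collection = [w for d in docs.values() for w in d]
cf = Counter(collection)

# Hypothetical relationships standing in for Information Flow output.
relationships = {"retrieval": {"query": 0.6, "documents": 0.4}}

def expanded_query_model(query, lam=0.7):
    """P(w|q') = lam * P_ml(w|q) + (1 - lam) * P(w | related terms)."""
    model = {w: lam * c / len(query) for w, c in Counter(query).items()}
    for t in query:
        for w, p in relationships.get(t, {}).items():
            model[w] = model.get(w, 0.0) + (1 - lam) * p / len(query)
    total = sum(model.values())                 # renormalise the mixture
    return {w: v / total for w, v in model.items()}

def score(doc, qmodel, mu=10.0):
    """Query likelihood of a Dirichlet-smoothed document language model."""
    tf, dlen = Counter(doc), len(doc)
    return sum(qw * math.log((tf[w] + mu * cf[w] / len(collection)) / (dlen + mu))
               for w, qw in qmodel.items() if cf[w] > 0)

qm = expanded_query_model(["retrieval", "models"])
print(sorted(docs, key=lambda d: score(docs[d], qm), reverse=True))
```

The design point the abstract makes is visible even in this toy: expansion happens on the query side, so the document models are left untouched and expansion terms enter the ranking through the interpolated query model.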
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses that might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of the four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
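As we read the description above, the conditional specification behind the CAR layered model (notation ours, so treat this as a sketch rather than the thesis's exact prior) restricts the neighbourhood $j \sim i$ to sites in the same depth layer $d(i)$:

$$ \phi_i \mid \phi_{-i} \sim \mathcal{N}\!\left( \frac{1}{n_i} \sum_{j \sim i} \phi_j,\; \frac{\tau^2_{d(i)}}{n_i} \right), $$

where $n_i$ counts the same-layer neighbours of site $i$. Because $\tau^2_d$ is indexed by layer, the spatially structured variance (and, analogously, the unstructured error variance) is free to differ at every depth, which is exactly the flexibility the text attributes to the layered construction.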
Abstract:
Velocity jump processes are discrete random walk models that have many applications, including the study of biological and ecological collective motion. In particular, velocity jump models are often used to represent a type of persistent motion, known as "run and tumble", which is exhibited by some isolated bacterial cells. All previous velocity jump processes are non-interacting, which means that crowding effects and agent-to-agent interactions are neglected. By neglecting these agent-to-agent interactions, traditional velocity jump models are applicable only to very dilute systems. Our work is motivated by the fact that many applications in cell biology, such as wound healing, cancer invasion and development, often involve tissues that are densely packed with cells, where cell-to-cell contact and crowding effects can be important. To describe these kinds of high cell density problems using a velocity jump process, we introduce three different classes of crowding interactions into a one-dimensional model. Simulation data and averaging arguments lead to a suite of continuum descriptions of the interacting velocity jump processes. We show that the resulting systems of hyperbolic partial differential equations predict the mean behavior of the stochastic simulations very well.
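As a hedged illustration of one of the simplest crowding mechanisms that could be added to a one-dimensional velocity jump process (a minimal sketch using a simple exclusion rule; the paper studies three classes of interaction, which need not coincide with this one):

```python
import numpy as np

rng = np.random.default_rng(1)

L, N, steps = 200, 60, 1000      # lattice sites, agents, time steps
turn_rate = 0.1                  # per-step probability of a "tumble" (velocity flip)

pos = rng.choice(L, size=N, replace=False)   # distinct initial sites
vel = rng.choice([-1, 1], size=N)            # run directions

occupied = np.zeros(L, dtype=bool)
occupied[pos] = True

for _ in range(steps):
    for k in rng.permutation(N):             # random sequential update
        if rng.random() < turn_rate:         # tumble: flip velocity
            vel[k] = -vel[k]
        target = (pos[k] + vel[k]) % L       # run step, periodic boundary
        if not occupied[target]:             # exclusion: move only if empty
            occupied[pos[k]] = False
            occupied[target] = True
            pos[k] = target

# Column-averaged density histogram; averages of this kind are what the
# continuum (hyperbolic PDE) descriptions are compared against.
print(np.histogram(pos, bins=10, range=(0, L))[0])
```

In the dilute limit (few agents per site) the exclusion rule rarely binds and the classical non-interacting run-and-tumble behaviour is recovered, which is consistent with the abstract's point that traditional models apply only to dilute systems.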
Abstract:
In the past, training in clinical psychology in Australia and overseas has been dominated by definitions of input: hours of classes or supervision, and the inclusion of specific components. While prospective practitioners have been required to demonstrate the acquisition of generic competencies, satisfaction of these input-driven criteria has been required for both accreditation and registration. Ironically, for a discipline that prides itself on requiring empirical bases for practice and communicating those to students (Calhoun, Moras, Pilkonis, & Rehm, 1998), training criteria have been derived primarily from accepted wisdom rather than from a sound body of data. The situation has been remarkably like that of a treatment establishing standards of fidelity before its effective components are known, an action our profession has correctly criticised in the past (Herbert & Mueser, 1992).
Abstract:
We propose to use Tensor Space Modeling (TSM) to represent and analyze users' web log data, which reflects multiple interests and spans multiple dimensions. Further, we propose to use the decomposition factors of the tensors to cluster users based on the similarity of their search behaviour. Preliminary results show that the proposed method outperforms traditional Vector Space Model (VSM) based clustering.
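A minimal sketch of this pipeline under our own assumptions (the tensor layout, decomposition choice and cluster count below are illustrative; the paper's TSM construction and decomposition may differ): arrange the log data as a users × query-terms × pages tensor, obtain user-mode factors from a truncated SVD of the mode-1 unfolding, and cluster users on those factors:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical web-log tensor: users x query-terms x clicked-pages counts.
n_users, n_terms, n_pages = 30, 12, 8
X = rng.poisson(1.0, size=(n_users, n_terms, n_pages)).astype(float)

# Mode-1 unfolding: each row is one user's full (term, page) profile.
X1 = X.reshape(n_users, n_terms * n_pages)

# Low-rank user factors from a truncated SVD; these play the role of the
# "decomposition factors" used for clustering (full CP/Tucker
# decompositions of the tensor would be the analogous multiway step).
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
rank = 4
user_factors = U[:, :rank] * s[:rank]

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(user_factors)
print(labels)
```

The contrast with a VSM baseline is that the VSM would flatten each user to a single term vector up front, whereas the tensor keeps the term and page modes separate until the decomposition, which is what lets multiple interests show up as distinct factors.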
Abstract:
Previous research has put forward a number of properties of business process models that have an impact on their understandability. Two such properties are compactness and (block-)structuredness. What has not been sufficiently appreciated at this point is that these desirable properties may be at odds with one another. This paper presents the results of a two-pronged study aimed at exploring the trade-off between compactness and structuredness of process models. The first prong of the study is a comparative analysis of the complexity of a set of unstructured process models from industrial practice and of their corresponding structured versions. The second prong is an experiment wherein a cohort of students was exposed to semantically equivalent unstructured and structured process models. The key finding is that structuredness is not an absolute desideratum for process model understandability. Instead, subtle trade-offs between structuredness and other model properties are at play.
Abstract:
Evaluating the safety of different traffic facilities is a complex and crucial task. Microscopic simulation models have been widely used for traffic management but have been largely neglected in traffic safety studies. Using micro-simulation to study safety is more ethical and accessible than traditional safety studies, which only assess historical crash data. However, current microscopic models are unable to mimic unsafe driver behavior, as they are based on presumptions of safe driver behavior. This highlights the need for a critical examination of current microscopic models to determine which components and parameters affect the reproduction of safety indicators. The question then arises whether these safety indicators are valid indicators of traffic safety. Selected safety indicators were therefore tested for straight motorway segments in Brisbane, Australia. This test examined the capability of a micro-simulation model and provides a better understanding of micro-simulation models and of how such models, in particular car-following models, can be enriched to produce more accurate safety indicators.
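As one concrete example of the class of safety indicators in question (our illustration; the indicators actually selected for the Brisbane motorway segments are not specified here), time-to-collision (TTC) can be computed from car-following trajectories as the clear gap divided by the closing speed whenever the follower is faster than the leader:

```python
import numpy as np

def time_to_collision(x_lead, x_follow, v_lead, v_follow, veh_len=4.5):
    """TTC per time step: clear gap / closing speed, defined only when the
    follower is approaching the leader; otherwise infinity (no conflict)."""
    gap = x_lead - x_follow - veh_len
    closing = v_follow - v_lead
    return np.where(closing > 0, gap / np.maximum(closing, 1e-9), np.inf)

# Toy trajectory: follower closing in at a constant 3 m/s speed difference.
t = np.arange(0, 8.5, 0.5)
x_lead = 30.0 + 20.0 * t
x_follow = 23.0 * t
ttc = time_to_collision(x_lead, x_follow,
                        np.full_like(t, 20.0), np.full_like(t, 23.0))

# A common screening rule flags steps with TTC below a threshold (e.g. 3 s).
print((ttc < 3.0).sum(), "of", t.size, "steps below the 3 s threshold")
```

The validity question raised in the abstract is visible here too: a simulated car-following model calibrated to safe behavior will rarely generate short TTCs, so the distribution of such indicators depends directly on the model's behavioural assumptions.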
Abstract:
Non-invasive vibration analysis has been used extensively to monitor the progression of dental implant healing and stabilization. It is now being considered as a method to monitor femoral implants in transfemoral amputees. This paper evaluates two modal analysis excitation methods and investigates their capability to detect changes at the interface between the implant and the bone that occur during osseointegration. Excitation of bone-implant physical models with an electromagnetic shaker provided higher coherence values and a greater number of modes over the same frequency range when compared to an impact hammer. Differences were detected in the natural frequencies and the fundamental mode shape of the model when the fit of the implant in the bone was altered. The ability to detect changes in the model's dynamic properties demonstrates the potential of modal analysis in this application and warrants further investigation.
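For readers unfamiliar with the coherence comparison above, a brief sketch (ours, with synthetic signals; the study's instrumentation, frequency ranges and model parameters are not reproduced here) of how coherence and a frequency response function are estimated from excitation and response records:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 5000                                   # Hz, sampling rate

# Hypothetical test: broadband (shaker-like) excitation driving a lightly
# damped mode near 400 Hz, plus measurement noise on the response.
x = rng.normal(size=20 * fs)                # excitation signal
b, a = signal.iirpeak(400, Q=30, fs=fs)     # resonant system near 400 Hz
y = signal.lfilter(b, a, x) + 0.05 * rng.normal(size=x.size)

# Coherence between excitation and response: values near 1 indicate a
# clean, linear input-output relationship at that frequency.
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=2048)

# FRF estimate H1 = Sxy / Sxx; peaks mark the natural frequencies.
f2, Sxy = signal.csd(x, y, fs=fs, nperseg=2048)
_, Sxx = signal.welch(x, fs=fs, nperseg=2048)
H1 = np.abs(Sxy) / Sxx
print("estimated natural frequency:", f2[np.argmax(H1)], "Hz")
```

The abstract's finding maps onto this picture directly: a persistent broadband shaker input yields high coherence across the band and resolves more modes, whereas a single hammer impact injects less energy per frequency and typically gives noisier coherence.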
Abstract:
With the increasing number of XML documents in varied domains, it has become essential to identify ways of finding interesting information in these documents. Data mining techniques are used to derive this interesting information. Mining of XML documents is strongly influenced by the document model adopted, owing to the semi-structured nature of these documents. Hence, in this chapter we present an overview of the various models of XML documents, how these models are used for mining, and some of the issues and challenges in these models. In addition, this chapter provides some insights into future models of XML documents for effectively capturing their two important features, namely structure and content, for mining.