317 results for Process models
Abstract:
In this paper we present a new simulation methodology for obtaining exact or approximate Bayesian inference for models of low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with the observed data one observation at a time, rather than matching on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, as is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm produces an unbiased estimate of the likelihood, resulting in exact posterior inference when embedded in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
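As a rough illustration of the matching idea, the sketch below estimates the likelihood in the style of an "alive" particle filter, assuming a hypothetical `simulate_obs(theta, t, rng)` model simulator and ignoring the auxiliary variables and particle histories used in the full method: at each time point the model is simulated until $N+1$ exact matches with the observed count are obtained, and the per-observation likelihood is estimated as $N/(\text{number of simulations} - 1)$.

```python
import numpy as np

def alive_filter_loglik(theta, y_obs, simulate_obs, N=100, max_sims=10**6):
    """Sketch of an 'alive'-style particle filter log-likelihood estimate for
    low-valued count time series, matching simulated data to each observation
    exactly. simulate_obs(theta, t, rng) is a hypothetical model simulator."""
    rng = np.random.default_rng()
    loglik = 0.0
    for t, y_t in enumerate(y_obs):
        matches, n_sims = 0, 0
        # Keep simulating until N+1 exact matches with the observed count.
        while matches < N + 1:
            n_sims += 1
            if n_sims > max_sims:
                return -np.inf  # exact matching too expensive; an ABC tolerance would be needed
            if simulate_obs(theta, t, rng) == y_t:
                matches += 1
        # Per-observation likelihood estimate: N / (number of simulations - 1).
        loglik += np.log(N) - np.log(n_sims - 1)
    return loglik
```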
Abstract:
The determinants and key mechanisms of cancer cell osteotropism have not been identified, mainly due to the lack of reproducible animal models representing the biological, genetic and clinical features seen in humans. An ideal model should be capable of recapitulating as many steps of the metastatic cascade as possible, thus facilitating the development of prognostic markers and novel therapeutic strategies. Most animal models of bone metastasis still have to be derived experimentally, as most syngeneic and transgenic approaches do not provide a robust skeletal phenotype and do not recapitulate the biological processes seen in humans. The xenotransplantation of human cancer cells or tumour tissue into immunocompromised murine hosts makes it possible to simulate early and late stages of the human disease. Human bone or tissue-engineered human bone constructs can be implanted into the animal to recapitulate more subtle, species-specific aspects of the mutual interaction between human cancer cells and the human bone microenvironment. Moreover, the replication of the entire "organ" bone makes it possible to analyse the interaction between cancer cells and the haematopoietic niche and to confer at least partial human immunity on the murine host. This process of humanisation is facilitated by novel immunocompromised mouse strains that allow a high engraftment rate of human cells or tissue. These humanised xenograft models provide an important research tool for studying human biological processes of bone metastasis.
Abstract:
This thesis reports on an investigation to develop an advanced and comprehensive milling process model of the raw sugar factory. Although the new model can be applied to both four-roller and six-roller milling units, it is primarily developed for the six-roller mills which are widely used in the Australian sugar industry. The approach taken was to gain an understanding of the previous milling process simulation model "MILSIM" developed at the University of Queensland nearly four decades ago. Although the MILSIM model was widely adopted in the Australian sugar industry for simulating the milling process, it did have some incorrect assumptions. The study aimed to eliminate all the incorrect assumptions of the previous model and develop an advanced model that represents the milling process correctly and tracks the flow of other cane components in the milling process which have not been considered in the previous models. The development of the milling process model was done in three stages. Firstly, an enhanced milling unit extraction model (MILEX) was developed to assess the mill performance parameters and predict the extraction performance of the milling process. New definitions for the milling performance parameters were developed and a complete milling train along with the juice screen was modelled. The MILEX model was validated with factory data and the variation in the mill performance parameters was observed and studied. Some case studies were undertaken to study the effect of fibre in juice streams, juice in cush return and imbibition% fibre on the extraction performance of the milling process. It was concluded from the study that the empirical relations developed for the mill performance parameters in the MILSIM model were not applicable to the new model; new empirical relations have to be developed before the model can be applied with confidence. Secondly, a soluble and insoluble solids model was developed using modelling theory and experimental data to track the flow of sucrose (pol), reducing sugars (glucose and fructose), soluble ash, true fibre and mud solids entering the milling train through the cane supply and their distribution in juice and bagasse streams. The soluble impurities and mud solids in cane affect the performance of the milling train and further processing of juice and bagasse. New mill performance parameters were developed in the model to track the flow of cane components. The developed model is the first of its kind and provides additional insight into the flow of soluble and insoluble cane components and the factors affecting their distribution in juice and bagasse. The model proved to be a good extension to the MILEX model for studying the overall performance of the milling train. Thirdly, the developed models were incorporated in a proprietary software package "SysCAD" for advanced operational efficiency and for availability in the 'whole of factory' model. The MILEX model was developed in SysCAD software to represent a single milling unit. Eventually the entire milling train and the juice screen were developed in SysCAD using a series of different controllers and features of the software. The models developed in SysCAD can be run from a macro-enabled Excel file and reports can be generated in Excel sheets. The flexibility of the software, ease of use and other advantages are described broadly in the relevant chapter. The MILEX model was developed in both static and dynamic modes; the application of the dynamic mode of the model is still in progress.
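As a purely illustrative sketch of the kind of component tracking described above (not the MILEX or solids model itself), the toy mass balance below splits hypothetical cane-component flows between extracted juice and bagasse using assumed, component-specific extraction fractions; all figures are invented for illustration.

```python
# Toy single-mill mass balance: each component of the incoming stream is
# split between extracted juice and bagasse by an assumed extraction fraction.
feed = {"sucrose": 140.0, "reducing_sugars": 6.0, "soluble_ash": 4.0,
        "true_fibre": 130.0, "mud_solids": 8.0, "water": 712.0}      # t/h, hypothetical
extraction = {"sucrose": 0.70, "reducing_sugars": 0.70, "soluble_ash": 0.68,
              "true_fibre": 0.02, "mud_solids": 0.35, "water": 0.60}  # assumed split fractions

juice = {k: m * extraction[k] for k, m in feed.items()}
bagasse = {k: m - juice[k] for k, m in feed.items()}

pol_extraction = 100.0 * juice["sucrose"] / feed["sucrose"]
print(f"Pol extraction for this unit: {pol_extraction:.1f}%")
```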
Abstract:
This book presents readers with the opportunity to fundamentally re-evaluate the processes of innovation and entrepreneurship, and to rethink how they might best be stimulated and fostered within our organizations and communities. The fundamental thesis of the book is that the entrepreneurial process is not a linear progression from novel idea to successful innovation, but an iterative series of experiments, where progress depends on the persistence and resilience of the individuals involved, and their ability to learn from failure as well as success. From this premise, the authors argue that the ideal environment for new venture creation is a form of “experimental laboratory,” a community of innovators where ideas are generated, shared, and refined; experiments are encouraged; and which in itself serves as a test environment for those ideas and experiments. This environment is quite different from the traditional “incubator,” which may impose the disciplines of the established firm too early in the development of the new venture.
Abstract:
A range of authors from the risk management, crisis management, and crisis communications literature have proposed different models as a means of understanding the components of crisis. A common element across these sources is a focus on preparedness practices before disturbance events and response practices during events. This paper provides a critical analysis of three key explanatory models of how crises escalate, highlighting the strengths and limitations of each approach. The paper introduces an optimised conceptual model utilising components from the previous work under the four phases of pre-event, response, recovery, and post-event. Within these four phases, a ten-step process is introduced that can enhance understanding of the progression of distinct stages of disturbance for different types of events. This crisis evolution framework is examined as a means to provide clarity and applicability to a range of infrastructure failure contexts and to provide a path for further empirical investigation in this area.
Abstract:
Modelling business processes for analysis or redesign usually requires the collaboration of many stakeholders. These stakeholders may be spread across locations or even companies, making co-located collaboration costly and difficult to organize. Modern process modelling technologies support remote collaboration but lack support for visual cues used in co-located collaboration. Previously we presented a prototype 3D virtual world process modelling tool that supports a number of visual cues to facilitate remote collaborative process model creation and validation. However, the added complexity of having to navigate a virtual environment and using an avatar for communication made the tool difficult to use for novice users. We now present an evolved version of the technology that addresses these issues by providing natural user interfaces for non-verbal communication, navigation and model manipulation.
Abstract:
Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. While the emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood, typical methods and models for business process design do not yet consider this context. In this paper, we describe our work on the design of a method framework and appropriate models to enable a context-aware process design approach. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application to the claims handling process at the case organization.
Abstract:
Mathematical models of mosquito-borne pathogen transmission originated in the early twentieth century to provide insights into how to most effectively combat malaria. The foundations of the Ross–Macdonald theory were established by 1970. Since then, there has been a growing interest in reducing the public health burden of mosquito-borne pathogens and an expanding use of models to guide their control. To assess how theory has changed to confront evolving public health challenges, we compiled a bibliography of 325 publications from 1970 through 2010 that included at least one mathematical model of mosquito-borne pathogen transmission and then used a 79-part questionnaire to classify each of 388 associated models according to its biological assumptions. As a composite measure to interpret the multidimensional results of our survey, we assigned a numerical value to each model that measured its similarity to 15 core assumptions of the Ross–Macdonald model. Although the analysis illustrated a growing acknowledgement of geographical, ecological and epidemiological complexities in modelling transmission, most models during the past 40 years closely resemble the Ross–Macdonald model. Modern theory would benefit from an expansion around the concepts of heterogeneous mosquito biting, poorly mixed mosquito-host encounters, spatial heterogeneity and temporal variation in the transmission process.
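For reference, the basic reproduction number of the classical Ross–Macdonald model is commonly written, in one standard notation, as

$$ R_0 = \frac{m\,a^2\,b\,c\,e^{-gn}}{r\,g}, $$

where $m$ is the ratio of mosquitoes to humans, $a$ the human-biting rate, $b$ and $c$ the mosquito-to-human and human-to-mosquito transmission probabilities, $g$ the mosquito mortality rate, $n$ the extrinsic incubation period, and $r$ the human recovery rate.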
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing literature-based discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been demonstrated extensively in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
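The efficiency argument can be illustrated with a toy sketch (hypothetical terms and stand-in random vectors, not any specific model from the paper): with fixed-dimensional term representations, scoring an indirect A-B-C association costs O(d) per term pair, independent of vocabulary or corpus size.

```python
import numpy as np

# Fixed-dimension term vectors (e.g. as produced by random-indexing style
# methods) keep similarity computations O(d) per pair. Vectors here are
# random stand-ins; the vocabulary is illustrative only.
d = 256
rng = np.random.default_rng(0)
vocab = ["fish_oil", "blood_viscosity", "raynauds_disease"]
vectors = {term: rng.standard_normal(d) for term in vocab}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def indirect_score(a, c, bridges):
    # Score an unobserved A-C connection through shared bridging B terms.
    return max(min(cosine(vectors[a], vectors[b]), cosine(vectors[b], vectors[c]))
               for b in bridges)

print(indirect_score("fish_oil", "raynauds_disease", ["blood_viscosity"]))
```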
Abstract:
In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modelling interactions between such species, we often make use of the mean field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean field approximation is only used in appropriate settings. In circumstances where the mean field approximation is unsuitable we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper we provide a method that overcomes many of the failures of the mean field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multi-species case, and show results specific to a two-species problem. We compare averaged discrete results to both the mean field approximation and our improved method which incorporates spatial correlations. We note that the mean field approximation fails dramatically in some cases, predicting very different behaviour from that seen upon averaging multiple realisations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behaviour in all cases, thus providing a more reliable modelling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
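For concreteness, the mean-field baseline that the correlation-based method improves upon can be written down directly. The sketch below integrates two-species mean-field occupancy equations with illustrative rate parameters; the improved method additionally evolves pairwise lattice-site occupancy correlations, which is not shown here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mean-field occupancy equations for a two-species volume-excluding
# birth-death process: spatial correlations are ignored, so each species
# proliferates into the average unoccupied fraction of the lattice.
Pp = np.array([1.0, 0.5])    # proliferation rates per species (illustrative)
Pd = np.array([0.1, 0.05])   # death rates per species (illustrative)

def mean_field(t, C):
    free = 1.0 - C.sum()                 # average fraction of unoccupied sites
    return Pp * C * free - Pd * C

sol = solve_ivp(mean_field, (0.0, 50.0), y0=[0.05, 0.05])
print(sol.y[:, -1])   # long-time occupancies predicted by the mean-field model
```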
Abstract:
The central thesis in the article is that the venture creation process is different for innovative versus imitative ventures. This holds up; the pace of the process differs by type of venture, as do, in line with theory-based hypotheses, the effects of certain human capital (HC) and social capital (SC) predictors. Importantly, and somewhat unexpectedly, the theoretically derived models using HC, SC, and certain controls are relatively successful in explaining progress in the creation process for the minority of innovative ventures, but achieve very limited success for the imitative majority. This may be due to a rationalistic bias in conventional theorizing and suggests that there is a need for considerable theoretical development regarding the important phenomenon of new venture creation processes. Another important result is that the building up of instrumental social capital, which we assess comprehensively and as a time-variant construct, is important for making progress with both types of ventures, and increasingly so as the process progresses. This result corroborates, with stronger operationalization and a more appropriate analysis method, what previously published research has only been able to hint at.
Abstract:
The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise and exact definition of an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features in detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
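A minimal sketch of the GMM-based novelty-detection step is given below, assuming feature extraction (optical flow, flow and image textures, perspective normalization) has already happened upstream; the features are random stand-ins and the threshold choice is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a mixture model on features from normal training footage, then flag
# test frames whose log-likelihood under that model falls below a threshold.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(5000, 16))   # stand-in "normal" feature vectors
test_features = rng.normal(size=(200, 16))     # stand-in test feature vectors

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(train_features)

scores = gmm.score_samples(test_features)                          # per-frame log-likelihood
threshold = np.percentile(gmm.score_samples(train_features), 1)    # e.g. 1st percentile of normal scores
anomalous = scores < threshold
print(f"{anomalous.sum()} of {len(test_features)} frames flagged as abnormal")
```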
Abstract:
Business process analysis and process mining, particularly within the health care domain, remain under-utilised. Applied research that applies such techniques to routinely collected health care data enables stakeholders to empirically investigate care as it is delivered by different health providers. However, cross-organisational mining and the comparative analysis of processes present a set of unique challenges in terms of ensuring population and activity comparability, visualising the mined models and interpreting the results. Without addressing these issues, health providers will find it difficult to use process mining insights, and the potential benefits of evidence-based process improvement within health will remain unrealised. In this paper, we present a brief introduction to the nature of health care processes; a review of the process mining in health literature; and a case study conducted to explore how health care data and cross-organisational comparisons may be approached with process mining techniques. The case study applies process mining techniques to administrative and clinical data for patients who present with chest pain symptoms at one of four public hospitals in South Australia. We demonstrate an approach that provides detailed insights into clinical (quality of patient health) and fiscal (hospital budget) pressures in health care practice. We conclude by discussing the key lessons learned from our experience in conducting business process analysis and process mining based on the data from four different hospitals.
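As a small illustration of the kind of cross-organisational comparison involved (column names and records are hypothetical, not the South Australian data), the sketch below derives directly-follows counts per hospital from an administrative event log.

```python
import pandas as pd

# Derive directly-follows relations per hospital from a toy event log.
log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c1", "c2", "c2"],
    "hospital": ["A",  "A",  "A",  "B",  "B"],
    "activity": ["Triage", "ECG", "Discharge", "Triage", "Admit"],
    "timestamp": pd.to_datetime(["2020-01-01 08:00", "2020-01-01 08:20",
                                 "2020-01-01 10:00", "2020-01-02 09:00",
                                 "2020-01-02 09:40"]),
})

log = log.sort_values(["case_id", "timestamp"])
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
dfg = (log.dropna(subset=["next_activity"])
          .groupby(["hospital", "activity", "next_activity"])
          .size().rename("count").reset_index())
print(dfg)   # one row per hospital and activity-to-activity transition
```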
Abstract:
A critical requirement for safe autonomous navigation of a planetary rover is the ability to accurately estimate the traversability of the terrain. This work considers the problem of predicting the attitude and configuration angles of the platform from terrain representations that are often incomplete due to occlusions and sensor limitations. Using Gaussian Processes (GP) and exteroceptive data as training input, we can provide a continuous and complete representation of terrain traversability, with uncertainty in the output estimates. In this paper, we propose a novel method that focuses on exploiting the explicit correlation in vehicle attitude and configuration during operation by learning a kernel function from vehicle experience to perform GP regression. We provide an extensive experimental validation of the proposed method on a planetary rover. We show significant improvement in the accuracy of our estimation compared with results obtained using standard kernels (Squared Exponential and Neural Network), and compared to traversability estimation made over terrain models built using state-of-the-art GP techniques.
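As a point of reference for the standard-kernel baseline mentioned above (not the experience-learned kernel proposed in the paper), the sketch below fits a GP with a squared-exponential kernel to synthetic terrain features and returns predictive means with uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Baseline GP regression mapping local terrain features to a platform
# attitude angle, with predictive uncertainty. Data here are synthetic
# stand-ins for exteroceptive terrain features and measured pitch.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 3))                           # e.g. slope/roughness features
y_train = np.sin(X_train[:, 0]) + 0.05 * rng.standard_normal(200)     # e.g. pitch angle

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-2)  # squared-exponential + noise
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_query = rng.uniform(-1, 1, size=(5, 3))
mean, std = gp.predict(X_query, return_std=True)   # estimate and its uncertainty
print(np.c_[mean, std])
```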
Abstract:
A typology of music distribution models is proposed consisting of the ownership model, the access model, and the context model. These models are not substitutes for each other and may co-exist serving different market niches. The paper argues that increasingly the economic value created from recorded music is based on context rather than on ownership. During this process, access-based services temporarily generate economic value, but such services are destined to eventually become commoditised.