895 results for Tilted-time window model
Abstract:
Background Extracorporeal membrane oxygenation (ECMO) circuits have been shown to sequester circulating blood compounds such as drugs based on their physicochemical properties. This study aimed to describe the disposition of macro- and micronutrients in simulated ECMO circuits. Methods Following baseline sampling, known quantities of macro- and micronutrients were injected post oxygenator into ex vivo ECMO circuits primed with fresh human whole blood and maintained under standard physiologic conditions. Serial blood samples were then obtained at 1, 30 and 60 min and at 6, 12 and 24 h after the addition of nutrients, to measure the concentrations of study compounds using validated assays. Results Twenty-one samples were tested for thirty-one nutrient compounds. There were significant reductions (p < 0.05) in circuit concentrations of some amino acids [alanine (10%), arginine (95%), cysteine (14%), glutamine (25%) and isoleucine (7%)], vitamins [A (42%) and E (6%)] and glucose (42%) over 24 h. Significant increases in circuit concentrations (p < 0.05) were observed over time for many amino acids, zinc and vitamin C. There were no significant reductions in total protein, triglyceride, total cholesterol, selenium, copper, manganese or vitamin D concentrations within the ECMO circuit over the 24-h period. No clear correlation could be established between the physicochemical properties and circuit behaviour of the tested nutrients. Conclusions Significant alterations in macro- and micronutrient concentrations were observed in this single-dose ex vivo circuit study. Most significantly, there is potential for circuit loss of the essential amino acid isoleucine and of the lipid-soluble vitamins A and E in the ECMO circuit, and the mechanisms for this need further exploration. While the reductions in glucose concentrations and the increases in other macro- and micronutrient concentrations probably reflect cellular metabolism and breakdown, the decrements in arginine and glutamine concentrations may be attributed to their enzymatic conversion to ornithine and glutamate, respectively. While the results are generally reassuring from a macronutrient perspective, prospective studies in clinical subjects are indicated to further evaluate the influence of the ECMO circuit on micronutrient concentrations and clinical outcomes.
Abstract:
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
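As an illustration of why the log-quadratic form is statistically convenient, the sketch below fits log10 N(t) = b0 + b1·t + b2·t² by ordinary least squares and derives a DT-like value from the fitted slope. The survivor data and variable names are invented for the example, not taken from the paper.

```python
import numpy as np

# Illustrative survivor data: heating time (min) vs. log10 CFU/mL.
# Values are made up for this sketch; real data would come from assays.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
log_n = np.array([6.0, 5.1, 4.3, 3.7, 3.2, 2.6, 2.3, 2.1])  # shows "tailing"

# Log-quadratic model: log10 N(t) = b0 + b1*t + b2*t**2 is an ordinary
# least-squares problem in the design matrix [1, t, t^2].
X = np.column_stack([np.ones_like(t), t, t**2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, log_n, rcond=None)

# A DT-like value: the instantaneous time for a 1-log reduction,
# -1 / d(log10 N)/dt, evaluated here at t = 0.
dt_like = -1.0 / (b1 + 2.0 * b2 * 0.0)
print(f"fit: {b0:.2f} + {b1:.2f} t + {b2:.2f} t^2; DT-like at t=0: {dt_like:.2f} min")
```

Because the model is linear in its coefficients, standard least-squares machinery also yields confidence intervals for fitted values, which is the simplicity the abstract highlights.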
Abstract:
The reliable response to weak biological signals requires that they be amplified with fidelity. In E. coli, the flagellar motors that control swimming can switch direction in response to very small changes in the concentration of the signaling protein CheY-P, but how this works is not well understood. A recently proposed allosteric model based on cooperative conformational spread in a ring of identical protomers seems promising, as it is able to qualitatively reproduce the switching, locked-state behavior and Hill coefficient values measured for the rotary motor. In this paper we undertook a comprehensive simulation study to analyze the behavior of this model in detail and made predictions on three experimentally observable quantities: the switch time distribution, the locked-state interval distribution, and the Hill coefficient of the switch response. We parameterized the model using experimental measurements, finding excellent agreement with published data on motor behavior. Analysis of the simulated switching dynamics revealed a mechanism for chemotactic ultrasensitivity, in which cooperativity is indispensable for realizing both coherent switching and effective amplification. These results show how cells can combine elements of analog and digital control to produce switches that are simultaneously sensitive and reliable. © 2012 Ma et al.
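To make the conformational-spread idea concrete, here is a minimal Metropolis-style sketch of a ring of two-state protomers with nearest-neighbour coupling. The ring size, coupling energy and activation bias are illustrative stand-ins, not the parameter values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 34          # protomers in the ring (FliM copy number is ~34; illustrative)
J = 4.0         # coupling penalty per unlike neighbour pair, in kT (assumed)
dE_act = 0.5    # activation bias per protomer, in kT (assumed; set by CheY-P)

state = np.zeros(N, dtype=int)   # 0 = CCW conformation, 1 = CW

def flip_cost(s, i):
    """Energy change (kT) for flipping protomer i: activation term plus
    J for each neighbour pair that becomes unlike."""
    left, right = s[(i - 1) % N], s[(i + 1) % N]
    new = 1 - s[i]
    d_coupling = J * ((new != left) + (new != right) - (s[i] != left) - (s[i] != right))
    d_act = dE_act if new == 1 else -dE_act
    return d_act + d_coupling

frac_cw = []
for step in range(200_000):              # Metropolis dynamics on the ring
    i = rng.integers(N)
    dE = flip_cost(state, i)
    if dE <= 0 or rng.random() < np.exp(-dE):
        state[i] = 1 - state[i]
    frac_cw.append(state.mean())

# Coherent switching shows up as the ring spending long intervals near a CW
# fraction of 0 or 1, with rapid transitions between the two locked states.
print("mean CW fraction:", np.mean(frac_cw))
```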
Abstract:
This study started with the aim of developing an approach that will help designers create interfaces that are more intuitive for older adults to use. Two objectives were set for this study: 1) to investigate one of the possible strategies for developing intuitive interfaces for older people; and 2) to investigate factors that could interfere with intuitive use. This paper briefly presents the outcomes of the two experiments and how they led to the development of an adaptable interface design model that will help designers develop interfaces that are intuitive to learn and, over time, intuitive to use for users with diverse prior technology experience and cognitive abilities.
Abstract:
Background The benign reputation of Plasmodium vivax is at odds with the burden and severity of the disease. This reputation, combined with restricted in vitro techniques, has slowed efforts to gain an understanding of the parasite's biology and its interaction with the human host. Methods A simulation model of the within-host dynamics of P. vivax infection is described, incorporating distinctive characteristics of the parasite such as the preferential invasion of reticulocytes and hypnozoite production. The developed model is fitted using digitized time series from historic neurosyphilis studies, and subsequently validated against summary statistics from a larger study of the same population. The Chesson relapse pattern was used to demonstrate the impact of released hypnozoites. Results The typical pattern for the dynamics of the parasite population is a rapid exponential increase in the first 10 days, followed by a gradual decline. Gametocyte counts follow a similar trend, but are approximately two orders of magnitude lower. The model predicts that, on average, an infected naïve host in the absence of treatment becomes infectious 7.9 days post patency and is infectious for a mean of 34.4 days. In the absence of treatment, the effect of hypnozoite release was not apparent, as newly released parasites were obscured by the existing infection. Conclusions The results from the model provide useful insights into the dynamics of P. vivax infection in human hosts, in particular the timing of host infectiousness and the role of the hypnozoite in perpetuating infection.
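A minimal sketch of the kind of bookkeeping such a within-host model performs, with reticulocyte-limited invasion, gametocyte commitment and a strengthening immune response. All parameter values are invented for illustration and are not the fitted values from the study.

```python
import numpy as np

# Minimal daily-step sketch of P. vivax within-host dynamics (illustrative
# parameters only, not the values fitted in the paper).
days = 60
multiplication = 8.0      # merozoites per rupturing schizont that re-invade
reticulocytes = 3e9       # susceptible young red cells in the target pool
gam_fraction = 0.01       # fraction of parasites committed to gametocytes
clearance = 0.15          # baseline per-day immune clearance

parasites, gametocytes = 10.0, 0.0
traj = []
for day in range(days):
    # Invasion is limited by the reticulocyte pool (simple saturation).
    invasion = multiplication * parasites * reticulocytes / (reticulocytes + parasites * 1e3)
    immune = clearance * (1 + day / 10)      # slowly strengthening response
    parasites = max(invasion * np.exp(-immune), 0.0)
    gametocytes = 0.9 * gametocytes + gam_fraction * parasites
    traj.append((day, parasites, gametocytes))

# Expected shape: rapid exponential rise over the first ~10 days, then a
# gradual decline, with gametocytes ~2 orders of magnitude below parasites.
peak_day = max(traj, key=lambda r: r[1])[0]
print("peak parasitaemia on day", peak_day)
```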
Abstract:
This paper presents an extension to the Rapidly-exploring Random Tree (RRT) algorithm applied to autonomous, drifting underwater vehicles. The proposed algorithm is able to plan paths that guarantee convergence in the presence of time-varying ocean dynamics. The method utilizes 4-dimensional ocean model prediction data as an evolving basis for expanding the tree from the start location to the goal. The performance of the proposed method is validated through Monte Carlo simulations. The results illustrate the importance of temporal variance in path execution and demonstrate the convergence guarantee of the proposed method.
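A sketch of the core idea, assuming a simple kinematic vehicle: each tree node carries a time stamp, and the steering step integrates the commanded velocity plus the predicted current at (x, y, t). The current() function below is a placeholder for interpolated 4-D ocean model data.

```python
import math, random

# Sketch of a time-aware RRT extend step for a drifting underwater vehicle.
# Nodes carry (x, y, t); current() stands in for 4-D ocean model predictions.
def current(x, y, t):
    """Placeholder for the interpolated ocean-model velocity at (x, y, t)."""
    return 0.3 * math.sin(t / 3600.0), 0.1        # (u, v) in m/s, illustrative

def extend(tree, goal, v_max=1.0, dt=60.0, bounds=(0.0, 10_000.0)):
    """One RRT iteration: sample (goal-biased), find the nearest node, and
    steer toward the sample while drifting with the predicted current."""
    if random.random() < 0.1:                     # goal bias
        sample = goal
    else:
        sample = (random.uniform(*bounds), random.uniform(*bounds))
    nearest = min(tree, key=lambda n: (n[0] - sample[0])**2 + (n[1] - sample[1])**2)
    x, y, t = nearest
    dx, dy = sample[0] - x, sample[1] - y
    dist = math.hypot(dx, dy) or 1e-9
    vx, vy = v_max * dx / dist, v_max * dy / dist # commanded velocity
    u, v = current(x, y, t)                       # drift from the ocean model
    new = (x + (vx + u) * dt, y + (vy + v) * dt, t + dt)
    tree.append(new)
    return new

tree = [(0.0, 0.0, 0.0)]                          # start node at t = 0
for _ in range(500):
    extend(tree, goal=(9_000.0, 9_000.0))
print("tree size:", len(tree))
```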
Abstract:
Continuously changing impacts appear in all solution approaches in the project management field (especially in construction) that adopt dynamic solution paths. This paper defines what we argue to be a better relational model for the project management constraints (time, cost, and scope). The new model will increase the success factors of any complex program or project. This is qualitative research that adopts a new avenue of investigation: attributing project activities with social phenomena and supporting the phenomenon with field observations rather than mathematical methods, drawing solutions from the successful practices of humans and ant colonies. The results show the correct approach to relating the triple constraints, treating the relation as a multi-agent system with specified communication channels based on agent locations. Information is transferred between agents, and actions are taken based on the locations of constraint agents in the project structure, allowing immediate changes in order to overcome issues of over-budget, behind-schedule, and additional-scope impacts. The result is a complex adaptive system with self-organizing techniques and cybernetic control. The resulting model can be used to improve existing project management methodologies.
Abstract:
Background: This study attempted to develop health risk-based metrics for defining a heatwave in Brisbane, Australia. Methods: A Poisson generalised additive model was used to assess the impact of heatwaves on mortality and emergency hospital admissions (EHAs) in Brisbane. Results: In general, the higher the intensity and the longer the duration of a heatwave, the greater the health impacts. There was no apparent difference in EHA risk during different periods of a warm season. However, there was a greater risk of mortality in the second half of a warm season than in the first half. While the elderly (>75 years) were particularly vulnerable to both the EHA and mortality effects of a heatwave, the risk of EHAs also significantly increased for two other age groups (0-64 years and 65-74 years) during severe heatwaves. Different patterns between cardiorespiratory mortality and EHAs were observed. Based on these findings, we propose the use of a tiered heat warning system based on the health risk of a heatwave. Conclusions: Health risk-based metrics are a useful tool for the development of local heatwave definitions. This tool may have significant implications for the assessment of heatwave-related health consequences and the development of heatwave response plans and implementation strategies.
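As an illustration of the modelling approach (not the paper's exact specification), a Poisson generalised additive model can be approximated with a spline basis for temperature plus a heatwave-duration term. The data and column names below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily series: maximum temperature, heatwave duration (consecutive
# hot days so far) and death counts. Real inputs would be Brisbane records.
rng = np.random.default_rng(1)
n = 730
tmax = 24 + 10 * rng.random(n)
duration = np.where(tmax > 32, rng.integers(1, 6, n), 0)
deaths = rng.poisson(np.exp(1.5 + 0.04 * np.maximum(tmax - 32, 0) * (1 + duration)))
df = pd.DataFrame({"deaths": deaths, "tmax": tmax, "duration": duration})

# Poisson model with a smooth (spline) temperature effect plus a linear
# duration term: a simplification of a GAM, for illustration only.
model = smf.glm("deaths ~ bs(tmax, df=4) + duration",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```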
An external field prior for the hidden Potts model with application to cone-beam computed tomography
Abstract:
In images with low contrast-to-noise ratio (CNR), the information gain from the observed pixel values can be insufficient to distinguish foreground objects. A Bayesian approach to this problem is to incorporate prior information about the objects into a statistical model. A method for representing spatial prior information as an external field in a hidden Potts model is introduced. This prior distribution over the latent pixel labels is a mixture of Gaussian fields, centred on the positions of the objects at a previous point in time. It is particularly applicable in longitudinal imaging studies, where the manual segmentation of one image can be used as a prior for automatic segmentation of subsequent images. The method is demonstrated by application to cone-beam computed tomography (CT), an imaging modality that exhibits distortions in pixel values due to X-ray scatter. The external field prior results in a substantial improvement in segmentation accuracy, reducing the mean pixel misclassification rate for an electron density phantom from 87% to 6%. The method is also applied to radiotherapy patient data, demonstrating how to derive the external field prior in a clinical context.
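A minimal sketch of how such an external field prior can enter a Gibbs update for the latent labels: the Potts neighbour term, the Gaussian field centred on each object's previous position, and the pixel likelihood combine additively on the log scale. All dimensions, centres and intensity parameters here are illustrative.

```python
import numpy as np

# Sketch of one Gibbs sweep for a hidden Potts model with an external field
# prior: a Gaussian bump centred on each object's position in the previous
# image (all coordinates and parameters are illustrative).
rng = np.random.default_rng(2)
H, W, K = 64, 64, 3
beta = 0.8                                     # Potts smoothing strength
prev_centres = [(16, 16), (32, 40), (50, 20)]  # object centres from prior scan
mu, sigma = np.array([0.2, 0.5, 0.8]), 0.1     # per-label intensity model

y = rng.random((H, W))                         # observed (noisy) pixel values
z = rng.integers(K, size=(H, W))               # current label field

yy, xx = np.mgrid[0:H, 0:W]
# External field: log-prior for each label from distance to its old centre.
log_field = np.stack([-((yy - cy)**2 + (xx - cx)**2) / (2 * 12.0**2)
                      for cy, cx in prev_centres])

def gibbs_update(i, j):
    """Resample label z[i, j] from its full conditional distribution."""
    neighbours = [z[i2, j2] for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                  if 0 <= i2 < H and 0 <= j2 < W]
    log_p = np.array([beta * neighbours.count(k)               # Potts term
                      + log_field[k, i, j]                     # external field
                      - (y[i, j] - mu[k])**2 / (2 * sigma**2)  # likelihood
                      for k in range(K)])
    p = np.exp(log_p - log_p.max())
    z[i, j] = rng.choice(K, p=p / p.sum())

for i in range(H):
    for j in range(W):
        gibbs_update(i, j)
```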
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model that may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that can determine the observation variance from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method, respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionosphere conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
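A sketch of the time-differenced step under simple assumptions: differencing consecutive epochs cancels the constant ambiguity and most of the slow geometry/ionosphere drift, leaving approximately doubled white noise, so the single-epoch variance is half the variance of the differenced series. The series below is synthetic.

```python
import numpy as np

# Sketch of time-differenced noise estimation from a single receiver
# (synthetic series; a real input is a carrier-phase stream in metres).
rng = np.random.default_rng(3)
epochs = 600
trend = 1e-5 * np.arange(epochs)**1.5        # slow geometry/ionosphere drift
ambiguity = 7.345                            # constant while no cycle slip
phase = trend + ambiguity + rng.normal(0, 0.003, epochs)  # 3 mm white noise

# A single time difference removes the constant ambiguity and most of the
# slow drift; what remains is (approximately) doubled white noise.
d = np.diff(phase)
sigma = np.sqrt(np.var(d, ddof=1) / 2)       # var(x_t - x_{t-1}) = 2*sigma^2
print(f"estimated noise sigma: {sigma*1000:.2f} mm (true 3.00 mm)")
```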
Abstract:
The fractional Fokker-Planck equation is an important physical model for simulating anomalous diffusion with external forces. Because of the non-local property of the fractional derivative, an interesting problem is to explore high-accuracy numerical methods for fractional differential equations. In this paper, a space-time spectral method is presented for the numerical solution of the time fractional Fokker-Planck initial-boundary value problem. The proposed method employs Jacobi polynomials for the temporal discretization and Fourier-like basis functions for the spatial discretization. Because the Fourier-like basis functions are diagonalizable, the inner product in the Galerkin analysis admits a reduced representation. We prove that the method attains the same approximation order for the time fractional Fokker-Planck equation as that developed for the time fractional diffusion equation in [23]. This indicates that exponential decay of the error may be achieved if the exact solution is sufficiently smooth. Finally, some numerical results are given to demonstrate the high-order accuracy and efficiency of the new scheme. The results show that the errors of the numerical solutions obtained by the space-time spectral method decay exponentially.
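For orientation, one commonly used form of the time fractional Fokker-Planck equation (after Metzler and Klafter; the paper's exact formulation may differ) is:

```latex
% A standard form of the time fractional Fokker-Planck equation
% (after Metzler & Klafter); the paper's formulation may differ.
\frac{\partial u(x,t)}{\partial t}
  = {}_{0}D_{t}^{1-\alpha}
    \left[ \frac{\partial}{\partial x}\,\frac{V'(x)}{m\,\eta_{\alpha}}
         + K_{\alpha}\,\frac{\partial^{2}}{\partial x^{2}} \right] u(x,t),
\qquad 0 < \alpha < 1,
```

where V(x) is the external potential, \eta_\alpha and K_\alpha are generalized friction and diffusion coefficients, and {}_0 D_t^{1-\alpha} is the Riemann-Liouville fractional derivative.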
Abstract:
Recent changes in the aviation industry and in the expectations of travellers have begun to alter the way we approach our understanding, and thus the segmentation, of airport passengers. The key to successful segmentation of any population lies in the selection of the criteria on which the partitions are based. Increasingly, the basic criteria used to segment passengers (purpose of trip and frequency of travel) no longer provide adequate insights into the passenger experience. In this paper, we propose a new model for passenger segmentation based on the passengers' core value: time. The results are based on qualitative research conducted in situ at the Brisbane International Terminal during 2012-2013. Based on our research, a relationship between time sensitivity and degree of passenger engagement was identified. This relationship was used as the basis for a new passenger segmentation model comprising four segments: Airport Enthusiast (engaged, not time sensitive); Time Filler (not engaged, not time sensitive); Efficiency Lover (not engaged, time sensitive); and Efficient Enthusiast (engaged, time sensitive). The outcomes of this research extend theoretical knowledge about passenger experience in the terminal environment. These new insights can ultimately be used to optimise the allocation of space in future terminal planning and design.
Abstract:
The development of methods for real-time crash prediction as a function of current or recent traffic and roadway conditions is gaining increasing attention in the literature. Numerous studies have modeled the relationships between traffic characteristics and crash occurrence, and significant progress has been made. Given the accumulated evidence on this topic and the lack of an articulate summary of research status, challenges, and opportunities, there is an urgent need to scientifically review these studies and to synthesize the existing state-of-the-art knowledge. This paper addresses this need by undertaking a systematic literature review to identify current knowledge, challenges, and opportunities, and then conducts a meta-analysis of existing studies to provide a summary impact of traffic characteristics on crash occurrence. Sensitivity analyses were conducted to assess quality, publication bias, and outlier bias of the various studies; and the time intervals used to measure traffic characteristics were also considered. As a result of this comprehensive and systematic review, issues in study designs, traffic and crash data, and model development and validation are discussed. Outcomes of this study are intended to provide researchers focused on real-time crash prediction with greater insight into the modeling of this important but extremely challenging safety issue.
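For readers unfamiliar with how a "summary impact" is computed, the sketch below shows a standard random-effects (DerSimonian-Laird) pooling of study effects. It is a generic illustration, not necessarily the exact procedure used in this review, and the study effects are invented.

```python
import numpy as np

# Minimal DerSimonian-Laird random-effects pooling, the kind of computation
# behind a "summary impact" estimate (not necessarily this paper's exact
# method). Inputs are per-study log odds ratios and their variances.
log_or = np.array([0.42, 0.15, 0.60, 0.33, 0.51])   # illustrative studies
var = np.array([0.04, 0.02, 0.09, 0.03, 0.06])

w = 1 / var                                          # fixed-effect weights
fixed = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fixed)**2)                  # Cochran's Q
df = len(log_or) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                        # between-study variance

w_re = 1 / (var + tau2)                              # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled log OR: {pooled:.3f} (95% CI {pooled-1.96*se:.3f}..{pooled+1.96*se:.3f})")
```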
Abstract:
Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
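A toy sketch of the compare-and-align loop, assuming traces can be checked against the behaviour a model allows. Real implementations use alignment-based conformance checking over full process models, and every name below is illustrative.

```python
# Toy sketch of keeping a model collection "in sync" with observed behaviour:
# replay event-log traces against the activity sequences a model allows and
# flag models whose fitness drops (real systems use alignment-based
# conformance checking; all names here are illustrative).
modeled_traces = {
    "order_to_cash": [("receive", "check", "ship", "invoice"),
                      ("receive", "check", "reject")],
}
event_log = [
    ("receive", "check", "ship", "invoice"),
    ("receive", "ship", "invoice"),          # deviates: skipped "check"
    ("receive", "check", "reject"),
]

def fitness(model_name, log):
    """Fraction of observed traces the model reproduces exactly."""
    allowed = set(modeled_traces[model_name])
    return sum(t in allowed for t in log) / len(log)

f = fitness("order_to_cash", event_log)
if f < 0.9:   # threshold below which the model is considered out of sync
    print(f"fitness {f:.2f}: flag 'order_to_cash' for re-discovery/repair")
```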