60 results for pattern-mixture model
Abstract:
The purpose of this study was to develop a newborn piglet model of hypoxia/ischaemia which would better emulate the clinical situation in the asphyxiated human neonate and produce a consistent degree of histopathological injury following the insult. One-day-old piglets (n = 18) were anaesthetised with a mixture of propofol (10 mg/kg/h) and alfentanil (5.5 μg/kg/h) i.v. The piglets were intubated and ventilated. Physiological variables were monitored continuously. Hypoxia was induced by decreasing the inspired oxygen fraction (FiO2) to 3-4% and adjusting FiO2 to maintain the cerebral function monitor peak amplitude at ≤5 μV. The duration of the mild insult was 20 min, while the severe insult was 30 min, which included 10 min during which the blood pressure was allowed to fall below 70% of baseline. Control piglets (n = 4 of 18) were subjected to the same protocol except for the hypoxic/ischaemic insult. The piglets were allowed to recover from anaesthesia and were euthanased 72 h after the insult. The brains were perfusion-fixed, removed and embedded in paraffin. Coronal sections were stained with haematoxylin/eosin. A blinded observer examined the frontal and parietal cortex, hippocampus, basal ganglia, thalamus and cerebellum for the degree of damage. The total mean histology score for the five areas of the brain for the severe insult was 15.6 ± 4.4 (mean ± S.D., n = 7), whereas no damage was seen in either the mild insult (n = 4) or control groups. This 'severe damage' model produces a consistent level of damage and will prove useful for examining potential neuroprotective therapies in the neonatal brain. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The majority of the world's population now resides in urban environments, and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate spatial resolution satellites such as Landsat, the Indian Resource Satellite and Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amount of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution, spectral mixture analysis approach with the moderate spatial resolution image data ensured the processing model matched the spectrally heterogeneous nature of the urban environments at the scale of Landsat Thematic Mapper data.
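To make the constrained linear mixture analysis concrete, the sketch below unmixes a single pixel into vegetation/impervious/soil fractions with non-negativity and an approximate sum-to-one constraint. The endmember reflectances and band count are hypothetical illustrative values, not those used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, weight=1e3):
    """Constrained linear unmixing: non-negative fractions that approximately
    sum to one, enforced via a heavily weighted extra row."""
    n_end = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_end))])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical 6-band reflectances for vegetation, impervious surface and soil.
E = np.array([[0.03, 0.25, 0.12],
              [0.05, 0.28, 0.18],
              [0.04, 0.30, 0.25],
              [0.45, 0.32, 0.33],
              [0.22, 0.35, 0.42],
              [0.10, 0.33, 0.38]])
pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]
print(unmix_pixel(pixel, E))   # roughly [0.5, 0.3, 0.2]
```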
Abstract:
Viewed on a hydrodynamic scale, flames in experiments are often thin so that they may be described as gasdynamic discontinuities separating the dense cold fresh mixture from the light hot burned products. The original model of a flame as a gasdynamic discontinuity was due to Darrieus and to Landau. In addition to the fluid dynamical equations, the model consists of a flame speed relation describing the evolution of the discontinuity surface, and jump conditions across the surface which relate the fluid variables on the two sides of the surface. The Darrieus-Landau model predicts, in contrast to observations, that a uniformly propagating planar flame is absolutely unstable and that the strength of the instability grows with increasing perturbation wavenumber so that there is no high-wavenumber cutoff of the instability. The model was modified by Markstein to exhibit a high-wavenumber cutoff if a phenomenological constant in the model has an appropriate sign. Both models are postulated, rather than derived from first principles, and both ignore the flame structure, which depends on chemical kinetics and transport processes within the flame. At present, there are two models which have been derived, rather than postulated, and which are valid in two non-overlapping regions of parameter space. Sivashinsky derived a generalization of the Darrieus-Landau model which is valid for Lewis numbers (ratio of thermal diffusivity to mass diffusivity of the deficient reaction component) bounded away from unity. Matalon & Matkowsky derived a model valid for Lewis numbers close to unity. Each model has its own advantages and disadvantages. Under appropriate conditions the Matalon-Matkowsky model exhibits a high-wavenumber cutoff of the Darrieus-Landau instability. However, since the Lewis numbers considered lie too close to unity, the Matalon-Matkowsky model does not capture the pulsating instability. The Sivashinsky model does capture the pulsating instability, but does not exhibit its high-wavenumber cutoff. In this paper, we derive a model consisting of a new flame speed relation and new jump conditions, which is valid for arbitrary Lewis numbers. It captures the pulsating instability and exhibits the high-wavenumber cutoff of all instabilities. The flame speed relation includes the effect of short wavelengths, not previously considered, which leads to stabilizing transverse surface diffusion terms.
Abstract:
The kinetics of chain reactions of octanedithiol with styrene, thermally initiated with TX29B50 (a 50:50 wt% solution of TX29 diperoxy initiator in a phthalate plasticizer), have been studied over a range of initiator concentrations, a range of mixture formulations and a range of temperatures. This system has been investigated as a model system for the reactions of polyfunctional thiols with divinyl benzene. The reactions have been shown to follow first-order kinetics for both the thiol and the ene species and to be characterized by a dependence on the initiator concentration to the power of one half. The kinetic rate parameters have been shown to adhere to Arrhenius behaviour. A kinetic model for the chain reactions for this system has been proposed. (C) 2003 Society of Chemical Industry.
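As a rough illustration of the reported rate law (first-order decay of the thiol, a square-root dependence on initiator concentration, and Arrhenius temperature dependence), the following sketch computes the remaining thiol fraction under a pseudo-first-order approximation. The pre-exponential factor, activation energy and initiator concentration are placeholder values, not the fitted parameters from the paper.

```python
import numpy as np

A = 1.0e7    # pre-exponential factor, 1/(s * M**0.5)  (hypothetical)
Ea = 60e3    # activation energy, J/mol                 (hypothetical)
R = 8.314    # gas constant, J/(mol K)

def k_obs(T, I0):
    """Observed pseudo-first-order rate constant: Arrhenius in T, [I]**0.5 in initiator."""
    return A * np.exp(-Ea / (R * T)) * np.sqrt(I0)

def thiol_fraction(t, T, I0):
    """Fraction of thiol remaining after time t (s) at temperature T (K)."""
    return np.exp(-k_obs(T, I0) * t)

print(thiol_fraction(t=600.0, T=353.0, I0=0.01))
```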
Abstract:
The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, due to a certain proportion of patients sustaining a longer stay. However, because the morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate the inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimation. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration. (C) 2003 Elsevier Science Ltd. All rights reserved.
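A heavily simplified sketch of the finite-mixture idea is shown below: a two-component Gaussian mixture fitted to log(LOS) separates a short-stay majority from a long-stay subgroup and yields an estimated long-stay proportion. It deliberately omits the covariates and the hospital-level random effects that the paper's GLMM/EM approach handles, and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
los = np.concatenate([rng.lognormal(1.0, 0.3, 800),    # synthetic short stays (days)
                      rng.lognormal(2.3, 0.5, 200)])   # synthetic long stays (days)

# Two-component mixture on the log scale; the component with the larger mean
# is interpreted as the long-stay subgroup.
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(los).reshape(-1, 1))
long_comp = int(np.argmax(gm.means_.ravel()))
print("estimated long-stay proportion:", round(gm.weights_[long_comp], 3))
```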
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be used to train multilayer perceptron (MLP) and mixture of experts (ME) networks in applications to multiclass classification. We identify some situations where the application of the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it is reported in the literature that networks trained by the EM algorithm using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. Also, we propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to the IRLS algorithm on some simulated and real data sets.
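The following toy sketch illustrates the damping idea discussed above: an IRLS (Newton) update for logistic regression scaled by a learning rate smaller than one. It uses a binary problem and synthetic data for brevity, rather than the multiclass gating and expert networks of an ME architecture.

```python
import numpy as np

def damped_irls_logistic(X, y, lr=0.5, n_iter=50):
    """Logistic regression fitted by IRLS steps scaled by a learning rate < 1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        W = p * (1.0 - p)                            # IRLS working weights
        H = X.T @ (X * W[:, None]) + 1e-6 * np.eye(X.shape[1])
        grad = X.T @ (y - p)
        w += lr * np.linalg.solve(H, grad)           # damped Newton/IRLS step
    return w

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_w = np.array([-0.5, 1.5, -2.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
print(damped_irls_logistic(X, y, lr=0.5))   # should approach true_w
```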
Abstract:
A hydrogel intervertebral disc (IVD) model consisting of an inner nucleus core and an outer anulus ring was manufactured from 30 and 35% by weight poly(vinyl alcohol) hydrogel (PVA-H) concentrations and subjected to axial compression between saturated porous endplates at 200 N for 11 h 30 min. Repeat experiments (n = 4) on different samples (N = 2) show good reproducibility of fluid loss and axial deformation. An axisymmetric nonlinear poroelastic finite element model with variable permeability was developed using commercial finite element software to compare axial deformation and predicted fluid loss with experimental data. The FE predictions indicate differential fluid loss similar to that of biological IVDs, with the nucleus losing more water than the anulus, and there is overall good agreement between experimental and finite element predicted fluid loss. The stress distribution pattern indicates important similarities with the biological IVD, including stress transference from the nucleus to the anulus upon sustained loading, and renders it suitable as a model that can be used in future studies to better understand the role of fluid and stress in biological IVDs. (C) 2005 Springer Science + Business Media, Inc.
Abstract:
Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. Results: By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
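A minimal sketch of the two-component idea is given below: fit a mixture of a null and a non-null normal component to the gene-wise z-scores by EM, then report the posterior probability that each gene is null. Fixing the null component at N(0, 1) and using a single normal alternative are simplifying assumptions of this sketch, and the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def posterior_null_prob(z, n_iter=200):
    """EM for f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sig1^2); returns tau0,
    the posterior probability that each gene is null."""
    pi0, mu1, sig1 = 0.9, float(np.mean(z)), float(np.std(z)) * 2.0
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * norm.pdf(z, mu1, sig1)
        tau0 = f0 / (f0 + f1)                          # E-step
        w = 1.0 - tau0                                 # M-step
        pi0 = tau0.mean()
        mu1 = np.sum(w * z) / np.sum(w)
        sig1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
    return tau0

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])  # synthetic z-scores
tau0 = posterior_null_prob(z)
print("genes flagged as non-null:", int(np.sum(tau0 < 0.1)))
```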
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using the batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
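For reference, the two evaluation criteria used above are straightforward to compute; the sketch below does so for a handful of made-up LOS values.

```python
import numpy as np

def los_metrics(y_true, y_pred):
    """Mean absolute difference (MAD, in days) and Prop(MAD < 1),
    the proportion of predictions within one day of the actual LOS."""
    ad = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return ad.mean(), np.mean(ad < 1.0)

# Toy actual vs. predicted LOS values (days), for illustration only.
mad, prop = los_metrics([2, 3, 5, 1, 4], [2.4, 4.6, 3.9, 1.2, 4.1])
print(f"MAD = {mad:.2f} days, Prop(MAD < 1) = {prop:.0%}")
```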
Abstract:
Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed species plantations of Australian tropical trees. The 'spatially explicit individual-based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to ±50% of stem volume at 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations. (c) 2006 Published by Elsevier B.V.
Abstract:
Molecular dynamics simulations have been used to study the phase behavior of a dipalmitoylphosphatidylcholine (DPPC)/palmitic acid (PA)/water 1:2:20 mixture in atomic detail. Starting from a random solution of DPPC and PA in water, the system adopts either a gel phase at temperatures below ~330 K or an inverted hexagonal phase above ~330 K, in good agreement with experiment. It has also been possible to observe the direct transformation from a gel to an inverted hexagonal phase at elevated temperature (~390 K). During this transformation, a metastable fluid lamellar intermediate is observed. Interlamellar connections or stalks form spontaneously on a nanosecond time scale and subsequently elongate, leading to the formation of an inverted hexagonal phase. This work opens the possibility of studying in detail how the formation of nonlamellar phases is affected by lipid composition and (fusion) peptides and, thus, is an important step toward understanding related biological processes, such as membrane fusion.
Abstract:
We have developed an alignment-free method that calculates phylogenetic distances using a maximum-likelihood approach for a model of sequence change on patterns that are discovered in unaligned sequences. To evaluate the phylogenetic accuracy of our method, and to conduct a comprehensive comparison of existing alignment-free methods (freely available as Python package decaf+py at http://www.bioinformatics.org.au), we have created a data set of reference trees covering a wide range of phylogenetic distances. Amino acid sequences were evolved along the trees and input to the tested methods; from their calculated distances we inferred trees whose topologies we compared to the reference trees. We find our pattern-based method statistically superior to all other tested alignment-free methods. We also demonstrate the general advantage of alignment-free methods over an approach based on automated alignments when sequences violate the assumption of collinearity. Similarly, we compare methods on empirical data from an existing alignment benchmark set that we used to derive reference distances and trees. Our pattern-based approach yields distances that show a linear relationship to reference distances over a substantially longer range than other alignment-free methods. The pattern-based approach outperforms alignment-free methods and its phylogenetic accuracy is statistically indistinguishable from alignment-based distances.
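To give a flavour of what 'alignment-free' means in practice, the sketch below computes a simple k-mer (word-frequency) distance between two protein sequences. This is a generic alignment-free distance for illustration only, not the likelihood-based pattern distance implemented in decaf+py.

```python
import math
from collections import Counter

def kmer_profile(seq, k=3):
    """Relative frequencies of overlapping k-mers in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def kmer_distance(s1, s2, k=3):
    """Euclidean distance between k-mer frequency profiles (no alignment needed)."""
    p1, p2 = kmer_profile(s1, k), kmer_profile(s2, k)
    return math.sqrt(sum((p1.get(m, 0.0) - p2.get(m, 0.0)) ** 2
                         for m in set(p1) | set(p2)))

print(kmer_distance("MKVLAAGIVALGSTLVAG", "MKVLSAGIVGLGSTLVAG"))
```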
Abstract:
Experiments with simulators allow psychologists to better understand the causes of human errors and build models of cognitive processes to be used in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. Then a model-checking technique is used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete and a new behavioural pattern is identified, which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
Abstract:
This paper presents a formal but practical approach for defining and using design patterns. Initially we formalize the concepts commonly used in defining design patterns using Object-Z. We also formalize consistency constraints that must be satisfied when a pattern is deployed in a design model. Then we implement the pattern modeling language and its consistency constraints using an existing modeling framework, EMF, and incorporate the implementation as plug-ins to the Eclipse modeling environment. While the language is defined formally in terms of Object-Z definitions, it is implemented in a practical environment. Using the plug-ins, users can develop precise pattern descriptions without knowing the underlying formalism, and can use the tool to check the validity of the pattern descriptions and pattern usage in design models. In this work, formalism brings precision to the pattern language definition and its implementation brings practicability to our pattern-based modeling approach.
Abstract:
Pattern discovery in temporal event sequences is of great importance in many application domains, such as telecommunication network fault analysis. In reality, not every type of event has an accurate timestamp. Some of them, defined as inaccurate events, may only have an interval as their possible time of occurrence. The existence of inaccurate events may cause uncertainty in event ordering. The traditional support model cannot deal with this uncertainty, which would cause some interesting patterns to be missed. A new concept, precise support, is introduced to evaluate the probability of a pattern being contained in a sequence. Based on this new metric, we define the uncertainty model and present an algorithm to discover interesting patterns in a sequence database that has one type of inaccurate event. In our model, the number of types of inaccurate events can readily be extended to k, although at the cost of increased computational complexity.
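As a toy illustration of the probabilistic-support idea, the sketch below computes the probability that a pattern 'A then B' holds when A is an inaccurate event known only to fall within an interval and B has an exact timestamp. The uniform occurrence-time assumption is this sketch's own simplification, not necessarily the paper's model.

```python
def prob_before(interval, timestamp):
    """Probability that an event uniformly distributed over `interval` (lo, hi)
    occurs before a precisely timed event at `timestamp`."""
    lo, hi = interval
    if timestamp <= lo:
        return 0.0
    if timestamp >= hi:
        return 1.0
    return (timestamp - lo) / (hi - lo)

# Inaccurate event A known only to occur somewhere in [10, 20]; exact event B at t = 14.
# The pattern 'A then B' therefore contributes a support of 0.4 rather than 0 or 1.
print(prob_before((10.0, 20.0), 14.0))
```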