973 results for mathematics model
Abstract:
Chest clapping, vibration, and shaking were studied in 10 physiotherapists who applied these techniques on an anesthetized animal model. Hemodynamic variables (such as heart rate, blood pressure, pulmonary artery pressure, and right atrial pressure) were measured during the application of these techniques to verify claims of adverse events. In addition, expired tidal volume and peak expiratory flow rate were measured to ascertain effects of these techniques. Physiotherapists in this study applied chest clapping at a rate of 6.2 +/- 0.9 Hz, vibration at 10.5 +/- 2.3 Hz, and shaking at 6.2 +/- 2.3 Hz. With the use of these rates, esophageal pressure swings of 8.8 +/- 5.0, 0.7 +/- 0.3, and 1.4 +/- 0.7 mmHg resulted from clapping, vibration, and shaking, respectively. Variability in the rates and forces generated by these techniques was significantly related to physiotherapists' characteristics; clinical experience alone accounted for 80% of the variance in shaking force (P = 0.003). Application of these techniques by physiotherapists was found to have no significant effects on hemodynamic and most ventilatory variables in this study. From this study, we conclude that chest clapping, vibration, and shaking 1) can be consistently performed by physiotherapists; 2) are significantly related to physiotherapists' characteristics, particularly clinical experience; and 3) caused no significant hemodynamic effects.
Abstract:
Measurement of exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of the positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard model by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
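The paradoxes discussed above arise in the standard one-tissue compartment model, which uses the organ-inlet concentration for the whole vascular space. A minimal sketch of that standard model follows; the rate constants and the input function are illustrative, not values from the paper:

```python
import numpy as np

def one_tissue_model(t, c_a, K1, k2):
    """Euler integration of the standard one-tissue compartment ODE:
    dC_t/dt = K1*C_a(t) - k2*C_t(t)."""
    c_t = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c_t[i] = c_t[i - 1] + dt * (K1 * c_a[i - 1] - k2 * c_t[i - 1])
    return c_t

t = np.linspace(0.0, 5.0, 501)        # minutes
c_a = t * np.exp(-2.0 * t)            # hypothetical arterial input function
c_t = one_tissue_model(t, c_a, K1=0.8, k2=0.3)   # illustrative constants
```

Note that C_a here is the inlet concentration applied to the whole compartment; the microvascular models replace it with a spatial average over the capillary volume, which is what the scanner registers.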
Abstract:
Today, the standard approach for the kinetic analysis of dynamic PET studies is compartment models, in which the tracer and its metabolites are confined to a few well-mixed compartments. We examine whether the standard model is suitable for modern PET data or whether theories including more physiologic realism can advance the interpretation of dynamic PET data. A more detailed microvascular theory is developed for intravascular tracers in single-capillary and multiple-capillary systems. The microvascular models, which account for concentration gradients in capillaries, are validated and compared with the standard model in a pig liver study. Methods: Eight pigs underwent a 5-min dynamic PET study after O-15-carbon monoxide inhalation. Throughout each experiment, hepatic arterial blood and portal venous blood were sampled, and flow was measured with transit-time flow meters. The hepatic dual-inlet concentration was calculated as the flow-weighted inlet concentration. Dynamic PET data were analyzed with a traditional single-compartment model and 2 microvascular models. Results: Microvascular models provided a better fit of the tissue activity of an intravascular tracer than did the compartment model. In particular, the early dynamic phase after a tracer bolus injection was much improved. The regional hepatic blood flow estimates provided by the microvascular models (1.3 +/- 0.3 mL min(-1) mL(-1) for the single-capillary model and 1.14 +/- 0.14 mL min(-1) mL(-1) for the multiple-capillary model) (mean +/- SEM mL of blood min(-1) mL of liver tissue(-1)) were in agreement with the total blood flow measured by flow meters and normalized to liver weight (1.03 +/- 0.12 mL min(-1) mL(-1)). Conclusion: Compared with the standard compartment model, the 2 microvascular models provide a superior description of tissue activity after an intravascular tracer bolus injection.
The microvascular models include only parameters with a clear-cut physiologic interpretation and are applicable to capillary beds in any organ. In this study, the microvascular models were validated for the liver and provided quantitative regional flow estimates in agreement with flow measurements.
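The flow-weighted dual-inlet concentration mentioned in the Methods can be sketched directly; the numeric values below are illustrative, not measurements from the study:

```python
def dual_inlet_concentration(c_art, c_pv, f_art, f_pv):
    """Flow-weighted hepatic dual-inlet concentration:
    (F_a*C_a + F_pv*C_pv) / (F_a + F_pv)."""
    return (f_art * c_art + f_pv * c_pv) / (f_art + f_pv)

# Illustrative numbers: the hepatic artery typically carries the smaller
# fraction of total inflow, the portal vein the larger.
c_dual = dual_inlet_concentration(c_art=10.0, c_pv=4.0, f_art=0.25, f_pv=0.75)
```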
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
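The equilibrium distribution that supplies the expected transition rates can be computed directly from the generator matrix. The sketch below uses a small M/M/1 queue with capacity 2 as an illustrative chain, not a model from the paper:

```python
import numpy as np

def equilibrium(Q):
    """Solve pi @ Q = 0 subject to sum(pi) = 1 via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack normalization onto balance eqs
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu = 1.0, 2.0   # illustrative arrival and service rates
Q = np.array([[-lam,         lam,  0.0],
              [  mu, -(lam + mu),  lam],
              [ 0.0,          mu,  -mu]])
pi = equilibrium(Q)
# Equilibrium expected rate of each transition of the original chain: pi_i * q_ij
expected_rates = pi[:, None] * Q
```

These expected rates are what would replace the original (state-dependent) transition structure in the approximating chain.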
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
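The mixture formulation (a logistic model for the failure type, then a conditional distribution given the type) can be illustrated by simulating data from it. The exponential conditional distributions and all parameter values below are illustrative stand-ins; the paper's method leaves the component-baseline hazards unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixture_competing_risks(x, beta0, beta1, lam1, lam2):
    """Draw the failure type from a logistic model in covariate x, then draw
    the failure time from the type-specific conditional distribution
    (exponential here purely for simplicity)."""
    p1 = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))   # P(type 1 | x)
    types = np.where(rng.random(x.size) < p1, 1, 2)
    rates = np.where(types == 1, lam1, lam2)
    times = rng.exponential(1.0 / rates)
    return times, types

x = rng.normal(size=1000)                   # hypothetical covariate
times, types = simulate_mixture_competing_risks(x, beta0=0.5, beta1=1.0,
                                                lam1=0.2, lam2=0.05)
```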
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-systems simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Not much consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modeling process (L-DBM) that bridges L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation. A method is supplied to allow a correspondence between biologists' terms and compiler-generated terms in a biologist's computing environment. Given any specific set of L-system productions and declarations, L-DBM can generate the corresponding schema, covering both simple terminology correspondences and complex recursive structure data attributes and relationships.
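The parallel-rewriting character of L-system productions, which the L-DBM schema must capture, can be sketched in a few lines using Lindenmayer's original algae grammar:

```python
def lsystem(axiom, rules, n):
    """Apply L-system productions in parallel for n derivation steps.
    Symbols without a production are copied unchanged."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A
rules = {"A": "AB", "B": "A"}
print(lsystem("A", rules, 4))   # ABAABABA
```

Each derived string is the kind of data that would populate the generated database tables, with the recursive structure pre-computed into derived attributes.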
Abstract:
Predictions of flow patterns in a 600-mm scale model SAG mill made using four classes of discrete element method (DEM) models are compared to experimental photographs. The accuracy of the various models is assessed using quantitative data on shoulder, toe and vortex center positions taken from ensembles of both experimental and simulation results. These detailed comparisons reveal the strengths and weaknesses of the various models for simulating mills and allow the effect of different modelling assumptions to be quantitatively evaluated. In particular, very close agreement is demonstrated between the full 3D model (including the end wall effects) and the experiments. It is also demonstrated that the traditional two-dimensional circular particle DEM model under-predicts the shoulder, toe and vortex center positions by around 10 degrees, and also under-predicts the power draw. The effect of particle shape and the dimensionality of the model are also assessed, with particle shape predominantly affecting the shoulder position while the dimensionality of the model affects mainly the toe position. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
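For context, circular-particle DEM codes of the kind compared here typically compute normal contact forces with a linear spring-dashpot law. The form and constants below are a generic sketch, not the contact model used in the paper:

```python
def normal_contact_force(overlap, rel_vel, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal force: a repulsive spring acting on the
    particle overlap plus damping on the normal relative velocity.
    Stiffness k and damping c are illustrative, not calibrated values."""
    if overlap <= 0.0:
        return 0.0          # particles not in contact: no force
    return k * overlap + c * rel_vel
```

Summing such pairwise forces over all contacts each timestep is what drives the charge motion from which shoulder, toe and vortex positions are extracted.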
Abstract:
A high-definition finite-difference time-domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range, including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
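For contrast with the HD-FDTD approach, a conventional 1-D FDTD (Yee) update loop looks like the following; the grid size, number of timesteps, and source are illustrative, and the update coefficients are normalized to the Courant limit:

```python
import numpy as np

n, steps = 200, 300
ez = np.zeros(n)        # electric field nodes
hy = np.zeros(n - 1)    # magnetic field nodes, staggered half a cell
for t in range(steps):
    hy += ez[1:] - ez[:-1]                          # normalized coefficients
    ez[1:-1] += hy[1:] - hy[:-1]
    ez[100] += np.exp(-((t - 30.0) / 10.0) ** 2)    # soft Gaussian source
```

The conventional scheme must march through enough timesteps to resolve the source period, which is what makes very low frequencies expensive and motivates methods like the one described above.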
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed to perform this task, and they have mainly been applied heuristically to these cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data as studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes down to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
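EMMIX-GENE fits mixtures of normal components (and mixtures of factor analyzers) by maximum likelihood. The underlying model-based clustering idea can be sketched with a bare-bones EM fit of a two-component univariate normal mixture; the synthetic data and starting values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])           # illustrative starting values
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior component probabilities (the common normalizing
    # constant of the normal density cancels in the ratio)
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of weights, means, spreads
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Unlike heuristic hierarchical clustering, the fitted mixture gives posterior membership probabilities and a likelihood on which the number of components can be assessed.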
Abstract:
A model of iron carbonate (FeCO3) film growth is proposed, which is an extension of the recent mechanistic model of carbon dioxide (CO2) corrosion by Nesic et al. In the present model, the film growth occurs by precipitation of iron carbonate once saturation is exceeded. The kinetics of precipitation is dependent on temperature and local species concentrations that are calculated by solving the coupled species transport equations. Precipitation tends to build up a layer of FeCO3 on the surface of the steel and reduce the corrosion rate. On the other hand, the corrosion process induces voids under the precipitated film, thus increasing the porosity and leading to a higher corrosion rate. Depending on the environmental parameters such as temperature, pH, CO2 partial pressure, velocity, etc., the balance of the two processes can lead to a variety of outcomes. Very protective films and low corrosion rates are predicted at high pH, temperature, CO2 partial pressure, and Fe2+ ion concentration due to formation of dense protective films, as expected. The model has been successfully calibrated against limited experimental data. Parametric testing of the model has been done to gain insight into the effect of various environmental parameters on iron carbonate film formation. The trends shown in the predictions agreed well with the general understanding of the CO2 corrosion process in the presence of iron carbonate films. The present model confirms that the concept of scaling tendency is a good tool for predicting the likelihood of protective iron carbonate film formation.
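The supersaturation-driven precipitation described above can be sketched generically. The functional form and every constant below are placeholders, not the calibrated kinetics from the paper:

```python
def supersaturation(c_fe, c_co3, ksp):
    """S = [Fe2+][CO3^2-]/Ksp; precipitation requires S > 1."""
    return c_fe * c_co3 / ksp

def precipitation_rate(s, k_r, area_vol):
    """Generic film-growth kinetics: rate proportional to the surface-area-
    to-volume ratio and the driving force (S - 1). k_r stands in for a
    temperature-dependent kinetic constant; all numbers are placeholders."""
    return k_r * area_vol * max(s - 1.0, 0.0)

s = supersaturation(c_fe=2e-4, c_co3=1e-4, ksp=1e-10)   # placeholder values
rate = precipitation_rate(s, k_r=1e-3, area_vol=100.0)
```

The scaling-tendency concept endorsed in the conclusion compares a rate of this kind against the corrosion rate to judge whether a protective film is likely to form.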
Abstract:
An energy-based swing hammer mill model has been developed for coke oven feed preparation. It comprises a mechanistic power model to determine the dynamic internal recirculation and a perfect mixing mill model with a dual-classification function to mimic the operations of crusher and screen. The model parameters were calibrated using a pilot-scale swing hammer mill at various operating conditions. The effects of the underscreen configurations and the feed sizes on hammer mill operations were demonstrated through the fitted model parameters. Relationships between the model parameters and the machine configurations were established. The model was validated using the independent experimental data of single lithotype coal tests with the same BJD pilot-scale hammer mill and full operation audit data of an industrial hammer mill. The outcome of the energy-based swing hammer mill model is the capability to simulate the impact of changing blends of coal or mill configurations and operating conditions on product size distribution. Alternatively, the model can be used to select the machine settings required to achieve a desired product. (C) 2003 Elsevier Science B.V. All rights reserved.
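The dual-classification idea can be illustrated with a generic efficiency curve giving the probability that a particle is returned to the breakage zone rather than discharged through the screen; the functional form and sharpness parameter below are illustrative, not the fitted parameters of the BJD mill:

```python
def classification(d, d50, m):
    """Illustrative sharpness-m efficiency curve: the probability that a
    particle of size d is returned to the breakage zone rather than
    discharged. d50 is the cut size at which the probability is 0.5."""
    return 1.0 / (1.0 + (d50 / d) ** m)
```

In a perfect mixing mill model, curves of this kind couple the breakage and discharge rates of each size fraction to the underscreen configuration.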
Abstract:
Viewed on a hydrodynamic scale, flames in experiments are often thin so that they may be described as gasdynamic discontinuities separating the dense cold fresh mixture from the light hot burned products. The original model of a flame as a gasdynamic discontinuity was due to Darrieus and to Landau. In addition to the fluid dynamical equations, the model consists of a flame speed relation describing the evolution of the discontinuity surface, and jump conditions across the surface which relate the fluid variables on the two sides of the surface. The Darrieus-Landau model predicts, in contrast to observations, that a uniformly propagating planar flame is absolutely unstable and that the strength of the instability grows with increasing perturbation wavenumber so that there is no high-wavenumber cutoff of the instability. The model was modified by Markstein to exhibit a high-wavenumber cutoff if a phenomenological constant in the model has an appropriate sign. Both models are postulated, rather than derived from first principles, and both ignore the flame structure, which depends on chemical kinetics and transport processes within the flame. At present, there are two models which have been derived, rather than postulated, and which are valid in two non-overlapping regions of parameter space. Sivashinsky derived a generalization of the Darrieus-Landau model which is valid for Lewis numbers (ratio of thermal diffusivity to mass diffusivity of the deficient reaction component) bounded away from unity. Matalon & Matkowsky derived a model valid for Lewis numbers close to unity. Each model has its own advantages and disadvantages. Under appropriate conditions the Matalon-Matkowsky model exhibits a high-wavenumber cutoff of the Darrieus-Landau instability. However, since the Lewis numbers considered lie too close to unity, the Matalon-Matkowsky model does not capture the pulsating instability. 
The Sivashinsky model does capture the pulsating instability, but does not exhibit its high-wavenumber cutoff. In this paper, we derive a model consisting of a new flame speed relation and new jump conditions, which is valid for arbitrary Lewis numbers. It captures the pulsating instability and exhibits the high-wavenumber cutoff of all instabilities. The flame speed relation includes the effect of short wavelengths, not previously considered, which leads to stabilizing transverse surface diffusion terms.
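For orientation, the Darrieus-Landau growth rate of a perturbation with wavenumber k is commonly quoted in the general literature (stated here from that literature, not from this paper) as

```latex
\omega_{\mathrm{DL}} \;=\; k\, S_L\, \frac{\sigma}{\sigma+1}
\left( \sqrt{\frac{\sigma^{2}+\sigma-1}{\sigma}} \;-\; 1 \right),
\qquad
\sigma \;=\; \frac{\rho_{\mathrm{unburned}}}{\rho_{\mathrm{burned}}} \;>\; 1,
```

where S_L is the laminar flame speed. Since sigma > 1 implies sigma^2 + sigma - 1 > sigma, the bracket is positive, so the growth rate is positive and increases linearly with k: there is no high-wavenumber cutoff, exactly as described above. Markstein's modification appends a term of order k^2 that produces a cutoff when its phenomenological constant has the appropriate sign.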
Abstract:
An equivalent unit cell waveguide approach (WGA) is described to obtain reflection coefficient phase curves for designing a microstrip patch reflectarray supported by a ground plane with periodic apertures or slots. Based on the presented theory, a computer algorithm for determining the reflection coefficient of a plane wave normally incident on a multi-layer structure of patches and apertures is developed. The validity of the developed algorithm is verified by comparing the obtained results with those published in the literature and the ones generated by Agilent High Frequency Structure Simulator (HFSS). A good agreement in all the presented examples is obtained, proving that the developed theory and computer algorithm can be an effective tool for designing multi-layer microstrip reflectarrays with a periodically perforated ground plane. (C) 2003 Wiley Periodicals, Inc.
Abstract:
For products sold with warranty, the warranty servicing cost can be reduced by improving product reliability through a development process. However, this increases the unit manufacturing cost. Optimal development must achieve a trade-off between these two costs. The outcome of the development process is uncertain and needs to be taken into account in the determination of the optimal development effort. The paper develops a model where this uncertainty is taken into account. (C) 2003 Elsevier Ltd. All rights reserved.
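The trade-off the model formalizes (development effort raising unit manufacturing cost while lowering expected warranty servicing cost) can be sketched with hypothetical cost functions; none of the functional forms or constants below come from the paper, which also treats the development outcome as uncertain:

```python
import math

def total_unit_cost(e, b=2.0, c0=1.0, w0=8.0, theta=0.5):
    """Hypothetical trade-off: unit manufacturing cost rises linearly with
    development effort e, while the expected warranty servicing cost decays
    exponentially. All functional forms and constants are made up."""
    manufacturing = c0 + b * e
    warranty = w0 * math.exp(-theta * e)
    return manufacturing + warranty

# Crude grid search for the cost-minimizing effort level
best_e = min((0.1 * i for i in range(200)), key=total_unit_cost)
```

Under these placeholder forms the optimum balances the marginal manufacturing cost against the marginal warranty saving; the paper's contribution is to carry out this balancing while accounting for the uncertain outcome of the development process.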