979 results for mixture model


Relevance:

30.00%

Abstract:

Development of interfaces for sample introduction from high pressures is important for real-time online hyphenation of chromatographic and other separation devices with mass spectrometry (MS) or accelerator mass spectrometry (AMS). Momentum separators can reduce unwanted low-density gases and introduce the analyte into the vacuum. In this work, the axial jet separator, a new momentum interface, is characterized by theory and empirical optimization. The mathematical model describes the different axial penetration of the components of a jet-gas mixture and explains the empirical results for injections of CO2 in helium into MS and AMS instruments. We show that the performance of the new interface is sensitive to the nozzle size, in good qualitative agreement with the mathematical model. Smaller nozzle sizes are preferable due to their higher inflow capacity. The CO2 transmission efficiency of the interface into an MS instrument is ~14% (CO2/helium separation factor of 2.7). The interface receives and delivers flows of ~17.5 mL/min and ~0.9 mL/min, respectively. For the interfaced AMS instrument, the ionization and overall efficiencies are 0.7-3% and 0.1-0.4%, respectively, for CO2 amounts of 4-0.6 µg C, only slightly lower than in conventional systems using intermediate trapping. The ionization efficiency depends on the carbon mass flow in the injected pulse and is suppressed at high CO2 flows. Relative to a conventional jet separator, the transmission efficiency of the axial jet separator is lower, but its performance is less sensitive to misalignments.
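As a plausibility check, the quoted flow and transmission figures are mutually consistent, assuming the separation factor is defined as the enrichment of the CO2 transmission over the bulk-flow transmission (notation assumed here, not taken from the paper):

$$
T_{\mathrm{bulk}} = \frac{0.9\ \text{mL/min}}{17.5\ \text{mL/min}} \approx 0.051,
\qquad
S = \frac{T_{\mathrm{CO_2}}}{T_{\mathrm{bulk}}} \approx \frac{0.14}{0.051} \approx 2.7 .
$$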

Relevance:

30.00%

Abstract:

BACKGROUND The noble gas xenon is considered a neuroprotective agent, but availability of the gas is limited. Studies on neuroprotection with the abundant noble gases helium and argon have demonstrated mixed results, and data regarding neuroprotection after cardiac arrest are scant. We tested the hypothesis that administration of 50% helium or 50% argon for 24 h after resuscitation from cardiac arrest improves clinical and histological outcome in our 8 min rat cardiac arrest model. METHODS Forty animals had cardiac arrest induced with intravenous potassium/esmolol and were randomized to post-resuscitation ventilation with either helium/oxygen, argon/oxygen or air/oxygen for 24 h. Eight additional animals without cardiac arrest served as a reference; these animals were not randomized and were not included in the statistical analysis. The primary outcome was histological assessment of neuronal damage in region I of the hippocampus proper (CA1) in the animals surviving until day 5. The secondary outcome was evaluation of neurobehavior by daily testing with a Neurodeficit Score (NDS), the Tape Removal Test (TRT), a simple vertical pole test (VPT) and the Open Field Test (OFT). Because of the non-parametric distribution of the data, the histological assessments were compared with the Kruskal-Wallis test. The treatment effect in repeated-measures assessments was estimated with a linear regression with clustered robust standard errors (SE), where normality is less important. RESULTS Twenty-nine out of 40 rats survived until day 5, with significant initial neurobehavioral deficits but rapid improvement within all groups randomized to cardiac arrest. There were no statistically significant differences between groups in either the histological or the neurobehavioral assessments. CONCLUSIONS The replacement of air with either helium or argon in a 50:50 air/oxygen mixture for 24 h did not improve histological or clinical outcome in rats subjected to 8 min of cardiac arrest.
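A minimal sketch of the two analyses named above (Kruskal-Wallis on the histology, cluster-robust regression on the repeated measures), assuming a hypothetical long-format table with columns group, animal_id, day, nds, and ca1_damage; this is not the authors' code:

```python
import pandas as pd
from scipy.stats import kruskal
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")  # hypothetical file

# 1) Histology: Kruskal-Wallis test across the three gas groups
ca1 = df.groupby("animal_id").first()  # one CA1 score per surviving animal
h, p = kruskal(*[g["ca1_damage"].values for _, g in ca1.groupby("group")])
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")

# 2) Repeated neurobehavioral measures: linear regression with
#    standard errors clustered on the animal
model = smf.ols("nds ~ C(group) * day", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["animal_id"]}
)
print(model.summary())
```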

Relevance:

30.00%

Abstract:

This dissertation examined body mass index (BMI) growth trajectories and the effects of gender, ethnicity, dietary intake, and physical activity (PA) on BMI growth trajectories among 3rd to 12th graders (9-18 years of age). Growth curve model analysis was performed using data from The Child and Adolescent Trial for Cardiovascular Health (CATCH) study. The study population included 2909 students who were followed up from grades 3-12. The main outcome was BMI at grades 3, 4, 5, 8, and 12. The results revealed that BMI growth differed across two distinct developmental periods of childhood and adolescence. The rate of BMI growth was faster in middle childhood (9-11 years old, or 3rd-5th grades) than in adolescence (11-18 years old, or 5th-12th grades). Students with higher BMI at 3rd grade (baseline) had faster rates of BMI growth. Three groups of students with distinct BMI growth trajectories were identified: high, average, and low. Black and Hispanic children were more likely to be in the groups with higher baseline BMI and faster rates of BMI growth over time. The effects of gender and ethnicity on BMI growth differed across the three groups. The effects of ethnicity on BMI growth weakened as the children aged. The effects of gender on BMI growth were attenuated in the groups with a large proportion of black and Hispanic children, i.e., the "high" or "average" BMI trajectory groups. After controlling for gender, ethnicity, and age at baseline, in the "high" BMI trajectory the rate of yearly BMI growth in middle childhood increased by 0.102 for every 500 kcal increase in energy intake (p=0.049). No significant effects of percentage of energy from total fat or saturated fat on BMI growth were found. Baseline BMI increased by 0.041 for every 30-minute increase in moderate-to-vigorous PA (MVPA) in the "low" BMI trajectory, while baseline BMI decreased by 0.345 for every 30-minute increase in vigorous PA (VPA) in the "high" BMI trajectory. Childhood overweight and obesity interventions should start at the earliest possible ages, prior to 3rd grade, and continue through grade school. Interventions should focus on all children, but especially black and Hispanic children, who are most likely to be at high risk. Promoting VPA earlier in childhood is important for preventing overweight and obesity among children and adolescents. Interventions should target total energy intake rather than only the percentage of energy from total fat or saturated fat.
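A minimal sketch of a piecewise growth curve model with a knot at age 11 (grade 5), assuming hypothetical columns id, bmi, age, gender, and ethnicity; the dissertation's trajectory-class (mixture) step is omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("catch_bmi.csv")  # hypothetical file
df["age_c"] = df["age"] - 9                     # slope in middle childhood
df["age_adol"] = np.maximum(df["age"] - 11, 0)  # extra slope after age 11

# Random intercept and childhood slope per student
m = smf.mixedlm("bmi ~ age_c + age_adol + C(gender) + C(ethnicity)",
                df, groups=df["id"], re_formula="~age_c").fit()
print(m.summary())
```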

Relevance:

30.00%

Abstract:

We investigate changes in the delivery and oceanic transport of Amazon sediments related to terrestrial climate variations over the last 250 ka. We present high-resolution geochemical records from four marine sediment cores located between 5 and 12° N along the northern South American margin. The Amazon River is the sole source of terrigenous material for the sites at 5 and 9° N, while the core at 12° N receives a mixture of Amazon and Orinoco detrital particles. Using an endmember unmixing model, we estimated the relative proportions of Amazon Andean material ("%-Andes", at 5 and 9° N) and of Amazon material ("%-Amazon", at 12° N) within the terrigenous fraction. The %-Andes and %-Amazon records exhibit significant precessional variations over the last 250 ka that are more pronounced during interglacials than during glacial periods. High %-Andes values observed during periods of high austral summer insolation reflect the increased delivery of suspended sediments by Andean tributaries and enhanced Amazonian precipitation, in agreement with western Amazonian speleothem records. Increased Amazonian rainfall reflects the intensification of the South American monsoon in response to an enhanced land-ocean thermal gradient and moisture convergence. However, low %-Amazon values obtained at 12° N during the same periods seem to contradict the increased delivery of Amazon sediments. We propose that reorganizations in surface ocean currents modulate the northwestward transport of Amazon material. In agreement with published records, the seasonal North Brazil Current retroflection is intensified (or prolonged in duration) during cold substages of the last 250 ka (which correspond to intervals of high DJF or low JJA insolation) and deflects the Amazon sediment and freshwater plume eastward.
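A minimal sketch of the endmember unmixing idea, using hypothetical endmember compositions and tracers (not the study's actual geochemical data): non-negative mixing fractions that sum to one are estimated by least squares.

```python
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.8, 0.2],    # tracer 1 in [Andean, lowland] endmembers
              [0.3, 0.7],    # tracer 2
              [0.5, 0.4]])   # tracer 3
y = np.array([0.55, 0.48, 0.45])  # measured sample composition

# Append the closure constraint f_Andes + f_lowland = 1 as a weighted row
w = 10.0
A = np.vstack([E, w * np.ones(2)])
b = np.append(y, w)
f, _ = nnls(A, b)                 # non-negative mixing fractions
print(f"%-Andes ~ {100 * f[0] / f.sum():.1f}")
```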

Relevance:

30.00%

Abstract:

Upward propagation of a premixed flame in a vertical tube filled with a very lean mixture is simulated numerically using a single irreversible Arrhenius reaction model with infinitely high activation energy. In the absence of heat losses and preferential diffusion effects, a curved flame with stationary shape and velocity close to those of an open bubble ascending in the same tube is found for values of the fuel mass fraction above a certain minimum that increases with the radius of the tube, while the numerical computations cease to converge to a stationary solution below this minimum mass fraction. The vortical flow of the gas behind the flame and in its transport region is described for tubes of different radii. It is argued that this flow may become unstable when the fuel mass fraction is decreased, and that this instability, together with the flame stretch due to the strong curvature of the flame tip in narrow tubes, may be responsible for the minimum fuel mass fraction. Radiation losses and a Lewis number of the fuel slightly above unity decrease the final combustion temperature at the flame tip and increase the minimum fuel mass fraction, while a Lewis number slightly below unity has the opposite effect.
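For reference, a standard form of the single irreversible Arrhenius model the abstract refers to (notation assumed here, not taken from the paper) writes the fuel consumption rate as

$$
\omega = B\,\rho\,Y_F \exp\!\left(-\frac{E}{R T}\right),
\qquad
\beta = \frac{E\,(T_b - T_u)}{R\,T_b^{2}} \to \infty,
$$

where $Y_F$ is the fuel mass fraction, $B$ a pre-exponential factor, and $\beta$ the Zel'dovich number; in the limit $\beta \to \infty$ the reaction is confined to a thin sheet where the temperature is close to the final combustion temperature $T_b$.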

Relevance:

30.00%

Abstract:

A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H, which describes the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations), supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological nonequilibrium thermodynamics for a general nonisothermal setting, taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting, the resulting model is compared to literature results, and its base states corresponding to homogeneous or vertically stratified flat layers are analyzed.
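For orientation, the isothermal core of model-H couples a convective Cahn-Hilliard equation for the concentration $c$ to the Navier-Stokes equations with a Korteweg stress (a generic form with assumed notation, up to sign and normalization conventions; the paper's full nonisothermal equations carry additional Soret, Dufour, and interfacial-viscosity terms):

$$
\partial_t c + \mathbf{u}\cdot\nabla c = \nabla\cdot\!\left(M\,\nabla \mu\right),
\qquad
\mu = \frac{\partial f}{\partial c} - \kappa\,\nabla^{2} c,
$$
$$
\rho\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
= -\nabla p + \eta\,\nabla^{2}\mathbf{u} - \kappa\,\nabla\cdot\!\left(\nabla c \otimes \nabla c\right),
\qquad \nabla\cdot\mathbf{u} = 0 .
$$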

Relevance:

30.00%

Abstract:

In the past few years IT outsourcing has gained much importance in the market and, for example, the IT services outsourcing market is still growing every year. Now more than ever, organizations are increasingly becoming acquirers of needed capabilities by obtaining products and services from suppliers and developing fewer and fewer of these capabilities in-house. IT supplier selection is a complex and opaque decision problem. Managers facing a decision about IT supplier selection have difficulty framing what needs to be thought about further in their discussions. According to a study from the SEI (Software Engineering Institute) [40], 20 to 25 percent of large information technology (IT) acquisition projects fail within two years and 50 percent fail within five years. Mismanagement, poor requirements definition, lack of comprehensive evaluations that could be used to identify the best candidates for outsourcing, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are factors that contribute to project failure. Most of these failures could be avoided if the acquirer learned to understand the decision problem, perform better decision analysis, and exercise good judgment. The main objective of this work is the development of a decision model for IT supplier selection that aims to decrease the number of failures seen in client-supplier relationships, most of which are caused by a poor selection of the supplier by the client. Beyond the problems described above, the motivation for this work is the absence of any decision model based on a multi-model approach (a mixture of acquisition models and decision methods) for the problem of IT supplier selection. In the case study, nine Spanish companies were analyzed according to the IT supplier selection decision model developed in this work. Two software products were used in the case study: Expert Choice and D-Sight.
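Expert Choice is built around the Analytic Hierarchy Process (AHP); a minimal sketch of the AHP priority-vector computation follows, with a hypothetical 3x3 pairwise comparison matrix (not the criteria or judgments actually elicited in the case study):

```python
import numpy as np

A = np.array([[1,   3,   5  ],   # cost vs quality vs delivery (hypothetical)
              [1/3, 1,   2  ],
              [1/5, 1/2, 1  ]], dtype=float)

# Priority vector = normalized principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # criteria weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # Saaty random index RI(3) = 0.58
print(f"weights={np.round(w, 3)}, CR={cr:.3f}")
```

Judgments are usually considered acceptable when the consistency ratio CR is below 0.1; the same priority-vector step is repeated for the suppliers under each criterion.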

Relevance:

30.00%

Abstract:

We report here that a cancer gene therapy protocol using a combination of IL-12, pro-IL-18, and IL-1β converting enzyme (ICE) cDNA expression vectors simultaneously delivered via gene gun can significantly augment antitumor effects, evidently by generating increased levels of bioactive IL-18 and consequently IFN-γ. First, we compared the levels of IFN-γ secreted by mouse splenocytes stimulated with tumor cells transfected with various test genes, including IL-12 alone; pro-IL-18 alone; pro-IL-18 and ICE; IL-12 and pro-IL-18; and IL-12, pro-IL-18, and ICE. Among these treatments, the combination of IL-12, pro-IL-18, and ICE cDNA resulted in the highest level of IFN-γ production from splenocytes in vitro, and similar results were obtained when these same treatments were delivered to the skin of a mouse by gene gun and IFN-γ levels were measured at the skin transfection site in vivo. Furthermore, the combinatorial three-gene protocol was the most effective among all tested groups at suppressing the growth of TS/A (murine mammary adenocarcinoma) tumors previously implanted intradermally at the skin site receiving DNA transfer by gene gun on days 6, 8, 10, and 12 after tumor implantation. Fifty percent of mice treated with the combined three-gene protocol underwent complete tumor regression. In vivo depletion experiments showed that this antitumor effect was CD8+ T cell-mediated and partially IFN-γ-dependent. These results suggest that a combinatorial gene therapy protocol using a mixture of IL-12, pro-IL-18, and ICE cDNAs can confer potent antitumor activities against established TS/A tumors via cytotoxic CD8+ T cells and IFN-γ-dependent pathways.

Relevance:

30.00%

Abstract:

The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, because morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate the inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimation. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing the long-stay proportion and the LOS for the long-stay patient subgroup. A neonatal LOS data set is used for illustration.
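A minimal sketch of the finite-mixture idea applied to LOS: a two-component mixture fitted to log length-of-stay separates a long-stay subgroup. The paper's hospital random effects and residual maximum quasi-likelihood EM are omitted here, and the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

los = pd.read_csv("neonatal_los.csv")["los_days"].values  # hypothetical file
x = np.log(los).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
long_stay = np.argmax(gm.means_.ravel())
p_long = gm.predict_proba(x)[:, long_stay]   # posterior long-stay probability
print(f"long-stay proportion ~ {gm.weights_[long_stay]:.2f}")
```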

Relevance:

30.00%

Abstract:

Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, under more specific assumptions, are computationally intensive. Results: By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
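A minimal EM sketch of such a two-component mixture: a fixed N(0,1) null component with weight pi0 plus a fitted normal non-null component, yielding a posterior null probability per gene. Details differ from the paper, and the z-scores below are placeholders.

```python
import numpy as np
from scipy.stats import norm

def em_null_mixture(z, n_iter=200):
    pi0, mu1, sd1 = 0.9, np.mean(z), np.std(z) * 2
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1 - pi0) * norm.pdf(z, mu1, sd1)
        tau0 = f0 / (f0 + f1)              # E-step: P(null | z)
        pi0 = tau0.mean()                  # M-step updates
        w = 1 - tau0
        mu1 = np.sum(w * z) / np.sum(w)
        sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
    return pi0, mu1, sd1, tau0

z = np.random.default_rng(0).normal(size=1000)   # placeholder z-scores
pi0, mu1, sd1, tau0 = em_null_mixture(z)
print(f"pi0={pi0:.2f}; genes with P(null|z) < 0.1: {(tau0 < 0.1).sum()}")
```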

Relevance:

30.00%

Abstract:

Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed species plantations of Australian tropical trees. The 'spatially explicit individual-based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to +/- 50% of stem volume at 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations.
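A crude illustration of the individual-based approach, in which a tree's growth is penalized by crown overlap with taller neighbours and modulated by its shade tolerance; this is an assumed toy formulation, not the SeXI-FS model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x, y = rng.uniform(0, 40, n), rng.uniform(0, 40, n)   # stem positions (m)
h = rng.uniform(2, 10, n)                              # tree heights (m)
r = 0.3 * h                                            # crown radii (m)
tol = rng.uniform(0.2, 0.8, n)                         # shade tolerance

def overlap_index(i):
    # Summed crown overlap with taller neighbours of tree i
    d = np.hypot(x - x[i], y - y[i])
    taller = (h > h[i]) & (d < r + r[i]) & (np.arange(n) != i)
    return np.sum(np.clip(r[taller] + r[i] - d[taller], 0, None))

growth = np.array([1.0 / (1.0 + (1 - tol[i]) * overlap_index(i))
                   for i in range(n)])   # relative growth modifier
print(growth.round(2))
```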

Relevance:

30.00%

Abstract:

Molecular dynamics simulations have been used to study the phase behavior of a dipalmitoylphosphatidylcholine (DPPC)/palmitic acid (PA)/water 1:2:20 mixture in atomic detail. Starting from a random solution of DPPC and PA in water, the system adopts either a gel phase at temperatures below ~330 K or an inverted hexagonal phase above ~330 K, in good agreement with experiment. It has also been possible to observe the direct transformation from a gel to an inverted hexagonal phase at elevated temperature (~390 K). During this transformation, a metastable fluid lamellar intermediate is observed. Interlamellar connections or stalks form spontaneously on a nanosecond time scale and subsequently elongate, leading to the formation of an inverted hexagonal phase. This work opens the possibility of studying in detail how the formation of nonlamellar phases is affected by lipid composition and (fusion) peptides and, thus, is an important step toward understanding related biological processes, such as membrane fusion.

Relevance:

30.00%

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
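A minimal sketch of a Mixture Density Network with Gaussian components, trained by negative log-likelihood on a toy multi-valued inverse problem; sizes and data are illustrative, not the paper's experiments:

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, d_in=1, d_hidden=32, k=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.pi = nn.Linear(d_hidden, k)         # mixing coefficient logits
        self.mu = nn.Linear(d_hidden, k)         # component means
        self.log_sigma = nn.Linear(d_hidden, k)  # component log-std devs

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, t):
    # Negative log-likelihood of the Gaussian mixture, via log-sum-exp
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    log_norm = -0.5 * ((t - mu) / log_sigma.exp()) ** 2 - log_sigma \
               - 0.5 * torch.log(torch.tensor(2.0 * torch.pi))
    return -torch.logsumexp(log_pi + log_norm, dim=-1).mean()

# Toy multi-valued inverse problem: predict t from x where t(x) is not unique
t = torch.rand(512, 1)
x = t + 0.3 * torch.sin(2 * torch.pi * t) + 0.05 * torch.randn(512, 1)

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    loss = mdn_nll(*model(x), t)
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the network's mixture components can track the separate branches of t(x), which a sum-of-squares network would average into an incorrect single value.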

Relevance:

30.00%

Abstract:

Visualization has proven to be a powerful and widely applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space, it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines and to data in 36 dimensions derived from satellite images.
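A rough sketch of the hierarchical idea, substituting plain PCA for the paper's latent variable models and a Gaussian mixture for its EM-fitted hierarchy: a top-level 2-D view plus per-cluster 2-D views, on placeholder data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt

X = np.random.default_rng(0).normal(size=(500, 12))   # placeholder data

top = PCA(n_components=2).fit_transform(X)            # top-level view
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
axes[0].scatter(top[:, 0], top[:, 1], c=labels, s=8)
axes[0].set_title("top level")
for k in range(3):                                    # second-level views
    sub = PCA(n_components=2).fit_transform(X[labels == k])
    axes[k + 1].scatter(sub[:, 0], sub[:, 1], s=8)
    axes[k + 1].set_title(f"cluster {k}")
plt.show()
```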