950 results for Decomposition of Ranked Models
Abstract:
Excessive free-radical production due to various bacterial components released during bacterial infection has been linked to cell death and tissue injury. Peroxynitrite is a highly reactive oxidant produced by the combination of nitric oxide (NO) and superoxide anion, which has been implicated in cell death and tissue injury in various forms of critical illness. Pharmacological decomposition of peroxynitrite may represent a potential therapeutic approach in diseases associated with the overproduction of NO and superoxide. In the present study, we tested the effect of a potent peroxynitrite decomposition catalyst in murine models of endotoxemia and sepsis. Mice were injected i.p. with LPS 40 mg/kg with or without FP15 [Fe(III) tetrakis-2-(N-triethylene glycol monomethyl ether) pyridyl porphyrin] (0.1, 0.3, 1, 3, or 10 mg/kg per hour). Mice were killed 12 h later, followed by the harvesting of samples from the lung, liver, and gut for malondialdehyde and myeloperoxidase measurements. In other subsets of animals, blood samples were obtained by cardiac puncture at 1.5, 4, and 8 h after LPS administration for cytokine (TNF-alpha, IL-1 beta, and IL-10), nitrite/nitrate, alanine aminotransferase, and blood urea nitrogen measurements. Endotoxemic animals showed an increase in survival from 25% to 80% at the FP15 doses of 0.3 and 1 mg/kg per hour. The same dose of FP15 had no effect on plasma levels of nitrite/nitrate. There was a reduction in liver and lung malondialdehyde in the endotoxemic animals pretreated with FP15, as well as in hepatic myeloperoxidase and biochemical markers of liver and kidney damage (alanine aminotransferase and blood urea nitrogen). In a bacterial model of sepsis induced by cecal ligation and puncture, FP15 treatment (0.3 mg/kg per day) significantly protected against mortality. The current data support the view that peroxynitrite is a critical factor mediating liver, gut, and lung injury in endotoxemia and septic shock: its pharmacological neutralization may be of therapeutic benefit.
Abstract:
The aim of this study was to calibrate the CENTURY, APSIM and NDICEA simulation models for estimating decomposition and N mineralization rates of plant organic materials (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum, Stylosanthes guyanensis) over 360 days in the Atlantic rainforest biome of Brazil. The models' default settings overestimated the decomposition and N mineralization of plant residues, underlining the fact that the models must be calibrated for use under tropical conditions. For example, the APSIM model simulated the decomposition of the Stizolobium aterrimum and Calopogonium mucunoides residues with error rates of 37.62% and 48.23%, respectively, by comparison with the observed data, and was the least accurate model in the absence of calibration. At the default settings, the NDICEA model produced error rates of 10.46% and 14.46% and the CENTURY model 21.42% and 31.84%, respectively, for Stizolobium aterrimum and Calopogonium mucunoides residue decomposition. After calibration, the models showed a high level of accuracy in estimating decomposition and N mineralization, with error rates of less than 20%. The calibrated NDICEA model showed the highest level of accuracy, followed by APSIM and CENTURY. All models performed poorly in the first few months of decomposition and N mineralization, indicating the need for an additional parameter for initial microorganism growth on the residues that would take the effect of leaching due to rainfall into account.
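The error rates quoted above compare simulated with observed residue decomposition, but the abstract does not state the formula. A minimal sketch of one plausible metric, assuming the error rate is the root-mean-square deviation between simulated and observed remaining residue mass expressed as a percentage of the observed mean (the data and the metric are illustrative assumptions; the paper may define its error differently):

```python
import numpy as np

def percent_rmse(observed, simulated):
    """RMSE between simulated and observed values, as a % of the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# Hypothetical remaining-mass data (% of initial) over a 360-day litterbag trial
days      = np.array([0, 30, 90, 180, 270, 360])
observed  = np.array([100, 78, 55, 38, 27, 20])   # field measurements (invented)
simulated = np.array([100, 70, 45, 30, 22, 15])   # uncalibrated model output (invented)

print(f"error rate: {percent_rmse(observed, simulated):.2f} %")
```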
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
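To make the two kinds of performance measure contrasted above concrete, here is a minimal sketch, assuming the areal measure is the common intersection-over-union fit between observed and modelled wet masks and the height measure is the r.m.s. difference between observed and modelled waterline elevations at corresponding points (both formulas are assumptions; the paper's exact definitions may differ, and the rasters and elevations below are invented):

```python
import numpy as np

def areal_fit(observed_wet, modelled_wet):
    """Intersection-over-union of two boolean flood-extent rasters (a common areal measure)."""
    observed_wet = np.asarray(observed_wet, dtype=bool)
    modelled_wet = np.asarray(modelled_wet, dtype=bool)
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return inter / union

def waterline_rmse(observed_z, modelled_z):
    """r.m.s. height difference between corresponding observed and modelled waterline points."""
    observed_z = np.asarray(observed_z, dtype=float)
    modelled_z = np.asarray(modelled_z, dtype=float)
    return np.sqrt(np.mean((modelled_z - observed_z) ** 2))

# Invented example: a 4x6 raster of wet/dry pixels and five waterline elevations (m)
obs_mask = np.array([[0,1,1,1,0,0],
                     [0,1,1,1,1,0],
                     [0,0,1,1,1,0],
                     [0,0,1,1,0,0]], dtype=bool)
mod_mask = np.array([[0,1,1,0,0,0],
                     [0,1,1,1,1,1],
                     [0,0,1,1,1,0],
                     [0,0,1,1,1,0]], dtype=bool)
obs_z = np.array([10.12, 10.15, 10.18, 10.22, 10.25])
mod_z = np.array([10.05, 10.17, 10.24, 10.20, 10.31])

print(f"areal fit = {areal_fit(obs_mask, mod_mask):.2f}")
print(f"waterline r.m.s. difference = {waterline_rmse(obs_z, mod_z):.3f} m")
```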
Abstract:
Enhanced release of CO2 to the atmosphere from soil organic carbon as a result of increased temperatures may lead to a positive feedback between climate change and the carbon cycle, resulting in much higher CO2 levels and accelerated global warming. However, the magnitude of this effect is uncertain and critically dependent on how the decomposition of soil organic C (heterotrophic respiration) responds to changes in climate. Previous studies with the Hadley Centre's coupled climate–carbon cycle general circulation model (GCM) (HadCM3LC) used a simple, single-pool soil carbon model to simulate the response. Here we present results from numerical simulations that use the more sophisticated 'RothC' multipool soil carbon model, driven with the same climate data. The results show strong similarities in the behaviour of the two models, although RothC tends to simulate slightly smaller changes in global soil carbon stocks for the same forcing. RothC simulates global soil carbon stocks decreasing by 54 GtC by 2100 in a climate change simulation, compared with an 80 GtC decrease in HadCM3LC. The multipool carbon dynamics of RothC give it a slower transient response to both increased organic carbon inputs and changes in climate. We conclude that the projection of a positive feedback between climate and the carbon cycle is robust, but the magnitude of the feedback is dependent on the structure of the soil carbon model.
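To make the single-pool versus multipool contrast concrete, here is a minimal sketch of first-order soil carbon decomposition with a Q10 temperature response, run once with one pool and once with several pools of differing turnover rates. The pool split, rate constants, inputs, Q10 value and warming trend are illustrative assumptions, not the RothC or HadCM3LC parameterisations:

```python
import numpy as np

def simulate(ks, inputs, alloc, temps, q10=2.0, t_ref=10.0, dt=1.0):
    """First-order decay of soil C pools with a Q10 temperature modifier.

    Each pool starts at its steady state for the reference temperature, so the
    simulated change in total soil C is driven by the imposed warming.
    ks: decay constants (1/yr), inputs: total litter input per year,
    alloc: fraction of input entering each pool, temps: temperature per year (degC).
    """
    ks = np.asarray(ks, dtype=float)
    alloc = np.asarray(alloc, dtype=float)
    c = inputs * alloc / ks                          # steady-state pool sizes at t_ref
    totals = []
    for temp in temps:
        f_t = q10 ** ((temp - t_ref) / 10.0)         # temperature rate modifier
        c += dt * (inputs * alloc - ks * f_t * c)    # explicit Euler step, 1-yr time step
        totals.append(c.sum())
    return np.array(totals)

years = 100
temps = 10.0 + 0.03 * np.arange(years)               # assumed ~3 degC warming over the century

# Single pool: all carbon turns over at one intermediate rate
single = simulate(ks=[0.02], inputs=30.0, alloc=[1.0], temps=temps)

# Multiple pools: fast, slow and passive fractions (illustrative split, not RothC's)
multi = simulate(ks=[0.3, 0.02, 0.002], inputs=30.0, alloc=[0.6, 0.3, 0.1], temps=temps)

print(f"single-pool soil C change: {single[-1] - single[0]:+.1f} (arbitrary C units)")
print(f"multipool soil C change:   {multi[-1] - multi[0]:+.1f}")
```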
Abstract:
Taphonomic studies regularly employ animal analogues for human decomposition due to ethical restrictions relating to the use of human tissue. However, the validity of using animal analogues in soil decomposition studies is still questioned. This study compared the decomposition of skeletal muscle tissue (SMT) from human (Homo sapiens), pork (Sus scrofa), beef (Bos taurus), and lamb (Ovis aries) interred in soil microcosms. Fixed-interval samples were collected from the SMT for determination of microbial activity and tissue mass loss; samples were also taken from the underlying soil for pH, electrical conductivity, and nutrient (potassium, phosphate, ammonium, and nitrate) analysis. The overall patterns of nutrient fluxes and chemical changes in nonhuman SMT and the underlying soil followed those of human SMT. Ovine tissue was the most similar to human tissue in many of the measured parameters. Although no single analogue was a precise predictor of human decomposition in soil, all models offered close approximations of decomposition dynamics.
Abstract:
Tin in its oxide form, alone or doped with other metals, has been extensively used as a gas sensor; this work therefore reports the preparation and kinetic parameters for the thermal decomposition of Sn(II)-ethylenediaminetetraacetate as a precursor to SnO2. Knowledge of the kinetic model for the thermal decomposition of the tin complex may make it possible to predict whether thin films of SnO2 can be obtained using Sn(II)-EDTA as a precursor, as well as the influence of added dopants. The soluble Sn(II)-EDTA complex was prepared in aqueous medium by adding an acidic tin(II) chloride solution to an equimolar amount of the ammonium salt of EDTA under an N2 atmosphere at 50 °C, raising the pH to approximately 4. The compound was crystallized in ethanol at low temperature and filtered to eliminate chloride ions, yielding the heptacoordinated chelate with the composition H2SnH2O(CH2N(CH2COO)2)2·0.5H2O. Results from TG, DTG and DSC curves under inert and oxidizing atmospheres indicate the presence of water coordinated to the metal and show that the ethylenediamine fraction is thermally more stable than the carboxylate groups. The final residue from the thermal decomposition was SnO2, characterized by X-ray diffraction as a tetragonal rutile phase. Applying the isoconversional Flynn-Wall-Ozawa method to the DSC curves, average activation energies of Ea = 183.7 ± 12.7 and 218.9 ± 2.1 kJ mol⁻¹ and pre-exponential factors of log A = 18.85 ± 0.27 and 19.10 ± 0.27 min⁻¹ (95% confidence level) were obtained for the loss of coordinated water and the thermal decomposition of the carboxylate groups, respectively. Ea and log A could also be obtained by applying the isoconversional Flynn-Wall method to the TG curves. From the Ea and log A values, the Dollimore and Málek procedures could be applied, suggesting R3 (contracting volume) and SB (two-parameter) models as the kinetic models for the loss of coordinated water (177-244 °C) and the thermal decomposition of the carboxylate groups (283-315 °C), respectively. Simulated and experimental normalized DTG and DSC curves, together with analysis of residuals, confirm these kinetic models.
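For context, the Flynn-Wall-Ozawa method mentioned above estimates the activation energy at a fixed conversion from the slope of ln(heating rate) versus 1/T across runs at several heating rates, using Doyle's approximation (Ea ≈ -R·slope/1.052). A minimal sketch with invented heating rates and temperatures, not the paper's data:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def fwo_activation_energy(heating_rates, temps_at_alpha):
    """Flynn-Wall-Ozawa estimate of Ea (kJ/mol) at one fixed conversion.

    heating_rates: heating rates beta (K/min), one per run.
    temps_at_alpha: temperature (K) at which each run reaches the chosen conversion.
    Uses Doyle's approximation: ln(beta) = const - 1.052 * Ea / (R * T).
    """
    x = 1.0 / np.asarray(temps_at_alpha, dtype=float)
    y = np.log(np.asarray(heating_rates, dtype=float))
    slope, _ = np.polyfit(x, y, 1)          # linear fit of ln(beta) vs 1/T
    return -slope * R / 1.052 / 1000.0      # convert J/mol to kJ/mol

# Invented example: temperatures (K) at 50% conversion for four heating rates
betas = [5.0, 10.0, 15.0, 20.0]             # K/min
t_50 = [478.0, 489.0, 496.0, 501.0]         # K (illustrative values only)

print(f"Ea at alpha = 0.5: {fwo_activation_energy(betas, t_50):.0f} kJ/mol")
```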
Abstract:
A number of hydrological, botanical, and macro- and micro-climatological processes are involved in the formation of patterned peatlands. La Grande Tsa, at 2336 m a.s.l., is probably the highest bog in the central Swiss Alps and is unique in its pattern. In two of five pools there is an unconformity, in the contact zone between the basal peat and the overlying gyttja, in the depth-age models based on radiocarbon dates. Palynostratigraphies of cores from a ridge and a pool confirm the occurrence of an unconformity in the contact zone. We conclude that deepening of the pools results from decomposition of peat. The fact that the dated unconformities in the two pools and the unconformity in the ridge core all fall within the Bronze Age suggests they were caused by events external to the bog. We hypothesize that early transhumance resulted in anthropogenic lowering of the timberline, which led to a reduction in the leaf-area index and evapotranspiration, and thus to higher water levels and pool formation.
Abstract:
Ordinal logistic regression models are used to analyze dependent variables with multiple outcomes that can be ranked, but they have been underutilized. In this study, we describe four logistic regression models for analyzing an ordinal response variable. In this methodological study, the four regression models are proposed: the first is the multinomial logistic model; the second is the adjacent-category logit model; the third is the proportional odds model; and the fourth is the continuation-ratio model. We illustrate and compare the fit of these models using data from the survey designed by the University of Texas School of Public Health research project PCCaSO (Promoting Colon Cancer Screening in people 50 and Over), to study patients' confidence in the completion of colorectal cancer screening (CRCS). The purpose of this study is twofold: first, to provide a synthesized review of models for analyzing data with an ordinal response, and second, to evaluate their usefulness in epidemiological research, with particular emphasis on model formulation, interpretation of model coefficients, and their implications. The four ordinal logistic models used in this study are (1) the multinomial logistic model, (2) the adjacent-category logistic model [9], (3) the continuation-ratio logistic model [10], and (4) the proportional odds logistic model [11]. We recommend that the analyst perform (1) goodness-of-fit tests and (2) sensitivity analysis by fitting and comparing different models.
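As a concrete companion to the proportional odds model listed above, here is a minimal, self-contained sketch that fits it by maximum likelihood: cumulative logits logit P(Y ≤ j | x) = θ_j − xβ with ordered cutpoints θ_1 < … < θ_{J−1} and a single shared coefficient vector. The data are simulated; this illustrates the model form only, not the PCCaSO analysis:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulate an ordinal outcome with 4 categories driven by two covariates
n, J = 500, 4
X = rng.normal(size=(n, 2))
true_beta, true_cuts = np.array([1.0, -0.7]), np.array([-1.0, 0.3, 1.5])
eta = X @ true_beta
cum_p = expit(true_cuts[None, :] - eta[:, None])          # P(Y <= j | x) for j = 0..J-2
y = (rng.uniform(size=n)[:, None] > cum_p).sum(axis=1)    # categories 0..J-1

def neg_log_lik(params, X, y, J):
    """Proportional odds NLL; cutpoints parameterized as theta_1 plus positive increments."""
    k = X.shape[1]
    beta = params[:k]
    cuts = np.cumsum(np.concatenate(([params[k]], np.exp(params[k + 1:]))))
    eta = X @ beta
    # Cumulative probabilities P(Y <= j), padded with 0 and 1 at the ends
    cdf = np.column_stack([np.zeros(len(y)),
                           expit(cuts[None, :] - eta[:, None]),
                           np.ones(len(y))])
    probs = cdf[np.arange(len(y)), y + 1] - cdf[np.arange(len(y)), y]
    return -np.sum(np.log(np.clip(probs, 1e-12, None)))

x0 = np.zeros(X.shape[1] + (J - 1))
fit = minimize(neg_log_lik, x0, args=(X, y, J), method="BFGS")
beta_hat = fit.x[:2]
cuts_hat = np.cumsum(np.concatenate(([fit.x[2]], np.exp(fit.x[3:]))))
print("beta estimates:", np.round(beta_hat, 2), " cutpoints:", np.round(cuts_hat, 2))
```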
Abstract:
The ultimate problem considered in this thesis is modeling a high-dimensional joint distribution over a set of discrete variables. For this purpose, we consider classes of context-specific graphical models and the main emphasis is on learning the structure of such models from data. Traditional graphical models compactly represent a joint distribution through a factorization justified by statements of conditional independence which are encoded by a graph structure. Context-specific independence is a natural generalization of conditional independence that only holds in a certain context, specified by the conditioning variables. We introduce context-specific generalizations of both Bayesian networks and Markov networks by including statements of context-specific independence which can be encoded as a part of the model structures. For the purpose of learning context-specific model structures from data, we derive score functions, based on results from Bayesian statistics, by which the plausibility of a structure is assessed. To identify high-scoring structures, we construct stochastic and deterministic search algorithms designed to exploit the structural decomposition of our score functions. Numerical experiments on synthetic and real-world data show that the increased flexibility of context-specific structures can more accurately emulate the dependence structure among the variables and thereby improve the predictive accuracy of the models.
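Decomposable Bayesian structure scores of the kind referred to above are typically built from per-node marginal likelihoods under a Dirichlet prior, so the total score is a sum of local terms that a search algorithm can update incrementally. A minimal sketch of such a local Dirichlet-multinomial score (a standard BDeu-style term, not necessarily the exact score derived in the thesis):

```python
import numpy as np
from itertools import product
from math import lgamma

def local_score(data, child, parents, arities, ess=1.0):
    """Log marginal likelihood of one node given its parents (Dirichlet-multinomial).

    data: integer array (n_samples, n_vars); arities: number of states per variable.
    ess:  equivalent sample size of the symmetric Dirichlet prior (BDeu-style).
    """
    r = arities[child]
    q = int(np.prod([arities[p] for p in parents])) if parents else 1
    alpha_j, alpha_jk = ess / q, ess / (q * r)
    score = 0.0
    parent_states = product(*[range(arities[p]) for p in parents]) if parents else [()]
    for state in parent_states:
        rows = np.ones(len(data), dtype=bool)
        for p, s in zip(parents, state):
            rows &= data[:, p] == s
        counts = np.bincount(data[rows, child], minlength=r)
        n_j = counts.sum()
        score += lgamma(alpha_j) - lgamma(alpha_j + n_j)
        score += sum(lgamma(alpha_jk + c) - lgamma(alpha_jk) for c in counts)
    return score

# Toy example: does variable 2 score better with parent {0} than with no parents?
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 300)
x1 = rng.integers(0, 2, 300)
x2 = (x0 ^ (rng.random(300) < 0.1)).astype(int)   # x2 mostly copies x0
data = np.column_stack([x0, x1, x2])
arities = [2, 2, 2]
print("score with parent {0}:", round(local_score(data, 2, [0], arities), 1))
print("score with no parents:", round(local_score(data, 2, [], arities), 1))
```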
Abstract:
A temperature pause introduced in a simple single-step thermal decomposition of an iron precursor, in the presence of silver seeds formed in the same reaction mixture, gives rise to novel compact heterostructures: brick-like Ag@Fe3O4 core-shell nanoparticles. This novel method is relatively easy to implement and could help overcome the challenge of obtaining a multifunctional heteroparticle in which a noble metal is surrounded by magnetite. Structural analyses of the samples show 4 nm silver nanoparticles wrapped within compact cubic external structures of Fe oxide with a curious rectangular shape. The magnetic properties indicate nearly superparamagnetic behavior with weak hysteresis at room temperature. The value of the anisotropy involved makes these particles candidates for potential applications in nanomedicine.
Abstract:
The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width) and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
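As an illustration of the comparison described above, here is a minimal sketch of a one-way repeated-measures ANOVA across three measurement methods using statsmodels' AnovaRM. The data, variable names, and effect sizes are invented for illustration and are not those of the study:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)

# Invented intercanine widths (mm) for 10 cases, each measured by three methods
n_cases = 10
true = rng.normal(34.0, 1.0, n_cases)
records = []
for method, bias in [("plaster", 0.0), ("photocopy", 0.03), ("digital", -0.02)]:
    for case, base in enumerate(true):
        records.append({"case": case,
                        "method": method,
                        "width_mm": base + bias + rng.normal(0.0, 0.1)})
df = pd.DataFrame(records)

# One-way repeated-measures ANOVA: the same cases are measured by each method
result = AnovaRM(data=df, depvar="width_mm", subject="case", within=["method"]).fit()
print(result.anova_table)  # F test for a method effect, compared against alpha = 0.05
```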
Abstract:
The enzyme purine nucleoside phosphorylase from Schistosoma mansoni (SmPNP) is an attractive molecular target for the treatment of major parasitic infectious diseases, with special emphasis on its role in the discovery of new drugs against schistosomiasis, a tropical disease that affects millions of people worldwide. In the present work, we have determined the inhibitory potency and developed descriptor- and fragment-based quantitative structure-activity relationships (QSAR) for a series of 9-deazaguanine analogs as inhibitors of SmPNP. Significant statistical parameters (descriptor-based model: r² = 0.79, q² = 0.62, r²pred = 0.52; fragment-based model: r² = 0.95, q² = 0.81, r²pred = 0.80) were obtained, indicating the potential of the models for untested compounds. The fragment-based model was then used to predict the inhibitory potency of a test set of compounds, and the predicted values are in good agreement with the experimental results.
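The q² statistic quoted above is conventionally the leave-one-out cross-validated coefficient of determination, q² = 1 − PRESS/SS, where PRESS sums the squared prediction errors with each compound left out in turn. A minimal sketch using an ordinary least-squares model on invented descriptor data (not the paper's descriptors, compounds, or modeling method):

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q2 = 1 - PRESS / SS for a linear QSAR model."""
    X = np.column_stack([np.ones(len(y)), X])        # add intercept column
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ coef) ** 2
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

# Invented example: 20 compounds, 3 descriptors, pIC50-like activities
rng = np.random.default_rng(7)
X = rng.normal(size=(20, 3))
y = 5.0 + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(0.0, 0.2, 20)

print(f"q2 (LOO) = {loo_q2(X, y):.2f}")
```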
Abstract:
The aim of this study was to determine the reproducibility, reliability and validity of measurements made on digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all teeth present, the intercanine, interpremolar and intermolar distances, and overjet and overbite. The plaster models were measured using a digital vernier caliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in orthodontics.
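For reference, intra- and inter-examiner ICC values such as those reported above are commonly computed from a two-way ANOVA decomposition. A minimal sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement form, on an invented ratings matrix (the study's exact ICC variant is not stated, so this form is an assumption):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    ratings: array of shape (n_subjects, k_raters), e.g. the same distance measured
    on each model by different examiners (or in different sessions).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)           # between subjects
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)           # between raters
    ss_err = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))                              # residual

    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented example: 8 intercanine widths (mm) measured by two examiners
examiner_a = np.array([33.9, 34.2, 35.1, 33.4, 34.8, 34.0, 33.6, 34.5])
examiner_b = examiner_a + np.array([0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 0.0, -0.1])

print(f"ICC(2,1) = {icc2_1(np.column_stack([examiner_a, examiner_b])):.2f}")
```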
Abstract:
The decomposition of peroxynitrite to nitrite and dioxygen at neutral pH follows complex kinetics, compared to its isomerization to nitrate at low pH. Decomposition may involve radicals or proceed by way of the classical peracid decomposition mechanism. Peroxynitrite (ONOOH/ONOO-) decomposition has been proposed to involve formation of peroxynitrate (O2NOOH/O2NOO-) at neutral pH (D. Gupta, B. Harish, R. Kissner and W. H. Koppenol, Dalton Trans., 2009, DOI: 10.1039/b905535e, see accompanying paper in this issue). Peroxynitrate is unstable and decomposes to nitrite and dioxygen. This study aimed to investigate whether O2NOO- formed upon ONOOH/ONOO- decomposition generates singlet molecular oxygen [O2(1Δg)]. As unequivocally revealed by the measurement of monomol light emission in the near-infrared region at 1270 nm and by chemical trapping experiments, the decomposition of ONOO- or O2NOOH at neutral to alkaline pH generates O2(1Δg) at yields of ca. 1% and 2-10%, respectively. Characteristic light emission, corresponding to O2(1Δg) monomolecular decay, was observed for ONOO- and for O2NOOH prepared by reaction of H2O2 with NO2BF4 and of H2O2 with NO2- in HClO4. The generation of O2(1Δg) from ONOO- increased in a concentration-dependent manner in the range of 0.1-2.5 mM and was dependent on pH, giving a sigmoid profile with an apparent pKa around pD 8.1 (pH 7.7). Taken together, our results clearly identify the generation of O2(1Δg) from peroxynitrate [O2NOO- -> NO2- + O2(1Δg)] generated from peroxynitrite and also from the reactions of H2O2 with either NO2BF4 or NO2- in acidic media.
Abstract:
Functional magnetic resonance imaging (fMRI) has become an important tool in neuroscience due to its noninvasive nature and high spatial resolution compared to other methods such as PET or EEG. Characterization of neural connectivity has been the aim of several cognitive studies, as the interactions among cortical areas lie at the heart of many brain dysfunctions and mental disorders. Several methods, such as correlation analysis, structural equation modeling, and dynamic causal models, have been proposed to quantify connectivity strength. An important concept related to connectivity modeling is Granger causality, which is one of the most popular definitions for the measure of directional dependence between time series. In this article, we propose the application of partial directed coherence (PDC) for the connectivity analysis of multisubject fMRI data using a multivariate bootstrap. PDC is a frequency-domain counterpart of Granger causality and has become a very prominent tool in EEG studies. The resulting frequency decomposition of connectivity is useful for separating interactions between neural modules from those originating in scanner noise, breathing, and heartbeat. A real fMRI dataset of six subjects executing a language-processing protocol was used for the connectivity analysis. Hum Brain Mapp 30:452-461, 2009.
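PDC as used above is computed from the coefficients of a vector autoregressive (VAR) model: with Ā(f) = I − Σ_r A_r e^(−i2πfr), the PDC from series j to series i at frequency f is |Ā_ij(f)| normalized by the norm of column j of Ā(f). A minimal sketch on simulated series, using statsmodels to fit the VAR; this is illustrative only, and the multivariate bootstrap across subjects used in the paper is not shown:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)

# Simulate two coupled series: series 1 drives series 2, but not the other way round
n = 1000
x = np.zeros((n, 2))
for t in range(2, n):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal(scale=1.0)
    x[t, 1] = 0.4 * x[t - 1, 1] + 0.5 * x[t - 1, 0] + rng.normal(scale=1.0)

# Fit a VAR(p) model; results.coefs has shape (p, k, k) with A_r = coefs[r-1]
p = 2
results = VAR(x).fit(p)
A = results.coefs

def pdc(A, freqs):
    """Partial directed coherence |Abar_ij(f)| / ||Abar_.j(f)|| at each frequency.

    A: VAR coefficient matrices, shape (p, k, k). Returns an array (len(freqs), k, k),
    where entry [f, i, j] quantifies the influence of series j on series i at frequency f.
    """
    p, k, _ = A.shape
    out = np.empty((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        Abar = np.eye(k, dtype=complex)
        for r in range(1, p + 1):
            Abar -= A[r - 1] * np.exp(-2j * np.pi * f * r)
        out[fi] = np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0, keepdims=True))
    return out

freqs = np.linspace(0.01, 0.5, 50)          # normalized frequencies (cycles/sample)
P = pdc(A, freqs)
print("mean PDC series 1 -> 2:", P[:, 1, 0].mean().round(2))   # coupled direction: high
print("mean PDC series 2 -> 1:", P[:, 0, 1].mean().round(2))   # uncoupled direction: low
```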