982 results for CHEMICAL-BOND ANALYSIS


Relevance:

30.00%

Publisher:

Abstract:

The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. The development of new efficient methods is therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. The first method, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but works very well for the attempted high-dimensional models involving more than 20 chemical species. The instability of reaction networks may lead to oscillatory behaviour. We therefore investigate some criteria for their stability using convex coordinates and quantifier-elimination techniques.
We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore biochemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
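The two-dimensional case conveys the core idea behind detecting Hopf bifurcation fixed points symbolically: find parameter values where the Jacobian at a fixed point acquires a purely imaginary pair of eigenvalues. A minimal sketch in Python with SymPy, using the textbook Brusselator network rather than any of the thesis's benchmark models, and ordinary rather than convex coordinates:

```python
import sympy as sp

# Brusselator reaction network (a standard textbook example, not one of
# the thesis's models): x' = a - (b+1)x + x^2 y, y' = b x - x^2 y.
x, y, a, b = sp.symbols('x y a b', positive=True)
f = a - (b + 1) * x + x**2 * y
g = b * x - x**2 * y

# Fixed point and Jacobian evaluated at it.
fp = sp.solve([f, g], [x, y], dict=True)[0]        # x* = a, y* = b/a
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)],
               [sp.diff(g, x), sp.diff(g, y)]]).subs(fp)

# In 2D a Hopf bifurcation requires trace(J) = 0 with det(J) > 0:
# a complex-conjugate eigenvalue pair crosses the imaginary axis.
hopf_b = sp.solve(sp.Eq(J.trace(), 0), b)[0]
assert sp.simplify(J.det().subs(b, hopf_b)) == a**2  # positive for a > 0
print(hopf_b)                                        # prints a**2 + 1
```

The full HoCoQ/HoCaT machinery exists precisely because such direct eigenvalue conditions become intractable as the dimension grows; this sketch only illustrates what a Hopf condition looks like as a first-order formula over the reals.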

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of major feed resources was conducted in four crop-livestock mixed farming systems of central southern Ethiopia with 90 farmers, selected using multi-stage purposive and random sampling methods. Discussions were held with focus groups and key informants for vernacular-name identification of feeds, followed by feed sampling to analyse chemical composition (CP, ADF and NDF) and in-vitro dry matter digestibility (IVDMD), and to correlate these with indigenous technical knowledge (ITK). Native pastures, crop residues (CR) and multi-purpose trees (MPT) are the major feed resources and demonstrated great variation in seasonality, chemical composition and IVDMD. The average CP, NDF and IVDMD values for grasses were 83.8 (range: 62.9–190), 619 (range: 357–877) and 572 (range: 317–743) g kg^(−1) DM, respectively. Likewise, the average CP, NDF and IVDMD for CR were 58 (range: 20–90), 760 (range: 340–931) and 461 (range: 285–637) g kg^(−1) DM, respectively. Generally, the MPT and non-conventional feeds (NCF, Ensete ventricosum and Ipomoea batatas) possessed higher CP (range: 155–164 g kg^(−1) DM) and IVDMD (611–657 g kg^(−1) DM) values and lower NDF (331–387 g kg^(−1) DM) and ADF (321–344 g kg^(−1) DM) values. The MPT and NCF were ranked as the most nutritious feeds by ITK, while crop residues were ranked the least. This study indicates that there are remarkable variations within and among forage resources in terms of chemical composition. There were also complementarities between ITK and feed laboratory results, and thus ITK needs to be taken into consideration in the evaluation of local feed resources.

Relevance:

30.00%

Publisher:

Abstract:

A targeted, stimuli-responsive, polymeric drug delivery vehicle is being developed in our lab to help alleviate the severe side effects caused by narrow-therapeutic-window drugs. Targeting specific cell types or organs via proteins, specifically lectin-mediated targeting, holds potential due to the high specificity and affinity of receptor-ligand interactions, rapid internalization, and relative ease of processing. Dextran, a commercially available, biodegradable polymer, has been conjugated to doxorubicin and galactosamine to target hepatocytes in a three-step, one-pot synthesis. The loading of doxorubicin and galactose on the conjugates was determined by absorbance at 485 nm and elemental analysis, respectively. Conjugation efficiency, based on the amount of each reactant loaded, varies from 20% to 50% for doxorubicin and from 2% to 20% for galactosamine. Doxorubicin has also been attached to dextran through an acid-labile hydrazide bond. Doxorubicin acts by intercalating with DNA in the nuclei of cells, and its fluorescence is quenched when it binds to DNA. This allows a fluorescence-based, cell-free assay to evaluate the efficacy of the polymer conjugates, in which we measure the fluorescence of doxorubicin and the conjugates in increasing concentrations of calf thymus DNA. Fluorescence quenching indicates that our conjugates can bind to DNA; the degree of binding increases with polymer molecular weight and substitution of doxorubicin. In cell culture experiments with hepatocytes, the relative uptake of polymer conjugates was evaluated using flow cytometry, and the killing efficiency was determined using the MTT cell proliferation assay. We have found that conjugate uptake is much lower than that of free doxorubicin. Lower uptake of conjugates may increase the maximum dose of drug tolerated by the body. Also, non-galactosylated conjugate uptake is lower than that of the galactosylated conjugate.
Microscopy indicates that doxorubicin localizes almost exclusively at the nucleus, whereas the conjugates are present throughout the cell. Doxorubicin linked to dextran through a hydrazide bond was used to achieve improved killing efficiency. Following uptake, the doxorubicin dissociates from the polymer in an endosomal compartment and diffuses to the nucleus. The LC₅₀ of covalently linked doxorubicin is 7.4 μg/mL, whereas that of hydrazide linked doxorubicin is 4.4 μg/mL.

Relevance:

30.00%

Publisher:

Abstract:

Compositional data arise naturally from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper demonstrates work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open-source package that makes very powerful statistical facilities available at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example, that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
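The library described here is written in R; as a minimal language-neutral illustration of the log-ratio idea it builds on, the centred log-ratio (clr) transform can be sketched in Python with NumPy (the oxide compositions below are hypothetical, not data from the paper):

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform: each compositional row x (strictly
    positive parts) is mapped to log(x_i / g(x)), g = geometric mean."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Two hypothetical three-part ceramic compositions closed to 1.
X = np.array([[0.60, 0.25, 0.15],
              [0.55, 0.30, 0.15]])
Z = clr(X)

# clr rows sum to zero, so standard multivariate methods such as PCA or
# cluster analysis can be applied to Z instead of the constrained data.
assert np.allclose(Z.sum(axis=1), 0.0)
print(np.round(Z, 3))
```

This is exactly the bridge the paper exploits: after a log-ratio transform, traditional exploratory multivariate analysis applies without modification.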

Relevance:

30.00%

Publisher:

Abstract:

We shall call an n × p data matrix fully-compositional if the rows sum to a constant, and sub-compositional if the variables are a subset of a fully-compositional data set. Such data occur widely in archaeometry, where it is common to determine the chemical composition of ceramic, glass, metal or other artefacts using techniques such as neutron activation analysis (NAA), inductively coupled plasma spectroscopy (ICPS), X-ray fluorescence analysis (XRF), etc. Interest often centres on whether there are distinct chemical groups within the data and whether, for example, these can be associated with different origins or manufacturing technologies.
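The two definitions can be made concrete with a small check (a Python sketch; the percentages are hypothetical, not measured compositions):

```python
import numpy as np

def is_fully_compositional(X, tol=1e-9):
    """True if every row of the n x p matrix X sums to the same constant
    (the closure total, e.g. 1 for proportions or 100 for percentages)."""
    sums = np.asarray(X).sum(axis=1)
    return bool(np.all(np.abs(sums - sums[0]) < tol))

# A hypothetical XRF result in percent: fully-compositional ...
full = np.array([[60.0, 25.0, 15.0],
                 [50.0, 30.0, 20.0]])
# ... and a sub-compositional matrix keeping only the first two oxides.
sub = full[:, :2]

print(is_fully_compositional(full))  # True: every row sums to 100
print(is_fully_compositional(sub))   # False: rows sum to 85 and 80
```

Note that the definition of a sub-composition refers to how the variables were obtained (as a subset of a closed data set), not merely to whether the row sums happen to vary.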

Relevance:

30.00%

Publisher:

Abstract:

The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis make it possible to model compositional changes relative to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
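The two simplex operations named above can be sketched directly (Python, with hypothetical water compositions; perturbing by the inverse of a reference composition expresses change relative to that starting point, the same idea the non-centred PCA exploits):

```python
import numpy as np

def close(x):
    """Closure: rescale a positive vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation, the simplex analogue of vector addition."""
    return close(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(alpha, x):
    """Power transformation, the analogue of scalar multiplication."""
    return close(np.asarray(x, dtype=float) ** alpha)

# A hypothetical reference water composition and an observed sample.
ref = close([0.7, 0.2, 0.1])
obs = close([0.5, 0.3, 0.2])

# The compositional change relative to the reference point is the
# perturbation of obs by the inverse of ref ("obs minus ref").
change = perturb(obs, 1.0 / close(ref))
print(np.round(change, 3))
print(np.round(power(2.0, ref), 3))   # "doubling" the reference process
```

Perturbing the reference by the recovered change returns the observed composition, which is what makes linear-process modelling in the simplex consistent.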

Relevance:

30.00%

Publisher:

Abstract:

There are two principal chemical concepts that are important for studying the natural environment. The first is thermodynamics, which describes whether a system is at equilibrium or can spontaneously change by chemical reactions. The second is how fast chemical reactions (kinetics, or the rate of chemical change) take place once they start. In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH₄⁺, NO₂⁻ and NO₃⁻ in surface waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which natural and anthropic effects both contribute to strongly modifying the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involving the sequence NH₄⁺ → NO₂⁻ → NO₃⁻ under equilibrium conditions has allowed us to determine the Eh redox potential values able to characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H⁺ in solution, redox potential is used to express the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as those of river systems, are said to have a high redox potential because O₂ is available as an electron acceptor. Principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to achieve equilibrium because the products escape from the site of the reaction, or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions persist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic effects or to surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems.
With this in mind, data from 91 water samples have been modelled using statistical methodologies for compositional data. The application of log-contrast analysis has allowed us to obtain statistical parameters to be correlated with the calculated Eh values. In this way, natural conditions under which chemical equilibrium is hypothesised, as well as underlying fast reactions, are compared with those described by a stochastic approach.

Relevance:

30.00%

Publisher:

Abstract:

The chemical composition of sediments and rocks, as well as their distribution at the Martian surface, represents a long-term archive of the processes that have formed the planetary surface. A survey of chemical compositions by means of Compositional Data Analysis is a valuable tool for extracting direct evidence of weathering processes and allows weathering and sedimentation rates to be quantified. clr-biplot techniques are applied for the visualization of chemical relationships across the surface ("chemical maps"). The variability among individual suites of data is further analyzed by means of clr-PCA, in order to extract chemical alteration vectors between fresh rocks and their crusts and to assess the different source reservoirs accessible to soil formation. Both techniques are applied to elucidate the influence of remote weathering by combined analysis of several soil-forming branches. Vector analysis in the simplex provides the opportunity to study atmosphere-surface interactions, including the role and composition of volcanic gases.
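A minimal sketch of clr-PCA, assuming hypothetical four-part compositions rather than the Martian data: ordinary PCA (via the SVD) applied to clr-transformed rows, whose leading loading vector plays the role of the chemical alteration vector:

```python
import numpy as np

def clr(X):
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Four hypothetical four-part rock/soil compositions closed to 1
# (stand-ins for fresh-rock and crust analyses along one trend).
X = np.array([[0.50, 0.30, 0.15, 0.05],
              [0.48, 0.31, 0.16, 0.05],
              [0.40, 0.35, 0.20, 0.05],
              [0.38, 0.36, 0.21, 0.05]])

# clr-PCA: PCA on the clr-transformed, column-centred matrix via SVD;
# the rows of Vt are the biplot/loading axes.
Z = clr(X)
Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Vt[0] is the direction linking the two ends of the compositional
# trend, i.e. the "alteration vector" between fresh rock and crust.
print(np.round(explained, 3))
```

With data lying close to a single compositional trend, the first component dominates the explained variance, which is what makes a single alteration vector a meaningful summary.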

Relevance:

30.00%

Publisher:

Abstract:

Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted area (Ebro Delta) and a control area (Medas Islands). Since the chemical contents of a bio-indicator are essentially compositional data, the conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on adequate balance-coordinates allowed us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test equality of means of the balance-coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
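The balance-coordinate comparison can be sketched as follows (Python; the Dirichlet-simulated "populations" and the toxic/essential grouping are illustrative assumptions, not the shrew data):

```python
import numpy as np
from scipy import stats

def balance(X, num_idx, den_idx):
    """An ilr balance-coordinate contrasting two groups of parts:
    sqrt(r*s/(r+s)) * log(g(num parts) / g(den parts)), where g is the
    geometric mean and r, s are the group sizes."""
    r, s = len(num_idx), len(den_idx)
    g_num = np.log(X[:, num_idx]).mean(axis=1)
    g_den = np.log(X[:, den_idx]).mean(axis=1)
    return np.sqrt(r * s / (r + s)) * (g_num - g_den)

rng = np.random.default_rng(0)
# Hypothetical 3-part bone compositions (toxic, essential-1, essential-2);
# the "polluted" group carries proportionally more of the toxic metal.
polluted = rng.dirichlet([4.0, 8.0, 8.0], size=30)
control = rng.dirichlet([1.0, 8.0, 8.0], size=30)

# Compare the toxic-vs-essential balance between the two populations
# with a Mann-Whitney test, as used when normality cannot be assumed.
b_pol = balance(polluted, [0], [1, 2])
b_con = balance(control, [0], [1, 2])
u_stat, p_value = stats.mannwhitneyu(b_pol, b_con)
print(p_value < 0.05)
```

Testing on balances rather than raw concentrations respects the compositional constraint, which is the methodological point of the study.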

Relevance:

30.00%

Publisher:

Abstract:

This research project starts from the dynamics of the outsourced distribution model of a mass-consumption company in Colombia specializing in dairy products, referred to in this study as "Lactosa". Using panel data within a case study, two demand models are built by product category and distributor, and the relevant variables affecting their cost structures are identified through stochastic simulation. The problem is modelled from the income statement of each of the four distributors analysed in the central region of the country. The cost structure and sales behaviour are analysed given a logistic distribution margin (%), as a function of the relevant independent variables relating to the business, the market and the macroeconomic environment described in the object of study. Among other findings, notable gaps stand out in distribution costs and sales-force costs, despite the homogeneity of segments. The study identifies value drivers and the costs with the greatest individual dispersion, and suggests strategic alliances for some groups of distributors. Panel-data modelling identifies the relevant management variables that affect sales volume by category and distributor, focusing management's efforts. It is recommended to narrow the gaps and to promote, from the producer side, strategies focused on standardizing the distributors' internal processes, and to promote and replicate the analysis models without attempting to replace expert knowledge. Scenario building jointly and reliably strengthens the competitive position of the company and its distributors.

Relevance:

30.00%

Publisher:

Abstract:

Background: Genetic and epigenetic factors interacting with the environment over time are the main causes of complex diseases such as autoimmune diseases (ADs). Among the environmental factors are organic solvents (OSs), chemical compounds used routinely in commercial industries. Since controversy exists over whether ADs are caused by OSs, a systematic review and meta-analysis were performed to assess the association between OSs and ADs. Methods and Findings: The systematic search was done in the PubMed, SCOPUS, SciELO and LILACS databases up to February 2012. Any type of study that used accepted classification criteria for ADs and had information about exposure to OSs was selected. Out of a total of 103 articles retrieved, 33 were finally included in the meta-analysis. The final odds ratios (ORs) and 95% confidence intervals (CIs) were obtained by the random-effects model. A sensitivity analysis confirmed that the results were not sensitive to restrictions on the data included. Publication bias was trivial. Exposure to OSs was associated with systemic sclerosis, primary systemic vasculitis and multiple sclerosis individually, and also with all the ADs evaluated taken together as a single trait (OR: 1.54; 95% CI: 1.25-1.92; p-value: 0.001). Conclusion: Exposure to OSs is a risk factor for developing ADs. As a corollary, individuals with non-modifiable risk factors (i.e., familial autoimmunity or carrying genetic factors) should avoid any exposure to OSs in order to avoid increasing their risk of ADs.
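Random-effects pooling of study odds ratios of the kind reported here is commonly done with the DerSimonian-Laird estimator; the paper does not state which estimator it used, so the choice below is an assumption, and the study data are hypothetical:

```python
import numpy as np

def dersimonian_laird(log_or, se):
    """Pool per-study log-odds-ratios with the DerSimonian-Laird
    random-effects estimator; returns pooled OR and its 95% CI."""
    w = 1.0 / se**2                              # fixed-effect weights
    mu_fixed = np.sum(w * log_or) / np.sum(w)
    Q = np.sum(w * (log_or - mu_fixed)**2)       # Cochran's heterogeneity Q
    df = len(log_or) - 1
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / denom)            # between-study variance
    w_star = 1.0 / (se**2 + tau2)                # random-effects weights
    mu = np.sum(w_star * log_or) / np.sum(w_star)
    se_mu = 1.0 / np.sqrt(np.sum(w_star))
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

# Hypothetical per-study odds ratios and standard errors (not the 33
# studies actually meta-analysed in the paper).
log_or = np.log(np.array([1.8, 1.3, 1.6, 1.1, 2.0]))
se = np.array([0.30, 0.25, 0.35, 0.20, 0.40])
pooled, lo, hi = dersimonian_laird(log_or, se)
print(round(pooled, 2), round(lo, 2), round(hi, 2))
```

The between-study variance tau² widens the confidence interval relative to a fixed-effect analysis, which is why random-effects models are preferred when study populations are heterogeneous.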

Relevance:

30.00%

Publisher:

Abstract:

In this paper we use the most representative models in the literature on the term structure of interest rates. In particular, we explore affine one-factor models and polynomial-type approximations such as Nelson-Siegel. Our empirical application considers monthly data from the USA and Colombia for estimation and forecasting. We find that the affine models do not provide adequate performance either in-sample or out-of-sample. On the contrary, parsimonious models such as Nelson-Siegel perform adequately in-sample; however, out-of-sample they are not able to systematically improve upon the random-walk benchmark forecast.
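The Nelson-Siegel curve referred to above is a four-parameter function of maturity; a sketch with illustrative parameter values (not estimates from the paper's data):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield curve:
    y(tau) = b0 + b1*(1 - e^{-lam*tau})/(lam*tau)
                + b2*((1 - e^{-lam*tau})/(lam*tau) - e^{-lam*tau})."""
    x = lam * np.asarray(tau, dtype=float)
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Illustrative parameters: long-run level 6%, short end 2% below it,
# and a mild medium-term hump (hypothetical, not fitted values).
taus = np.array([0.25, 1.0, 5.0, 10.0])   # maturities in years
y = nelson_siegel(taus, beta0=0.06, beta1=-0.02, beta2=0.01, lam=0.5)
print(np.round(y, 4))                     # an upward-sloping curve
```

The parsimony the paper highlights is visible here: level, slope and curvature of the whole curve are captured by beta0, beta1 and beta2, with lam governing where the hump sits.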

Relevance:

30.00%

Publisher:

Abstract:

Previous results concerning radiative emission under laser irradiation of silicon nanopowder are reinterpreted in terms of thermal emission. A model is developed that considers the particles in the powder as independent, so that under vacuum the only dissipation mechanism is thermal radiation. The supralinear dependence observed between the intensity of the emitted radiation and the laser power is predicted by the model, as is the exponential quenching when the gas pressure around the sample increases. The analysis allows us to determine the sample temperature. The local heating of the sample has been assessed independently from the position of the transverse optical Raman mode. Finally, it is suggested that the photoluminescence observed in porous silicon and similar materials could, in some cases, be blackbody radiation.
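The model's logic can be sketched numerically: under vacuum the steady-state temperature follows from balancing absorbed laser power against thermal radiation (here approximated by a grey-body Stefan-Boltzmann law), and band emission then follows Planck's law, which is strongly convex in temperature and hence supralinear in laser power. All particle parameters below are hypothetical, not values from the paper:

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m s^-1
KB = 1.380649e-23        # Boltzmann constant, J K^-1

def steady_temperature(p_abs, area, emissivity=1.0, t_env=300.0):
    """Steady-state temperature when, as in the model under vacuum,
    thermal radiation is the only dissipation channel:
    p_abs = emissivity * SIGMA * area * (T^4 - T_env^4)."""
    return (p_abs / (emissivity * SIGMA * area) + t_env**4) ** 0.25

def planck_radiance(wavelength, T):
    """Blackbody spectral radiance (Planck's law)."""
    return (2 * H * C**2 / wavelength**5) / np.expm1(H * C / (wavelength * KB * T))

# A hypothetical nanoparticle, absorbed power swept over a decade.
area = 1e-14                                   # radiating area, m^2
powers = np.array([1e-9, 2e-9, 4e-9, 8e-9])    # absorbed laser power, W
T = steady_temperature(powers, area)

# Emission at 800 nm rises much faster than linearly with the absorbed
# power: the supralinear dependence reported for the nanopowder.
I = planck_radiance(800e-9, T)
print(T.round().astype(int))
print(I[-1] / I[0] > 8)   # an 8x power increase gives a >8x emission gain
```

The convexity of the Planck exponential in T is what converts a modest temperature rise into a strongly supralinear intensity response.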

Relevance:

30.00%

Publisher:

Abstract:

A thorough critical analysis of the theoretical relationships between the bond-angle dispersion in a-Si, Δθ, and the width of the transverse optical Raman peak, Γ, is presented. It is shown that the discrepancies between them are drastically reduced when unified definitions for Δθ and Γ are used. This reduced dispersion in the predicted values of Δθ, together with the broad agreement with the scarce direct determinations of Δθ, is then used to analyze the strain energy in partially relaxed pure a-Si. It is concluded that defect annihilation does not contribute appreciably to the reduction of the a-Si energy during structural relaxation. In contrast, it can account for half of the crystallization energy, which can be as low as 7 kJ/mol in defect-free a-Si.

Relevance:

30.00%

Publisher:

Abstract:

The computational approach to the Hirshfeld [Theor. Chim. Acta 44, 129 (1977)] atom in a molecule is critically investigated, and several difficulties are highlighted. It is shown that these difficulties are mitigated by an alternative, iterative version of the Hirshfeld partitioning procedure. The iterative scheme ensures that the Hirshfeld definition represents a mathematically proper information entropy, allows the Hirshfeld approach to be used for charged molecules, eliminates arbitrariness in the choice of the promolecule, and increases the magnitudes of the charges. The resulting "Hirshfeld-I charges" correlate well with atomic charges derived from the electrostatic potential.
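The iterative idea can be illustrated with a one-dimensional toy (a deliberate simplification: real Hirshfeld-I interpolates between integer-charge proatom densities, whereas this sketch merely rescales fixed proatom shapes by the current populations until self-consistency):

```python
import numpy as np

def gaussian(x, center, height):
    return height * np.exp(-(x - center) ** 2)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
integ = lambda f: f.sum() * dx                  # simple grid quadrature

# 1D toy "molecular density": two overlapping atomic peaks at +/-1,
# with more electron density on the left atom.
rho_mol = gaussian(x, -1.0, 1.2) + gaussian(x, 1.0, 0.8)

# Unit-electron proatom shapes (normalized Gaussians).
shape_a = gaussian(x, -1.0, 1.0); shape_a /= integ(shape_a)
shape_b = gaussian(x, 1.0, 1.0); shape_b /= integ(shape_b)

# Iterative Hirshfeld-like loop: partition with stockholder weights,
# then rebuild the promolecule from the populations just obtained.
n_a = n_b = integ(rho_mol) / 2                  # start from equal sharing
for _ in range(200):
    pro_a, pro_b = n_a * shape_a, n_b * shape_b
    w_a = pro_a / (pro_a + pro_b)               # Hirshfeld weight function
    n_a_new = integ(w_a * rho_mol)
    n_b_new = integ((1.0 - w_a) * rho_mol)
    if abs(n_a_new - n_a) < 1e-12:
        break
    n_a, n_b = n_a_new, n_b_new
print(round(n_a, 3), round(n_b, 3))             # atom-in-molecule populations
```

At self-consistency the proatoms carry the same populations as the atoms-in-the-molecule they define, which is the property that removes the arbitrariness of the promolecule in the full Hirshfeld-I scheme.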