967 results for eutectoid decomposition
Abstract:
A conceptually new approach is introduced for the decomposition of the molecular energy calculated at the density functional theory (DFT) level into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and gives values unexpectedly close to those calculated with exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
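To make the "fuzzy atoms" idea concrete: a common realization of such a smooth space partitioning is Becke's scheme, in which every point in space carries a weight for each atom and the weights sum to one. The sketch below is a minimal Python illustration of that kind of partitioning, not the authors' actual scheme (which additionally involves the bond order density); all names are ours.

```python
import numpy as np

def becke_step(mu, k=3):
    # Becke's iterated polynomial switching function: maps mu in [-1, 1]
    # to a smooth step; k iterations sharpen the transition.
    for _ in range(k):
        mu = 1.5 * mu - 0.5 * mu**3
    return mu

def fuzzy_weights(r, centers):
    # Weight w_A(r) of each atom A at point r: product of pairwise
    # switching functions, normalised so that sum_A w_A(r) = 1 everywhere.
    n = len(centers)
    cell = np.ones(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            ra = np.linalg.norm(r - centers[a])
            rb = np.linalg.norm(r - centers[b])
            rab = np.linalg.norm(centers[a] - centers[b])
            mu = (ra - rb) / rab          # confocal elliptical coordinate
            cell[a] *= 0.5 * (1.0 - becke_step(mu))
    return cell / cell.sum()
```

Integrating an energy density against such weights (and products of weights for pairs of atoms) is what yields one- and two-atomic energy components.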
Abstract:
In this paper, we characterize the non-emptiness of the equity core (Selten, 1978) and provide a method, easy to implement, for computing the Lorenz-maximal allocations in the equal division core (Dutta-Ray, 1991). Both results are based on a geometrical decomposition of the equity core as a finite union of polyhedra. Keywords: Cooperative game, equity core, equal division core, Lorenz domination. JEL classification: C71
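Lorenz domination, the ordering behind Dutta and Ray's equal division core, is simple to check computationally: sort each allocation in ascending order and compare cumulative sums. A minimal sketch, assuming both allocations distribute the same total:

```python
import numpy as np

def lorenz_dominates(x, y):
    # x Lorenz-dominates y: after sorting ascending, every partial sum of x
    # is at least the corresponding partial sum of y, with at least one
    # strict inequality (both vectors are assumed to have the same total).
    cx = np.cumsum(np.sort(x))
    cy = np.cumsum(np.sort(y))
    return bool(np.all(cx >= cy - 1e-12) and np.any(cx > cy + 1e-12))
```

For example, lorenz_dominates([3, 3, 2], [4, 3, 1]) returns True: the first allocation is unambiguously more egalitarian.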
Abstract:
The Computational Biophysics Group at the Universitat Pompeu Fabra (GRIB-UPF) hosts two unique computational resources dedicated to the execution of large-scale molecular dynamics (MD) simulations: (a) the ACEMD molecular dynamics software, used on standard personal computers with graphics processing units (GPUs); and (b) the GPUGRID.net computing network, supported by users distributed worldwide who volunteer GPUs for biomedical research. We leveraged these resources and developed studies, protocols and open-source software to elucidate energetics and pathways of a number of biomolecular systems, with a special focus on flexible proteins with many degrees of freedom. First, we characterized ion permeation through the bactericidal model protein Gramicidin A, conducting one of the largest studies to date with the steered MD biasing methodology. Next, we addressed an open problem in structural biology, the determination of drug-protein association kinetics; we reconstructed the binding free energy and the association and dissociation rates of a drug-like model system through a spatial decomposition and a Markov-chain analysis. The work was published in the Proceedings of the National Academy of Sciences and became one of the few landmark papers elucidating a ligand-binding pathway. Furthermore, we investigated the unstructured Kinase Inducible Domain (KID), a 28-residue peptide central to signalling and transcriptional response; the kinetics of this challenging system was modelled with a Markovian approach in collaboration with Frank Noé's group at the Freie Universität Berlin. The impact of the funding includes three peer-reviewed publications in high-impact journals; three more papers under review; four MD analysis components released as open-source software; MD protocols; didactic material; and code for the hosting group.
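The Markov-chain analysis mentioned above is, at its core, the estimation of a transition matrix over a spatial discretization of the trajectories, from which stationary probabilities and kinetic rates follow. A minimal sketch of that generic step (not the published pipeline; names are ours):

```python
import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    # Count transitions at the given lag time and row-normalise.
    # Assumes every state is visited; empty rows are left as zeros.
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0
    return C / rows

def stationary_distribution(T):
    # Left eigenvector of T with eigenvalue 1, normalised to sum to 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

Association and dissociation rates then follow from quantities such as mean first-passage times between the bound and unbound state sets.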
Abstract:
To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data are based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments that are characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation with their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates seem to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
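The substitution the abstract describes — replacing real elastic moduli with complex viscoelastic equivalents — is often written, in the low-loss approximation, as M → M(1 + i/Q). A minimal sketch under that convention (the sign of the imaginary part depends on the Fourier-transform convention in use):

```python
import numpy as np

def viscoelastic_moduli(vp, vs, rho, Qp, Qs):
    # Replace the real elastic moduli by complex-valued equivalents using
    # the standard low-loss approximation M -> M * (1 + 1j/Q).
    mu = rho * vs**2 * (1 + 1j / Qs)   # complex shear modulus
    M = rho * vp**2 * (1 + 1j / Qp)    # complex P-wave modulus
    lam = M - 2 * mu                   # complex Lame parameter
    return lam, mu
```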
Abstract:
Little attention has been paid so far to the influence of the chemical nature of the substance when measuring δ15N by elemental analysis (EA)-isotope ratio mass spectrometry (IRMS). Although the bulk nitrogen isotope analysis of organic material is not to be questioned, literature from different disciplines using IRMS provides hints that the quantitative conversion of nitrate into nitrogen presents difficulties. We observed abnormal series of δ15N values of laboratory standards and nitrates. These unexpected results were shown to be related to the tailing of the nitrogen peak of nitrate-containing compounds. A series of experiments was set up to investigate the cause of this phenomenon, using ammonium nitrate (NH4NO3) and potassium nitrate (KNO3) samples, two organic laboratory standards, and the international secondary reference materials IAEA-N1 and IAEA-N2 (two ammonium sulphates, (NH4)2SO4) and IAEA-NO-3 (a potassium nitrate). In experiment 1, we used graphite and vanadium pentoxide (V2O5) as additives to observe whether they could enhance the decomposition (combustion) of nitrates. In experiment 2, we tested another elemental analyser configuration including an additional section of reduced copper, in order to see whether or not the tailing could originate from an incomplete reduction process. Finally, we modified several parameters of the method and observed their influence on the peak shape, the δ15N value and the nitrogen content (in weight percent) of the target substances. We found the best results using mere thermal decomposition in helium, under exclusion of any oxygen. We show that the analytical procedure used for organic samples should not be used for nitrates because of their different chemical nature. We present the best-performing set of sample-introduction parameters for the analysis of nitrates, as well as for the ammonium sulphate reference materials IAEA-N1 and IAEA-N2. We discuss these results considering the thermochemistry of the substances and the analytical technique itself. The results emphasise the difference in chemical nature between inorganic and organic samples, which necessarily involves distinct thermochemistry when analysed by EA-IRMS; therefore, they should not be processed using the same analytical procedure. This clearly impacts the way international secondary reference materials should be used for the calibration of organic laboratory standards.
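For reference, the quantity being measured is the per-mil deviation of the sample's 15N/14N ratio from that of atmospheric N2 (AIR). A minimal sketch of the conversion (the reference ratio is the conventional literature value, quoted here for illustration):

```python
def delta15N(R_sample, R_air=0.0036765):
    # delta-15N in per mil: relative deviation of the sample's 15N/14N
    # ratio from the AIR standard, multiplied by 1000.
    return (R_sample / R_air - 1.0) * 1000.0
```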
Abstract:
In this work we describe the use of bilinear statistical models as a means of factoring the shape variability into two components, attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, we can use the bilinear model to achieve this. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
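One standard formulation of such a bilinear factorization is Tenenbaum and Freeman's asymmetric style/content model, which has a closed-form fit via an SVD of a suitably stacked data matrix. A minimal sketch under that formulation (not necessarily the exact model variant used here; the array layout is ours):

```python
import numpy as np

def fit_asymmetric_bilinear(Y, J):
    # Y: (S, C, F) array of F-dimensional shape vectors for S subjects
    # ("style") observed at C cardiac phases ("content").
    # Model: y[s, c] ~= A_s @ b_c with A_s (F x J) and b_c (J,),
    # fitted in closed form via SVD of the stacked (S*F) x C matrix.
    S, C, F = Y.shape
    Ybar = Y.transpose(0, 2, 1).reshape(S * F, C)
    U, sv, Vt = np.linalg.svd(Ybar, full_matrices=False)
    A = (U[:, :J] * sv[:J]).reshape(S, F, J)  # subject-specific maps A_s
    B = Vt[:J, :]                             # phase vectors b_c (columns)
    return A, B
```

Reconstruction of subject s at phase c is then A[s] @ B[:, c], which is the operation behind the reconstruction-error figures quoted above.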
Abstract:
The paper first presents a 10-year outlook for major Asian dairy markets (China, India, Indonesia, Japan, South Korea, Malaysia, the Philippines, Thailand, and Vietnam) based on a world dairy model. Then, using Heien and Wessells's technique, dairy product consumption growth is decomposed into contributions generated by income growth, population growth, price change, and urbanization, and these contributions are quantified. Using the world dairy model, the paper also analyzes the impacts of alternative assumptions of higher income levels and technology development in Asia on Asian dairy consumption and world dairy prices. The outlook projects that Asian dairy consumption will continue to grow strongly in the next decade. The consumption decomposition suggests that the growth would be driven mostly by income and population growth and, as a result, would raise world dairy prices. The simulation results show that technology improvement in Asian countries would dampen world dairy prices while boosting domestic dairy consumption.
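Decompositions of this kind attribute consumption growth to its drivers through elasticity-weighted growth rates: to first order in logs, total consumption growth is population growth plus the sum of elasticity-times-growth terms for income, price and urbanization. A minimal sketch with hypothetical numbers (the elasticities below are illustrative, not estimates from the paper):

```python
def decompose_consumption_growth(elast, driver_growth, pop_growth):
    # First-order log decomposition of total consumption growth:
    #   g_total ~= g_pop + sum_k elast[k] * driver_growth[k]
    parts = {k: elast[k] * driver_growth[k] for k in elast}
    parts["population"] = pop_growth
    return parts, sum(parts.values())

# Hypothetical illustration: 5% income growth with income elasticity 0.8
# contributes 4 percentage points of consumption growth, and so on.
parts, total = decompose_consumption_growth(
    elast={"income": 0.8, "price": -0.4, "urbanization": 0.3},
    driver_growth={"income": 0.05, "price": 0.01, "urbanization": 0.02},
    pop_growth=0.01,
)
```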
Abstract:
AIMS: High-mobility group box 1 (HMGB1) is a nuclear protein actively secreted by immune cells and passively released by necrotic cells that initiates pro-inflammatory signalling through binding to the receptor for advanced glycation end-products. HMGB1 has been established as a key inflammatory mediator during myocardial infarction, but the proximal mechanisms responsible for myocardial HMGB1 expression and release in this setting remain unclear. Here, we investigated the possible involvement of peroxynitrite, a potent cytotoxic oxidant formed during myocardial infarction, in these processes. METHODS AND RESULTS: The ability of peroxynitrite to induce necrosis and HMGB1 release in vitro was evaluated in H9c2 cardiomyoblasts and in primary murine cardiac cells (myocytes and non-myocytes). In vivo, myocardial HMGB1 expression and nitrotyrosine content (a marker of peroxynitrite generation) were determined following myocardial ischaemia and reperfusion in rats, whereas peroxynitrite formation was inhibited by two different peroxynitrite decomposition catalysts: 5,10,15,20-tetrakis(4-sulphonatophenyl) porphyrinato iron (III) (FeTPPS) or Mn(III)-tetrakis(4-benzoic acid) porphyrin chloride (MnTBAP). In all types of cells studied, peroxynitrite (100 μM) elicited significant necrosis, the loss of intracellular HMGB1, and its passive release into the medium. In vivo, myocardial ischaemia-reperfusion induced significant myocardial necrosis, cardiac nitrotyrosine formation, and marked overexpression of myocardial HMGB1. FeTPPS reduced nitrotyrosine, decreased infarct size, and suppressed HMGB1 overexpression, an effect that was similarly obtained with MnTBAP. CONCLUSION: These findings indicate that peroxynitrite represents a key mediator of HMGB1 overexpression and release by cardiac cells and provide a novel mechanism linking myocardial oxidative/nitrosative stress with post-infarction myocardial inflammation.
The impotence of price controls: failed attempts to constrain pharmaceutical expenditures in Greece.
Abstract:
BACKGROUND: While the prices of pharmaceuticals are relatively low in Greece, expenditure on them is growing more rapidly than almost anywhere else in the European Union. OBJECTIVE: To describe and explain the rise in drug expenditures through decomposition of the increase into the contributions of changes in prices, changes in volumes, and a product-mix effect. METHODS: The decomposition of the growth in pharmaceutical expenditures in Greece over the period 1991-2006 was conducted using data from the largest social insurance fund (IKA), which covers more than 50% of the population. RESULTS: Real drug spending increased by 285%, despite a 58% decrease in the relative price of pharmaceuticals. The increase in expenditure is mainly attributable to a switch to more innovative, but more expensive, pharmaceuticals, indicated by a product-mix residual of 493% in the decomposition. A rising volume of drugs also plays a role, and this is due to an increase in the number of prescriptions issued per doctor visit, rather than an increase in the number of visits or the population size. CONCLUSIONS: Rising pharmaceutical expenditures are strongly determined by physicians' prescribing behaviour, which is not subject to any monitoring and for which there are no incentives to be cost conscious.
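A price/volume/product-mix decomposition of this kind can be reproduced generically: deflate the expenditure ratio by a price index and an aggregate volume index, and read the product-mix effect off as the residual. A minimal sketch with one common choice of indices (the paper's exact index formulas may differ):

```python
import numpy as np

def decompose_expenditure(p0, q0, p1, q1):
    # Splits expenditure growth into a price index, a volume index, and a
    # residual product-mix effect, so that E1/E0 = P * Q * mix.
    E0, E1 = p0 @ q0, p1 @ q1
    P = (p1 @ q0) / (p0 @ q0)      # Laspeyres price index
    Q = np.sum(q1) / np.sum(q0)    # aggregate volume (unweighted counts)
    mix = (E1 / E0) / (P * Q)      # residual: shift toward dearer products
    return P, Q, mix
```

A mix value well above 1 is the signature, reported above, of substitution toward more expensive products even as relative prices fall.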
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units; these are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with its contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
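All the methods named above share the same computational core: center (and optionally standardize) the data with row weights, take the SVD, and scale the two sets of singular vectors to obtain row and column coordinates. A minimal generic sketch of that core (Greenacre's standard biplot additionally rescales one set of points by their contributions; the scaling below is the plain principal/standard split):

```python
import numpy as np

def biplot_coordinates(X, row_weights=None):
    # Weighted-SVD biplot: rows in principal coordinates (so they optimally
    # represent the cases), columns in standard coordinates.
    n, m = X.shape
    w = np.full(n, 1.0 / n) if row_weights is None else row_weights
    Xc = X - w @ X                          # centre with row weights
    S = np.sqrt(w)[:, None] * Xc            # weight rows before the SVD
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(w)[:, None]      # row principal coordinates
    G = Vt.T                                # column standard coordinates
    return F[:, :2], G[:, :2], sv
```

By construction F @ G.T reproduces the centred data, which is the scalar-product interpretation the abstract refers to.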
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), with optimal priorities determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
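A concrete instance of fuzzy coding into three categories uses triangular membership functions with hinge points at, say, the minimum, median and maximum of the variable, so each value receives three memberships in [0, 1] that sum to one. A minimal sketch (this hinge choice is one common convention, not prescribed by the paper):

```python
import numpy as np

def fuzzy_code(x, lo=None, mid=None, hi=None):
    # Fuzzy-codes a continuous variable into three categories ("low",
    # "medium", "high") with triangular membership functions anchored at
    # three distinct hinge points; memberships lie in [0, 1] and sum to 1.
    x = np.asarray(x, dtype=float)
    lo = np.min(x) if lo is None else lo
    hi = np.max(x) if hi is None else hi
    mid = np.median(x) if mid is None else mid
    low = np.clip((mid - x) / (mid - lo), 0, 1)
    high = np.clip((x - mid) / (hi - mid), 0, 1)
    medium = 1.0 - low - high
    return np.column_stack([low, medium, high])
```

Defuzzification reverses this map, turning an estimated membership triple back into a value on the original scale, which is what makes the percentage-of-variance fit measure described above possible.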
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps due to the small grid spacing at the top and bottom of the model, a consequence of the Chebyshev operator, the mesh is stretched in the vertical direction. A major benefit of the Chebyshev operator is that it allows for an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, with the characteristic variables evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment with sealed-pore boundary conditions, where we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and surface waves in general.
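The vertical Chebyshev operator referred to above is a dense differentiation matrix on Gauss-Lobatto points; mesh stretching then amounts to a chain-rule rescaling of its rows by the derivative of the stretching map. A minimal sketch of the standard construction, following Trefethen's well-known recipe:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # x_j = cos(pi*j/N) (Trefethen, "Spectral Methods in MATLAB").
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal via negative row sums
    return D, x

# For a stretched vertical coordinate z = g(x), the chain rule gives
# d/dz = diag(1/g'(x)) @ D, which relaxes the time-step restriction.
```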
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
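The combination the paper arrives at — correspondence-analysis weights inside the logratio biplot — can be sketched directly: log-transform the table, double-center with the row and column margins as weights, and take a weighted SVD. A minimal sketch, assuming a strictly positive table (names are ours):

```python
import numpy as np

def weighted_logratio_biplot(N):
    # Weighted logratio analysis of a contingency table N: log-transform,
    # double-centre with row/column margins as weights, weighted SVD.
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    L = np.log(P)                       # assumes strictly positive table
    rowmean = L @ c                     # c-weighted mean of each row
    colmean = r @ L                     # r-weighted mean of each column
    total = r @ L @ c
    Lc = L - rowmean[:, None] - colmean[None, :] + total
    S = np.sqrt(r)[:, None] * Lc * np.sqrt(c)[None, :]
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]  # row principal coordinates
    G = Vt.T / np.sqrt(c)[:, None]      # column standard coordinates
    return F[:, :2], G[:, :2], sv
```

The margin weights are what restore distributional equivalence; the log-ratio transform is what preserves subcompositional coherence.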
Abstract:
An affine asset pricing model in which traders have rational but heterogeneous expectations about future asset prices is developed. We use the framework to analyze the term structure of interest rates and to perform a novel three-way decomposition of bond yields into (i) average expectations about short rates, (ii) common risk premia, and (iii) a speculative component due to heterogeneous expectations about the resale value of a bond. The speculative term is orthogonal to public information in real time and therefore statistically distinct from common risk premia. Empirically we find that the speculative component is quantitatively important, accounting for up to a percentage point of yields, even in the low-yield environment of the last decade. Furthermore, allowing for a speculative component in bond yields results in estimates of historical risk premia that are more volatile than suggested by standard affine Gaussian term structure models, which our framework nests.
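In reduced form, the three-way split described above can be written as follows (the notation is ours, chosen to mirror the abstract; \bar{E}_t denotes the average expectation across traders and \mathcal{I}_t the public information set):

```latex
y_t^{(n)} = \underbrace{\frac{1}{n}\sum_{i=0}^{n-1} \bar{E}_t\!\left[r_{t+i}\right]}_{\text{(i) average expected short rates}}
          + \underbrace{rp_t^{(n)}}_{\text{(ii) common risk premium}}
          + \underbrace{s_t^{(n)}}_{\text{(iii) speculative component}},
\qquad
E\!\left[\, s_t^{(n)} \mid \mathcal{I}_t \,\right] = 0 .
```

The orthogonality condition on the right is what makes the speculative term statistically distinct from the risk premium: it is unforecastable from public information in real time.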