944 results for Systematic derivation of monopole solutions
Abstract:
Relatively homogeneous oxygen isotope compositions of amphibole, clinopyroxene, and olivine separates (+5.2 to +5.7‰ relative to VSMOW) and neodymium isotope compositions (εNd(T) = -0.9 to -1.8 for primary magmatic minerals and εNd(T) = -0.1 and -0.5 for mineral separates from late-stage pegmatites and hydrothermal veins) from the alkaline to agpaitic Ilimaussaq intrusion, South Greenland, indicate a closed-system evolution of this igneous complex and support a mantle derivation of the magma. In contrast to the homogeneous oxygen and neodymium isotopic data, δD values for hand-picked amphibole separates vary between -92 and -232‰ and are among the most deuterium-depleted values known from igneous amphiboles. The calculated fluid phase coexisting with these amphiboles has a homogeneous oxygen isotopic composition within the normal range of magmatic waters, but extremely heterogeneous and low D/H ratios, implying a decoupling of the oxygen and hydrogen isotope systems. Of the several possibilities that can account for such unusually low δD values in amphiboles (e.g., late-stage hydrothermal exchange with meteoric water, extensive magmatic degassing, contamination with organic matter, and/or effects of Fe content and pressure on amphibole-water fractionation), the most likely explanation for the range in δD values is that the amphiboles have been influenced by secondary interaction and reequilibration with D-depleted fluids obtained through late-magmatic oxidation of internally generated CH₄ and/or H₂. This interpretation is consistent with the known occurrence of abundant magmatic CH₄ in the Ilimaussaq rocks and with previous studies on the isotopic compositions of the rocks and fluids. Copyright © 2004 Elsevier Ltd.
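For reference, the per mil values quoted above use the standard delta notation for stable isotope ratios relative to the VSMOW standard (a textbook definition, not taken from the paper itself):

```latex
% Standard delta notation (values in parts per thousand, ‰) used for the
% delta-18O and delta-D data quoted above, relative to the VSMOW standard.
\[
\delta^{18}\mathrm{O} =
\left(\frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
           {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{VSMOW}}} - 1\right)\times 10^{3},
\qquad
\delta\mathrm{D} =
\left(\frac{(\mathrm{D}/\mathrm{H})_{\mathrm{sample}}}
           {(\mathrm{D}/\mathrm{H})_{\mathrm{VSMOW}}} - 1\right)\times 10^{3}.
\]
```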
Abstract:
The mechanisms in the Nash program for cooperative games are made compatible with the framework of the theory of implementation. This is done through a reinterpretation of the characteristic function that avoids feasibility problems, thereby allowing an analysis that focuses exclusively on the payoff space. In this framework, we show that the core is the only major cooperative solution that is Maskin monotonic. Thus, implementation of most cooperative solutions must rely on refinements of the Nash equilibrium concept (like most papers in the Nash program do). Finally, the mechanisms in the Nash program are adapted into the model.
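For reference, the Maskin monotonicity property invoked above can be stated as follows; the notation is generic and not taken from the paper:

```latex
% Maskin monotonicity of a solution correspondence F: if a is selected at
% preference profile R, and no agent ranks a lower relative to any other
% alternative when moving from R to R', then a is still selected at R'.
\[
a \in F(R)
\;\text{and}\;
\bigl[\forall i,\ \forall b:\; a \succsim_i b \;\Rightarrow\; a \succsim_i' b\bigr]
\;\Longrightarrow\;
a \in F(R').
\]
```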
Abstract:
Aims: To provide 12-month prevalence and disability burden estimates of a broad range of mental and neurological disorders in the European Union (EU) and to compare these findings to previous estimates. Referring to our previous 2005 review, improved up-to-date data for the enlarged EU on a broader range of disorders than previously covered are needed for basic, clinical and public health research and policy decisions, and to inform about the estimated number of persons affected in the EU. Method: Stepwise multi-method approach, consisting of systematic literature reviews, reanalyses of existing data sets, national surveys and expert consultations. Studies and data from all member states of the European Union (EU-27) plus Switzerland, Iceland and Norway were included. Supplementary information about neurological disorders is provided, although methodological constraints prohibited the derivation of overall prevalence estimates for mental and neurological disorders. Disease burden was measured by disability-adjusted life years (DALY). Results: Prevalence: It is estimated that each year 38.2% of the EU population suffers from a mental disorder. Adjusted for age and comorbidity, this corresponds to 164.8 million persons affected. Compared to 2005 (27.4%), this higher estimate is entirely due to the inclusion of 14 new disorders also covering childhood/adolescence as well as the elderly. The estimated higher number of persons affected (2011: 165 m vs. 2005: 82 m) is due to coverage of childhood and old-age populations, new disorders and new EU member states. The most frequent disorders are anxiety disorders (14.0%), insomnia (7.0%), major depression (6.9%), somatoform disorders (6.3%), alcohol and drug dependence (>4%), ADHD (5%) in the young, and dementia (1-30%, depending on age). Except for substance use disorders and mental retardation, there were no substantial cultural or country variations. Although many sources, including national health insurance programs, reveal increases in sick leave, early retirement and treatment rates due to mental disorders, rates in the community have not increased, with a few exceptions (i.e. dementia). There were also no consistent indications of improvements with regard to low treatment rates, delayed treatment provision and grossly inadequate treatment. Disability: Disorders of the brain, and mental disorders in particular, contribute 26.6% of the total all-cause burden, a greater proportion than in other regions of the world. The rank order of the most disabling diseases differs markedly by gender and age group; overall, the four most disabling single conditions were depression, dementias, alcohol use disorders and stroke. Conclusion: In every year, over a third of the total EU population suffers from mental disorders. The true size of "disorders of the brain" including neurological disorders is considerably larger still. Disorders of the brain are the largest contributor to the all-cause morbidity burden as measured by DALY in the EU. No indications of increasing overall rates of mental disorders were found, nor of improved care and treatment since 2005; less than one third of all cases receive any treatment, suggesting a considerable level of unmet needs.
We conclude that the true size and burden of disorders of the brain in the EU was significantly underestimated in the past. Concerted priority action is needed at all levels, including substantially increased funding for basic, clinical and public health research in order to identify better strategies for improved prevention and treatment of disorders of the brain as the core health challenge of the 21st century. © 2011 Published by Elsevier B.V.
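For context, the DALY metric mentioned in the Methods combines years of life lost and years lived with disability; a standard (GBD-style) decomposition, not specific to this study, is:

```latex
% Disability-adjusted life years: years of life lost (YLL) plus years
% lived with disability (YLD). N: deaths; L: standard life expectancy at
% age of death; I: incident cases; DW: disability weight; L_d: average
% duration of the condition.
\[
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD},
\qquad
\mathrm{YLL} = N \times L,
\qquad
\mathrm{YLD} = I \times DW \times L_{d}.
\]
```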
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
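The selection-matrix idea described above can be illustrated with a minimal sketch: a binary matrix picks, for each data block, the rows of a common configuration that were actually observed, so the loss is evaluated on observed values only. The code below is an illustrative toy (the names, the loss, and the least-squares profiling of the loadings are assumptions, not the authors' algorithm):

```python
import numpy as np

def selection_matrix(observed_rows, n_total):
    """Binary matrix S (n_obs x n_total) that picks the observed rows of a
    common N-row configuration. Illustrative helper, not from the paper."""
    S = np.zeros((len(observed_rows), n_total))
    for i, r in enumerate(observed_rows):
        S[i, r] = 1.0
    return S

def gcca_loss(Z, blocks, selections):
    """Sum over blocks of the least-squares misfit between the selected rows
    of the common configuration Z and each block X_k B_k, with the loadings
    B_k profiled out by ordinary least squares. A toy stand-in for a
    generalized canonical correlation objective restricted to observed values."""
    total = 0.0
    for X, S in zip(blocks, selections):
        Zk = S @ Z                                   # rows of Z observed in block k
        Bk, *_ = np.linalg.lstsq(X, Zk, rcond=None)  # best loadings for this block
        total += np.sum((Zk - X @ Bk) ** 2)
    return total

# Example: 3 blocks observing different subsets of 10 objects, with
# different numbers of variables per block.
rng = np.random.default_rng(0)
N, p = 10, 2
Z = rng.standard_normal((N, p))   # common configuration (e.g. from an outer optimizer)
rows = [[0, 1, 2, 3, 4, 5], [2, 3, 4, 5, 6, 7, 8], [0, 4, 5, 8, 9]]
blocks = [rng.standard_normal((len(r), m)) for r, m in zip(rows, [3, 4, 2])]
selections = [selection_matrix(r, N) for r in rows]
print(gcca_loss(Z, blocks, selections))
```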
Abstract:
Background: Alcohol is a major risk factor for burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work done on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters. A large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, as well as sensitivity analyses which give an estimation of the number of samples required to determine the variance with predetermined precision, and which determine which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for variances of AAFs.
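A minimal sketch of the Monte Carlo approach described above, assuming a gamma model for drinkers' consumption and an illustrative log-linear relative-risk function (all numerical values are placeholders, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def aaf(mean_cons, sd_cons, p_drinkers, beta):
    """Alcohol-attributable fraction for one disease category.
    Drinkers' consumption is modelled with a gamma distribution (as in the
    paper); the log-linear relative-risk function RR(x) = exp(beta * x) and
    all numerical inputs are illustrative assumptions."""
    shape = (mean_cons / sd_cons) ** 2
    scale = sd_cons ** 2 / mean_cons
    x = np.linspace(0.01, 150.0, 2000)        # grams of pure alcohol per day
    dx = x[1] - x[0]
    dens = stats.gamma.pdf(x, a=shape, scale=scale)
    excess = p_drinkers * np.sum(dens * (np.exp(beta * x) - 1.0)) * dx
    return excess / (excess + 1.0)

# Monte Carlo confidence interval: generate random sets of all parameters,
# recompute the AAF for each draw, and take the 2.5th/97.5th percentiles.
n_sim = 5_000   # the paper used ~150 000 draws; fewer here keeps the sketch fast
aafs = [aaf(mean_cons=rng.normal(18.0, 1.0),
            sd_cons=rng.normal(17.0, 1.0),
            p_drinkers=rng.normal(0.55, 0.02),
            beta=rng.normal(0.02, 0.003))
        for _ in range(n_sim)]
low, high = np.percentile(aafs, [2.5, 97.5])
print(f"AAF 95% CI: [{low:.3f}, {high:.3f}]")
```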
Abstract:
Species' geographic ranges are usually considered as basic units in macroecology and biogeography, yet it is still difficult to measure them accurately for many reasons. About 20 years ago, researchers started using local data on species' occurrences to estimate broad scale ranges, thereby establishing the niche modeling approach. However, there are still many problems in model evaluation and application, and one of the solutions is to find a consensus solution among models derived from different mathematical and statistical models for niche modeling, climatic projections and variable combination, all of which are sources of uncertainty during niche modeling. In this paper, we discuss this approach of ensemble forecasting and propose that it can be divided into three phases with increasing levels of complexity. Phase I is the simple combination of maps to achieve a consensual and hopefully conservative solution. In Phase II, differences among the maps used are described by multivariate analyses, and Phase III consists of the quantitative evaluation of the relative magnitude of uncertainties from different sources and their mapping. To illustrate these developments, we analyzed the occurrence data of the tiger moth, Utetheisa ornatrix (Lepidoptera, Arctiidae), a Neotropical moth species, and modeled its geographic range in current and future climates.
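A minimal sketch of the Phase I consensus step described above, combining the binary presence/absence maps produced by different models into an agreement map (the array shapes and the 0.5 agreement threshold are illustrative assumptions):

```python
import numpy as np

def consensus_map(binary_maps, threshold=0.5):
    """Cell-wise proportion of models predicting presence; a cell is kept in
    the consensus range if at least `threshold` of the models agree."""
    stack = np.stack(binary_maps, axis=0)   # (n_models, n_rows, n_cols)
    agreement = stack.mean(axis=0)          # 0..1 proportion of 'presence' votes
    return agreement, agreement >= threshold

# Three hypothetical 4x5 model outputs (e.g. different niche models and/or
# climate projections) for the same species.
rng = np.random.default_rng(1)
maps = [rng.integers(0, 2, size=(4, 5)) for _ in range(3)]
agreement, consensus = consensus_map(maps)
print(agreement)
print(consensus)
```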
Abstract:
Digital holography microscopy (DHM) is an optical technique which provides phase images yielding quantitative information about cell structure and cellular dynamics. Furthermore, the quantitative phase images allow the derivation of other parameters, including dry mass production, density, and spatial distribution. We have applied DHM to study the dry mass production rate and the dry mass surface density in wild-type and mutant fission yeast cells. Our study demonstrates the applicability of DHM as a tool for label-free quantitative analysis of the cell cycle and opens the possibility for its use in high-throughput screening.
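The derivation of dry mass from the DHM phase signal rests on the standard Barer relation between optical path difference and protein concentration; in the usual notation (not reproduced from the abstract):

```latex
% Dry mass from the quantitative phase signal (Barer relation):
% sigma is the dry mass surface density, Delta-phi the measured phase shift,
% lambda the illumination wavelength, and alpha (~0.2 ml/g) the specific
% refraction increment of the intracellular content.
\[
\sigma = \frac{\lambda}{2\pi\,\alpha}\,\Delta\varphi ,
\qquad
\mathrm{DM} = \frac{\lambda}{2\pi\,\alpha}\iint_{S}\Delta\varphi\;\mathrm{d}S .
\]
```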
Abstract:
OBJECTIVE: To validate a revision of the Mini Nutritional Assessment short-form (MNA®-SF) against the full MNA, a standard tool for nutritional evaluation. METHODS: A literature search identified studies that used the MNA for nutritional screening in geriatric patients. The contacted authors submitted original datasets that were merged into a single database. Various combinations of the questions on the current MNA-SF were tested using this database through combination analysis and ROC-based derivation of classification thresholds. RESULTS: Twenty-seven datasets (n=6257 participants) were initially processed, from which twelve were used in the current analysis on a sample of 2032 study participants (mean age 82.3 y) with complete information on all MNA items. The original MNA-SF was a combination of six questions from the full MNA. A revised MNA-SF in which calf circumference (CC) was substituted for BMI performed equally well. A revised three-category scoring classification for this revised MNA-SF, using BMI and/or CC, had good sensitivity compared to the full MNA. CONCLUSION: The newly revised MNA-SF is a valid nutritional screening tool, applicable to geriatric health care professionals, with the option of using CC when BMI cannot be calculated. This revised MNA-SF increases the applicability of this rapid screening tool in clinical practice through the inclusion of a "malnourished" category.
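A minimal sketch of an ROC-based threshold derivation of the kind mentioned above, here using the Youden index on synthetic data (the dataset, score model, and criterion are illustrative assumptions, not the MNA analysis itself):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic placeholder data: a binary reference classification (e.g. the
# full assessment) and a short-form screening score where lower = higher risk.
rng = np.random.default_rng(0)
malnourished = rng.integers(0, 2, size=500)
score = rng.normal(loc=8 + 3 * (1 - malnourished), scale=2)

# ROC curve on the negated score (so that larger values indicate the
# positive class), then pick the cut-off maximizing Youden's J.
fpr, tpr, thresholds = roc_curve(malnourished, -score)
best = np.argmax(tpr - fpr)          # J = sensitivity + specificity - 1
print("suggested cut-off:", -thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```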
Abstract:
We study the minimal class of exact solutions of the Saffman-Taylor problem with zero surface tension, which contains the physical fixed points of the regularized (nonzero surface tension) problem. New fixed points are found and the basin of attraction of the Saffman-Taylor finger is determined within that class. Specific features of the physics of finger competition are identified and quantitatively defined, which are absent in the zero surface tension case. This has dramatic consequences for the long-time asymptotics, revealing a fundamental role of surface tension in the dynamics of the problem. A multifinger extension of microscopic solvability theory is proposed to elucidate the interplay between finger widths, screening and surface tension.
Abstract:
It was shown by Weyl that the general static axisymmetric solution of the vacuum Einstein equations in four dimensions is given in terms of a single axisymmetric solution of the Laplace equation in three-dimensional flat space. Weyl's construction is generalized here to arbitrary dimension D ≥ 4. The general solution of the D-dimensional vacuum Einstein equations that admits D-2 orthogonal commuting non-null Killing vector fields is given either in terms of D-3 independent axisymmetric solutions of Laplace's equation in three-dimensional flat space or by D-4 independent solutions of Laplace's equation in two-dimensional flat space. Explicit examples of new solutions are given. These include a five-dimensional asymptotically flat black ring with an event horizon of topology S¹ × S² held in equilibrium by a conical singularity in the form of a disk.
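For reference, the four-dimensional Weyl form being generalized reads, in one common convention:

```latex
% Four-dimensional static axisymmetric vacuum (Weyl) metric: the whole
% solution is generated by one axisymmetric solution U of the flat
% three-dimensional Laplace equation, with gamma obtained by quadrature.
\[
ds^{2} = -e^{2U}\,dt^{2}
       + e^{-2U}\!\left[ e^{2\gamma}\bigl(d\rho^{2}+dz^{2}\bigr) + \rho^{2}\,d\phi^{2} \right],
\]
\[
\partial_{\rho}^{2}U + \tfrac{1}{\rho}\,\partial_{\rho}U + \partial_{z}^{2}U = 0,
\qquad
\partial_{\rho}\gamma = \rho\bigl[(\partial_{\rho}U)^{2}-(\partial_{z}U)^{2}\bigr],
\quad
\partial_{z}\gamma = 2\rho\,\partial_{\rho}U\,\partial_{z}U .
\]
```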
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
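Schematically, the Einstein-Langevin equation referred to above takes the following form in stochastic gravity (written in a common convention; normalization factors differ between papers):

```latex
% Einstein-Langevin equation: the Gaussian stochastic source xi_ab has zero
% mean and a correlator given by the noise kernel, i.e. the symmetrized
% two-point function of the stress-tensor fluctuations on the background.
\[
G_{ab}[g+h] = 8\pi G \left( \langle \hat T_{ab}[g+h] \rangle + 2\,\xi_{ab} \right),
\]
\[
\langle \xi_{ab}(x) \rangle_{s} = 0,
\qquad
\langle \xi_{ab}(x)\,\xi_{cd}(y) \rangle_{s} = N_{abcd}(x,y)
 = \tfrac{1}{2}\,\bigl\langle \{\hat t_{ab}(x),\, \hat t_{cd}(y)\} \bigr\rangle,
\quad
\hat t_{ab} \equiv \hat T_{ab} - \langle \hat T_{ab} \rangle .
\]
```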
Abstract:
Preface The starting point for this work and eventually the subject of the whole thesis was the question: how to estimate parameters of the affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared to be particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function estimator (ECF) based on the unconditional characteristic function. However, the procedure was derived only for the stochastic volatility models without jumps. Thus, it has become the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, questions that are answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for the stochastic volatility models with jumps both in the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function for the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that besides stochastic volatility, jumps both in the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of the affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size which are currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result. The exponential distribution has fatter tails, and for this reason either exponential or double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function is working, the next question immediately arises: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function which is used for its construction. Theoretically, the more dimensions there are, the more efficient is the estimation procedure. In practice, however, this relationship is not so straightforward due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. The improvement of the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on the computer power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has all the reasons to exist and to be used for the estimation of parameters of the stochastic volatility jump-diffusion models.
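As a deliberately simplified illustration of characteristic-function-based estimation (one-dimensional i.i.d. Gaussian data rather than the thesis's joint bi-dimensional unconditional CF of a stochastic volatility jump-diffusion model), one can fit parameters by matching the empirical CF to the model CF over a weighted grid of arguments:

```python
import numpy as np
from scipy.optimize import minimize

# Toy ECF-style estimation: fit (mu, sigma) of i.i.d. Gaussian "returns" by
# minimising a weighted distance between the empirical characteristic
# function and the model CF over a grid of arguments. None of the thesis's
# model structure is reproduced here; grid and weight are assumptions.
rng = np.random.default_rng(7)
returns = rng.normal(0.0005, 0.01, size=5_000)

t = np.linspace(0.5, 200.0, 200)            # CF arguments
w = np.exp(-1e-4 * t**2)                    # weight over the continuum of moment conditions
ecf = np.exp(1j * np.outer(t, returns)).mean(axis=1)

def objective(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)               # keeps sigma positive
    model_cf = np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)
    return np.sum(w * np.abs(ecf - model_cf) ** 2)

fit = minimize(objective, x0=[0.0, np.log(0.02)], method="Nelder-Mead")
print("mu_hat:", fit.x[0], "sigma_hat:", np.exp(fit.x[1]))
```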
Abstract:
Hygiene practices in neonatal units require the use of disinfecting solutions containing ethanol or isopropanol. Newly disinfected hands or soaked swabs introduced inside the incubators may emit vapours leading to alcohol exposures of the neonates. Alcohol emissions from hands and other occasional sources (e.g. soaked disinfecting swabs) lead to measurable levels of vapours inside incubators. Average isopropanol and ethanol concentrations ranging from 33.1 to 171.4 mg/m³ (13.8 to 71.4 ppm) and from 23.5 to more than 146 mg/m³ (9.8 to > 6 ppm), respectively, were measured inside occupied incubators (n = 11, measurement time about 230 min) in a neonatal unit of the Centre Hospitalier Universitaire Vaudois in Lausanne during regular activity. Exposure concentrations in a wide range of possible situations were then investigated by modeling using the one-box dispersion model. Theoretical modeling suggested typical isopropanol peak and average concentrations ranging between 10² and 10³ mg/m³ (4×10¹ to 4×10² ppm) and between 10¹ and 10² mg/m³ (4 to 4×10¹ ppm), respectively. Based on our results we suggest several preventive measures to reduce the neonates' exposures to solvent vapours.
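A minimal sketch of a well-mixed one-box dispersion model of the kind used for the scenario modeling above; all parameter values below are illustrative assumptions, not those of the study:

```python
import numpy as np

# Well-mixed one-box model: V dC/dt = G(t) - Q*C(t), with V the incubator
# air volume, Q the ventilation flow and G(t) the alcohol emission rate.
def one_box(emission_rate, V=0.1, Q=0.5 / 3600.0, t_end=1800.0, dt=1.0):
    """Euler integration of the box model.
    emission_rate: function t -> mg/s; V in m3; Q in m3/s; returns mg/m3."""
    n = int(t_end / dt)
    C = np.zeros(n)
    for k in range(1, n):
        t = k * dt
        C[k] = C[k - 1] + dt * (emission_rate(t) - Q * C[k - 1]) / V
    return C

# Hypothetical scenario: 2 mg/s of isopropanol emitted during the first 60 s
# (e.g. freshly disinfected hands inside the incubator).
conc = one_box(lambda t: 2.0 if t < 60.0 else 0.0)
print(f"peak {conc.max():.0f} mg/m3, 30-min average {conc.mean():.0f} mg/m3")
```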
Abstract:
Peripheral T-cell lymphomas (PTCLs) represent a heterogeneous group of more than 20 neoplastic entities derived from mature T cells and natural killer (NK) cells involved in innate and adaptive immunity. With few exceptions, these malignancies, which may present as disseminated, predominantly extranodal or cutaneous, or predominantly nodal diseases, are clinically aggressive and have a dismal prognosis. Their diagnosis and classification is hampered by several difficulties, including a significant morphological and immunophenotypic overlap across different entities, and the lack of characteristic genetic alterations for most of them. Although there is increasing evidence that the cell of origin is a major determinant for the delineation of several PTCL entities, the cellular derivation of most entities remains poorly characterized and/or may be heterogeneous. The complexity of the biology and pathophysiology of PTCLs has been only partly deciphered. In recent years, novel insights have been gained from genome-wide profiling analyses. In this review, we will summarize the current knowledge on the pathobiological features of peripheral NK/T-cell neoplasms, with a focus on selected disease entities manifesting as tissue infiltrates primarily in extranodal sites and lymph nodes.
Abstract:
This work proposes a parallel architecture for a motion estimation algorithm. It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage where the algorithms deal with great numbers of pixel data. One approach to estimating motion involves detecting the correspondences between two images. Due to its regular processing scheme, a parallel implementation of the correspondence problem can be an adequate approach to reduce the computation time. This work introduces a parallel, real-time implementation of such low-level tasks, carried out from the moment the current image is acquired by the camera until the pairs of matching points are detected.
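As a serial reference for the correspondence step being parallelized, a basic block-matching search (sum of absolute differences) between two frames can be sketched as follows; in a parallel architecture each block/search position would map to an independent processing element, which this sketch does not attempt to reproduce:

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """For each block of the current frame, find the best-matching block in
    the previous frame within a +/-search window, by minimizing the sum of
    absolute differences (SAD). Returns per-block (dy, dx) motion vectors."""
    h, w = curr.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            motion[by // block, bx // block] = best
    return motion

# Synthetic test: the second frame is the first shifted by (2, -1).
rng = np.random.default_rng(3)
frame0 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))
vectors = block_matching(frame0, frame1)
print(vectors[2, 2])   # -> [-2  1] for this synthetic global shift
```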