942 results for equilibrium asset pricing models with latent variables
Abstract:
BACKGROUND: Theory of mind (ToM), the capacity to infer the intentions, beliefs and emotional states of others, is frequently impaired in behavioural variant fronto-temporal dementia patients (bv-FTDp); however, its impact on caregiver burden is unexplored. SETTING: National Institute of Neurological Disorders and Stroke, National Institutes of Health. SUBJECTS: bv-FTDp (n = 28), a subgroup of their caregivers (n = 20) and healthy controls (n = 32). METHODS: We applied a faux-pas (FP) task as a ToM measure in bv-FTDp and healthy controls, and the Zarit Burden Interview as a measure of burden in patients' caregivers. Patients underwent structural MRI; we used voxel-based morphometry to examine relationships between regional atrophy, ToM impairment and caregiver burden. RESULTS: FP task performance was impaired in bv-FTDp and negatively associated with caregiver burden. Atrophy was found in areas involved in ToM. Caregiver burden increased with greater atrophy in the left lateral premotor cortex, a region associated in animal models with the presence of mirror neurons, possibly involved in empathy. CONCLUSION: ToM impairment in bv-FTDp is associated with increased caregiver burden.
Abstract:
Nowadays, Species Distribution Models (SDMs) are a widely used tool. Using different statistical approaches, these models reconstruct the realized niche of a species from presence data and a set of variables, often topoclimatic. Their range of use is quite broad, from understanding the requirements of a single species, to designing nature reserves based on species hotspots, to modelling the impact of climate change. Most of the time these models use variables at a resolution of 50 km x 50 km or 1 km x 1 km. In some cases, however, they are used at resolutions below the kilometre scale (100 m x 100 m or 25 m x 25 m) and are then called high-resolution models. Quite recently a new kind of data has emerged, enabling precision up to 1 m x 1 m and thus allowing very high resolution modelling. These new variables, however, are very costly and require a considerable amount of processing time, especially when they enter complex calculations such as model projections over large areas. Moreover, the importance of very high resolution data in SDMs has not yet been assessed and is not well understood. Some basic knowledge of what drives species presences and absences is still missing. Indeed, it is not clear whether, in mountain areas like the Alps, coarse topoclimatic gradients drive species distributions, whether fine-scale temperature or topography is more important, or whether their importance can be neglected when balanced against competition or stochasticity. In this thesis I investigated the importance of very high resolution data (2-5 m) in species distribution models, using very high resolution topographic, climatic and edaphic variables over a 2000 m elevation gradient in the Western Swiss Alps. I also investigated the more local responses to these variables for a subset of species living in this area at two specific elevation belts. During this thesis I showed that high-resolution data require very good datasets (both species and variables for the models) to produce satisfactory results. Indeed, in mountain areas temperature is the most important factor driving species distributions, and it needs to be modelled at very fine resolution, rather than interpolated over large surfaces, to produce satisfactory results. Despite the intuitive idea that topography should be very important at high resolution, the results are mixed. Looking at the importance of variables over a large gradient, however, buffers their apparent importance: topographic factors proved highly important at the subalpine level, but their importance decreases at lower elevations. Whereas at the montane level edaphic and land-use factors are more important, high-resolution topographic data matter more at the subalpine level. Finally, the biggest improvement in the models occurs when edaphic variables are added. Adding soil variables is of high importance, and variables such as pH surpass the usual topographic variables in SDMs in terms of importance in the models. To conclude, high resolution is very important in modelling but requires very good datasets. Merely increasing the resolution of the usual topoclimatic predictors is not sufficient, and the use of edaphic predictors proved fundamental to producing significantly better models. This is of primary importance, especially if these models are used to reconstruct communities or as a basis for biodiversity assessments.
-- In recent years, the use of species distribution models (SDMs) has grown steadily. These models use various statistical tools to reconstruct the realized niche of a species from variables, notably climatic or topographic ones, and from presence data collected in the field. Their use spans many domains, from the study of a species' ecology to the reconstruction of communities or the impact of climate warming. Most of the time these models use occurrences drawn from global databases at a rather coarse resolution (1 km or even 50 km). Some databases, however, make it possible to work at high resolution, that is, below the kilometre scale, with resolutions of 100 m x 100 m or 25 m x 25 m. Recently, a new generation of very high resolution data has appeared that makes it possible to work at the metre scale. The variables that can be generated from these new data are, however, very costly and require considerable processing time. Indeed, any complex statistical computation, such as projections of species distributions over large areas, demands powerful computers and a lot of time. Moreover, the factors governing species distributions at fine scale are still poorly known, and the importance in the models of high-resolution variables such as microtopography or temperature is not certain. Other factors, such as competition or natural stochasticity, could have an equally strong influence. My thesis work is set in this context. I sought to understand the importance of high resolution in species distribution models, whether for temperature, microtopography or edaphic variables, along a large elevation gradient in the Vaud Prealps. I also sought to understand the local impact of certain variables potentially neglected because of confounding effects along the elevation gradient. During this thesis, I was able to show that high-resolution variables, whether related to temperature or microtopography, yield only a limited improvement of the models. To obtain a substantial improvement, it is necessary to work with richer datasets, both for the species and for the variables used. For example, the usual interpolated climate layers must be replaced by temperature layers modelled at high resolution from field data. Working along a 2000 m temperature gradient naturally makes temperature very important in the models. The importance of microtopography is negligible compared with topography at a 25 m resolution. At a more local scale, however, high resolution is extremely important in the subalpine belt. At the montane belt, by contrast, variables related to soils and land use are very important. Finally, species distribution models were particularly improved by the addition of edaphic variables, mainly pH, whose importance supplants or equals the topographic variables when added to the usual species distribution models.
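As a concrete illustration of the kind of comparison this thesis describes, the Python sketch below fits a presence-absence model on topoclimatic predictors alone and then again with an edaphic predictor (soil pH) added, comparing cross-validated AUC. Everything here is a synthetic stand-in: the data, predictor names and the random-forest choice are assumptions for illustration, not the thesis's actual pipeline.

```python
# Fit an SDM on topoclimatic predictors, then add an edaphic predictor
# (soil pH) and compare cross-validated AUC. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(8, 4, n)        # deg C along an elevation gradient
slope = rng.uniform(0, 45, n)            # topographic predictor
soil_ph = rng.normal(6.5, 1.0, n)        # edaphic predictor

# Synthetic presence-absence driven by temperature and pH
logit = 1.5 * (temperature - 8) / 4 - 1.0 * (soil_ph - 6.5)
presence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_topo = np.column_stack([temperature, slope])
X_full = np.column_stack([temperature, slope, soil_ph])
model = RandomForestClassifier(n_estimators=200, random_state=0)

auc_topo = cross_val_score(model, X_topo, presence, cv=5, scoring="roc_auc").mean()
auc_full = cross_val_score(model, X_full, presence, cv=5, scoring="roc_auc").mean()
print(f"AUC topoclimatic only: {auc_topo:.3f} | with soil pH: {auc_full:.3f}")
```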
Abstract:
BACKGROUND: Numerous studies have examined the determinants of the preponderance of women in major depressive disorder (MDD), which is particularly accentuated for the atypical depression subtype. It is thus of interest to explore the specific indirect effects influencing the association between sex and established depression subtypes. METHODS: Data from 1624 subjects with a lifetime diagnosis of MDD, derived from the population-based PsyCoLaus study, were used. An atypical (n=256), a melancholic (n=422), a combined atypical and melancholic features subtype (n=198), and an unspecified MDD group (n=748) were constructed according to the DSM-IV specifiers. Path models with direct and indirect effects were applied to the data. RESULTS: Partial mediation of the female-related atypical and combined atypical-melancholic depression subtypes was found. Early anxiety disorders and high emotion-oriented coping acted as mediating variables between sex and the atypical depression subtype. In contrast, high Body Mass Index (BMI) acted as a suppressor variable, also with respect to the association between sex and the combined atypical-melancholic subtype. The latter association was additionally mediated by an early age of MDD onset and early/late anxiety disorders. LIMITATIONS: The use of cross-sectional data does not allow causal conclusions. CONCLUSIONS: This is the first study to provide evidence for a differentiation of the general mechanisms explaining sex differences in overall MDD by depression subtype. The determinants affecting these pathways begin early in life. Since some of them are primarily behavioural in nature, the present findings could be a valuable target in mental health care.
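The mediation logic the abstract relies on (sex -> early anxiety disorder -> atypical subtype) can be sketched as a product-of-coefficients path analysis with a bootstrapped confidence interval. The snippet below is a minimal illustration on simulated data; it is not the PsyCoLaus analysis, and all effect sizes are invented.

```python
# Product-of-coefficients mediation sketch (sex -> early anxiety -> atypical
# subtype) with a bootstrap CI. Simulated data; not the PsyCoLaus analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1624
sex = rng.integers(0, 2, n)                                 # 1 = female
anxiety = (rng.random(n) < 0.20 + 0.15 * sex).astype(int)   # mediator
p_atyp = 1 / (1 + np.exp(-(-2.0 + 0.3 * sex + 0.8 * anxiety)))
atypical = (rng.random(n) < p_atyp).astype(int)             # outcome

def indirect_effect(sex, anxiety, atypical):
    # path a: sex -> mediator; path b: mediator -> outcome, adjusting for sex
    a = sm.Logit(anxiety, sm.add_constant(sex)).fit(disp=0).params[1]
    b = sm.Logit(atypical,
                 sm.add_constant(np.column_stack([sex, anxiety]))).fit(disp=0).params[2]
    return a * b     # indirect effect on the logit scale (a simplification)

boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(sex[idx], anxiety[idx], atypical[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect (a*b), 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```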
Abstract:
Understanding the factors that shape adaptive genetic variation across species niches has become of paramount importance in evolutionary ecology, especially to understand how adaptation to changing climate affects the geographic range of species. The distribution of adaptive alleles in the ecological niche is determined by the emergence of novel mutations, their fitness consequences and gene flow that connects populations across species niches. Striking demographical differences and source sink dynamics of populations between the centre and the margin of the niche can play a major role in the emergence and spread of adaptive alleles. Although some theoretical predictions have long been proposed, the origin and distribution of adaptive alleles within species niches remain untested. In this paper, we propose and discuss a novel empirical approach that combines landscape genetics with species niche modelling, to test whether alleles that confer local adaptation are more likely to occur in either marginal or central populations of species niches. We illustrate this new approach by using a published data set of 21 alpine plant species genotyped with a total of 2483 amplified fragment length polymorphisms (AFLP), distributed over more than 1733 sampling sites across the Alps. Based on the assumption that alleles that were statistically associated with environmental variables were adaptive, we found that adaptive alleles in the margin of a species niche were also present in the niche centre, which suggests that adaptation originates in the niche centre. These findings corroborate models of species range evolution, in which the centre of the niche contributes to the emergence of novel adaptive alleles, which diffuse towards niche margins and facilitate niche and range expansion through subsequent local adaptation. Although these results need to be confirmed via fitness measurements in natural populations and functionally characterised genetic sequences, this study provides a first step towards understanding how adaptive genetic variation emerges and shapes species niches and geographic ranges along environmental gradients.
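A rough sketch of the centre-versus-margin test this approach implies: given sampling sites scored by niche centrality (standing in for an SDM-derived suitability or distance to the niche centroid) and presence/absence of a putatively adaptive allele, compare allele frequencies between central and marginal sites. The data below are entirely synthetic, not the published AFLP dataset.

```python
# Compare adaptive-allele frequencies between niche-central and niche-marginal
# sites. 'centrality' stands in for an SDM-derived score; both it and the
# allele data are synthetic.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2)
n_sites = 300
centrality = rng.random(n_sites)                        # 1 = niche centre
allele = rng.random(n_sites) < 0.1 + 0.4 * centrality   # allele present?

margin = centrality < np.quantile(centrality, 0.25)
centre = centrality > np.quantile(centrality, 0.75)

table = [[allele[centre].sum(), (~allele[centre]).sum()],
         [allele[margin].sum(), (~allele[margin]).sum()]]
odds, p = fisher_exact(table)
print(f"centre freq {allele[centre].mean():.2f}, "
      f"margin freq {allele[margin].mean():.2f}, Fisher p = {p:.3g}")
```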
Abstract:
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences).
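As a rough formalization (our notation, not necessarily the authors'): writing $\pi_i$ for the focal individual's material payoff and $\bar{\pi}_{-i}$ for the average payoff of its patch neighbours, the weak-selection goal function described above takes a form like the one below, with weights $\kappa$ and $\lambda$ built from the neutral identity-by-descent distribution and hence exogenous to the individual's behaviour.

```latex
u_i \;=\; \underbrace{\pi_i}_{\text{self-interest}}
      \;+\; \kappa \,\underbrace{\bar{\pi}_{-i}}_{\text{group interest}}
      \;-\; \lambda \,\underbrace{\bigl(\bar{\pi}_{-i} - \pi_i\bigr)}_{\text{local rivalry}}
```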
Abstract:
BACKGROUND: The purpose of this study was to confirm the prognostic value of pancreatic stone protein (PSP) in patients with severe infections requiring ICU management, and to develop and validate a model that enhances mortality prediction by combining severity scores with biomarkers. METHODS: We prospectively enrolled patients with severe sepsis or septic shock in mixed tertiary ICUs in Switzerland (derivation cohort) and Brazil (validation cohort). Severity scores (APACHE [Acute Physiology and Chronic Health Evaluation] II or Simplified Acute Physiology Score [SAPS] II) were combined with biomarkers obtained at the time of diagnosis of sepsis, including C-reactive protein, procalcitonin (PCT), and PSP. The logistic regression models with the lowest prediction errors were selected to predict in-hospital mortality. RESULTS: Mortality rates of patients with septic shock enrolled in the derivation cohort (103 out of 158) and the validation cohort (53 out of 91) were 37% and 57%, respectively. APACHE II and PSP were significantly higher in dying patients. In the derivation cohort, the models combining either APACHE II, PCT, and PSP (area under the receiver operating characteristic curve [AUC], 0.721; 95% CI, 0.632-0.812) or SAPS II, PCT, and PSP (AUC, 0.710; 95% CI, 0.617-0.802) performed better than each individual biomarker (AUC PCT, 0.534; 95% CI, 0.433-0.636; AUC PSP, 0.665; 95% CI, 0.572-0.758) or severity score (AUC APACHE II, 0.638; 95% CI, 0.543-0.733; AUC SAPS II, 0.598; 95% CI, 0.499-0.698). These models were externally confirmed in the independent validation cohort. CONCLUSIONS: We confirmed the prognostic value of PSP in patients with severe sepsis and septic shock requiring ICU management. A model combining severity scores with PCT and PSP improves mortality prediction in these patients.
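The model-combination step can be illustrated in a few lines of Python: a logistic regression on a severity score plus two biomarkers, scored by in-hospital mortality AUC. The data are simulated stand-ins; only the variable names (APACHE II, PCT, PSP) follow the abstract, and the external-validation step on the Brazilian cohort is omitted.

```python
# Severity score + biomarkers -> in-hospital mortality, scored by AUC.
# Simulated stand-in data; variable names follow the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 158
apache2 = rng.normal(22, 7, n)        # severity score
pct = rng.lognormal(1.0, 1.0, n)      # procalcitonin
psp = rng.lognormal(3.0, 0.8, n)      # pancreatic stone protein
logit = -4 + 0.12 * apache2 + 0.01 * psp
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([apache2, pct, psp])
model = LogisticRegression(max_iter=1000).fit(X, died)
auc = roc_auc_score(died, model.predict_proba(X)[:, 1])
print(f"in-sample AUC of the APACHE II + PCT + PSP model: {auc:.3f}")
```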
Abstract:
This study investigates the relationship between time-varying risk premiums and conditional market risk in the stock markets of the ten member countries of the Economic and Monetary Union. Second, it examines whether the conditional second moments change over time and whether there are asymmetric effects in the conditional covariance matrix. Third, it analyzes the possible effects of the chosen testing framework. The empirical analysis is conducted using asymmetric univariate and multivariate GARCH-in-mean models under three different assumed degrees of market integration. For a daily sample period from 1999 to 2007, the study shows that time-varying market risk alone is not enough to explain the dynamics of risk premiums, and there are indications that market risk is detected only when its price is allowed to change over time. Asymmetric effects in the conditional covariance matrix, which is found to be time-varying, are also clearly present and should be recognized in empirical asset pricing analyses.
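For readers unfamiliar with GARCH-in-mean, the sketch below estimates the simplest symmetric univariate case by maximum likelihood: the conditional variance enters the mean equation, so the risk premium delta * sigma_t^2 is time-varying. The study itself uses asymmetric univariate and multivariate specifications; this is only a minimal illustration on simulated returns, with invented parameter values.

```python
# Minimal maximum-likelihood sketch of a symmetric GARCH(1,1)-in-mean model:
#   r_t = mu + delta * sig2_t + eps_t,  eps_t ~ N(0, sig2_t),
#   sig2_t = omega + alpha * eps_{t-1}^2 + beta * sig2_{t-1}.
# delta is the (here constant) price of risk; all values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def simulate(n=1500, mu=0.0, delta=0.05, omega=0.05, alpha=0.08, beta=0.90):
    r = np.empty(n)
    sig2, eps = omega / (1 - alpha - beta), 0.0
    for t in range(n):
        sig2 = omega + alpha * eps**2 + beta * sig2
        eps = np.sqrt(sig2) * rng.standard_normal()
        r[t] = mu + delta * sig2 + eps
    return r

def neg_loglik(params, r):
    mu, delta, omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                  # keep the variance process stationary
    sig2, ll = r.var(), 0.0
    for x in r:
        eps = x - mu - delta * sig2
        ll -= 0.5 * (np.log(2 * np.pi * sig2) + eps**2 / sig2)
        sig2 = omega + alpha * eps**2 + beta * sig2
    return -ll

r = simulate()
res = minimize(neg_loglik, x0=[0.0, 0.1, 0.1, 0.1, 0.8], args=(r,),
               method="Nelder-Mead", options={"maxiter": 3000})
print("estimated (mu, delta, omega, alpha, beta):", np.round(res.x, 3))
```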
Abstract:
Fine mineral powders are commonly used in the paper and paint industries and in ceramics. Research into utilizing different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill was studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influence of different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type, and the stirring speed. The best conditions for the attrition grinding process, in terms of product fineness and specific energy consumption, are to grind the material with small grinding beads and a high rotational speed of the stirrer. Also, by using a suitable grinding additive, a finer product is achieved with lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a large volumetric flow rate of grinding air with the internal classification tube set low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; attrition grinding is therefore better suited to the ultrafine grinding of gypsum than jet grinding. Finally, the suitability of the population balance model for the simulation of grinding processes was studied with different S, B, and C functions. A new S function for modelling an attrition mill and a new C function for modelling a jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by curve-fitting the particle size distributions of the grinding products and then comparing the fitted size distributions to the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
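The population balance model mentioned in the last paragraph can be written, for batch grinding, as dm_i/dt = -S_i m_i + sum over j&lt;i of b_ij S_j m_j, with S the selection (breakage-rate) function and b the breakage distribution function. The Python sketch below integrates this system with invented S and b values; the thesis's own S and C functions are not reproduced here.

```python
# Batch-grinding population balance: dm_i/dt = -S_i m_i + sum_{j<i} b_ij S_j m_j.
# S and b values are made up for illustration; mass is conserved because each
# column of b sums to one for the breakable classes.
import numpy as np
from scipy.integrate import solve_ivp

n = 5                                         # size classes, coarsest first
S = np.array([1.0, 0.6, 0.35, 0.2, 0.0])      # 1/min; finest class unbreakable
b = np.zeros((n, n))                          # b[i, j]: mass from j into i
for j in range(n - 1):
    b[j + 1:, j] = 1.0 / (n - 1 - j)          # uniform redistribution (toy)

def rhs(t, m):
    return -S * m + b @ (S * m)

m0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # all mass in the coarsest class
sol = solve_ivp(rhs, (0, 10), m0, t_eval=[0, 2, 5, 10])
print(np.round(sol.y.T, 3))                   # mass fractions over grinding time
```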
Abstract:
Dilutions of methyl methacrylate ranging between 1 and 50 ppm were obtained from a stock solution of 1 ml of monomer in 100 ml of deionised water and were analyzed with a UV-visible absorption spectrophotometer. The absorbance values were used to develop a PLS-based calibration model with the aim of determining new sample concentrations. The number of latent variables used was 6, and the standard errors of calibration and prediction were found to be 0.048 ml/100 ml and 0.058 ml/100 ml, respectively. The calibration model was successfully used to calculate the concentration of monomer released into water in which complete dentures were kept for one hour after polymerization.
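A hedged sketch of this calibration workflow in Python: UV-vis spectra regressed on known concentrations with a 6-latent-variable PLS model, then used to predict new samples. The spectra here are simulated placeholders, not the study's measurements.

```python
# PLS calibration sketch: absorbance spectra -> monomer concentration,
# 6 latent variables, RMSEC on the training set, prediction of new samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
wavelengths = np.linspace(200, 400, 120)
conc = rng.uniform(1, 50, 40)                        # ppm, training set
peak = np.exp(-((wavelengths - 265) / 15) ** 2)      # toy absorption band
X = np.outer(conc, peak) + rng.normal(0, 0.01, (40, wavelengths.size))

pls = PLSRegression(n_components=6).fit(X, conc)
rmsec = mean_squared_error(conc, pls.predict(X).ravel()) ** 0.5
print(f"RMSEC: {rmsec:.4f} ppm")

# Predicting "new" samples (e.g. water in which a denture was immersed):
X_new = np.outer([12.0, 30.0], peak) + rng.normal(0, 0.01, (2, wavelengths.size))
print("predicted concentrations:", pls.predict(X_new).ravel())
```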
Abstract:
The goal of this work was to create a unified, efficient and easy-to-use quotation costing model for a contract manufacturer. The group previously used four different quotation costing models; they were very difficult to compare with one another and could not be used to calculate quotations for all factories. The work is divided into a literature review and an empirical part. The literature review covers different pricing alternatives, the role of cost accounting, and cost accounting methods. The empirical part consists of an analysis of the current state of quotation costing, the construction of a new quotation costing model, and a functional description of the developed model. Cost-based pricing was chosen as the basis of the quotation costing model, because it had been used before and had been found to work well. As the foundation of the cost-based model, uniform labour-hour and machine-hour rates were established for the whole group. Machine-hour rates were calculated for 28 machines; after analysing and grouping them, 16 machine surcharge rates were adopted. The quotation costing model covers only manufacturing costs and does not address administrative costs. A unified quotation costing model makes it easier to compare quotations prepared by different salespeople and makes it possible to sell products to several factories. The model was implemented in a spreadsheet program.
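A toy sketch of the cost-based quotation logic described above, assuming (hypothetically) a uniform labour rate and per-group machine surcharges; the real model covers 16 machine surcharge groups and was implemented in a spreadsheet, not in code.

```python
# Cost-based quotation sketch: manufacturing cost from machine surcharges and
# labour hours, plus a margin. All rates and group names are invented.
LABOUR_RATE = 32.0                      # EUR per labour hour (assumed)
MACHINE_SURCHARGE = {"press_small": 18.0, "press_large": 41.0, "weld": 25.0}

def quote(operations, margin=0.15):
    """operations: list of (machine_group, machine_hours, labour_hours)."""
    cost = sum(MACHINE_SURCHARGE[group] * mh + LABOUR_RATE * lh
               for group, mh, lh in operations)
    return cost * (1 + margin)          # cost-based price, manufacturing only

print(quote([("press_large", 2.5, 1.0), ("weld", 0.5, 0.5)]))
```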
Abstract:
The goal of this work is the development and validation of an analytical method for the fast quantification of sibutramine in pharmaceutical formulations, using diffuse reflectance infrared spectroscopy and partial least squares regression. The multivariate model was elaborated from 22 mixtures containing sibutramine and excipients (lactose, microcrystalline cellulose, colloidal silicon dioxide and magnesium stearate), using fragmented (750-1150 / 1350-1500 / 1850-1950 / 2600-2900 cm-1) and smoothed spectral data. Using 10 latent variables, excellent predictive capacity was observed in the calibration (n=20, RMSEC=0.004, R=0.999) and external validation (n=5, RMSEP=9.36, R=0.999) phases. In the analysis of synthetic mixtures the precision (SD=3.47%) was compatible with the rules of the Agência Nacional de Vigilância Sanitária (ANVISA, Brazil). In the analysis of commercial drugs, good agreement was observed between the spectroscopic and chromatographic methods.
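The "fragmented spectra" idea amounts to masking the wavenumber axis to the listed windows before fitting the PLS model. In the sketch below the window limits follow the abstract, but the spectra and contents are simulated placeholders.

```python
# Keep only selected wavenumber windows of a DRIFTS spectrum before PLS.
# Window limits follow the abstract; spectra and contents are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wn = np.arange(400, 4000, 4.0)                   # wavenumber axis, cm-1
X = rng.normal(0, 1, (22, wn.size))              # 22 mixtures (placeholder)
y = rng.uniform(5, 15, 22)                       # sibutramine content

windows = [(750, 1150), (1350, 1500), (1850, 1950), (2600, 2900)]
mask = np.zeros(wn.size, dtype=bool)
for lo_w, hi_w in windows:
    mask |= (wn >= lo_w) & (wn <= hi_w)

pls = PLSRegression(n_components=10).fit(X[:, mask], y)
print("retained spectral points:", mask.sum(), "of", wn.size)
```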
Abstract:
In this work the antioxidant capacity of red wine samples was characterized by conventional spectroscopic and chromatographic methodologies, with respect to chemical parameters such as color, total polyphenol and resveratrol content, and antioxidant activity. Additionally, multivariate calibration models were developed to predict the antioxidant activity, using partial least squares regression and the spectral data registered between 400 and 800 nm. Although a close correlation between the evaluated parameters was expected, many inconsistencies were observed, probably on account of the low selectivity of the conventional methodologies. Models developed from mean-centered spectra and using 4 latent variables showed high predictive capacity for the antioxidant activity, with relative errors lower than 3%.
Abstract:
A multivariate spectrophotometric method was developed for the analysis of kojic acid/hydroquinone associations in skin-whitening cosmetics. The method is based on the reaction between kojic acid and Fe3+, on the reduction of Fe3+ by hydroquinone, and on the further complexation of Fe2+ with 1,10-phenanthroline. The multivariate model was developed by Partial Least Squares Regression (PLSR), using 25 synthetic mixtures and mean-centered spectral data (350-380 nm). The use of 3 (kojic acid) and 2 (hydroquinone) latent variables yielded mean errors of about 5% in the external validation phase.
Abstract:
Raw measurement data do not always immediately convey useful information, but applying statistical analysis tools to the data can improve the situation. Data analysis can offer benefits such as gaining meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Linear regression is then used to build a model based on a subset of variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well. For future work it is therefore proposed either to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
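A hedged sketch of the two-stage approach described here, using scikit-learn's ARDRegression as a stand-in for the Sparse Bayesian model: rank process variables by their SB weights, then fit an ordinary linear model on the top-weighted subset. The plant data are simulated.

```python
# Stage 1: sparse Bayesian regression (ARD) to rank variables by weight.
# Stage 2: ordinary linear regression on the top-weighted subset.
# Simulated stand-in for the plant's process data.
import numpy as np
from sklearn.linear_model import ARDRegression, LinearRegression

rng = np.random.default_rng(7)
n, p = 365, 20                           # e.g. one year of 1-day averages
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[[2, 5, 11]] = [1.5, -0.8, 0.6]    # only three variables matter here
y = X @ true_w + rng.normal(0, 0.5, n)   # "final product quality"

ard = ARDRegression().fit(X, y)
top = np.argsort(np.abs(ard.coef_))[-3:]            # most significant weights
lin = LinearRegression().fit(X[:, top], y)
print("selected variables:", sorted(top.tolist()))
print("linear model R^2 on the subset:", round(lin.score(X[:, top], y), 3))
```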
Abstract:
Wholesale electricity prices fluctuated very strongly in the winter of 2009–2010. This thesis presents, as a marginal-benefit analysis, the possibilities for an integrated pulp and paper mill to increase back-pressure power generation and to cut electricity consumption during hours when the hourly price of electricity is high. Different electricity market operations are also examined. The first part of the work introduces Stora Enso's Imatra mills. Later chapters look at the functioning of the Nordic electricity market and of the gas exchange. To benefit from high electricity prices, sell bids must be submitted to the trading system of the Nord Pool power exchange on the day before the delivery day. This thesis determines the possible sales volumes and prices of electric energy in different production situations. The aim is to improve the profitability of the mill's electricity trading by using different electricity market products. Models are presented with which high market prices of electricity can be exploited. The different electricity market products are also suitable for risk management in strongly fluctuating electricity markets.
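The marginal-benefit logic behind the day-ahead bids can be sketched in a few lines: for each hour of the next delivery day, offer extra back-pressure generation to the spot market whenever the forecast price exceeds its marginal cost. All prices, costs and capacities below are invented, not the mill's figures.

```python
# Day-ahead bidding sketch: offer extra back-pressure power for the hours
# where the forecast spot price exceeds the marginal cost of generating it.
marginal_cost = 38.0        # EUR/MWh for extra back-pressure power (assumed)
extra_capacity = 15.0       # MW available above the mill's own need (assumed)
forecast_price = [31, 35, 44, 90, 120, 52, 40, 33]   # EUR/MWh, next 8 hours

bids = [(hour, extra_capacity, price)
        for hour, price in enumerate(forecast_price)
        if price > marginal_cost]
margin = sum((price - marginal_cost) * mw for _, mw, price in bids)
print("hours bid:", [h for h, _, _ in bids], f"| expected margin: {margin:.0f} EUR")
```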