914 results for Feynman-Kac formula Markov semigroups principal eigenvalue


Relevance:

20.00%

Publisher:

Abstract:

We reconsider a formula for arbitrary moments of expected discounted dividend payments in a spectrally negative Lévy risk model that was obtained in Renaud and Zhou (2007, [4]) and in Kyprianou and Palmowski (2007, [3]) and extend the result to stationary Markov processes that are skip-free upwards.

Relevance:

20.00%

Publisher:

Abstract:

This work evaluated the quality of 'Nanicão' bananas stored under two temperature conditions and in three different kinds of package, using Principal Component Analysis (PCA) as a basis for an analysis of variance. The fruits used were 'Nanicão' bananas at ripening degree 3, that is, more green than yellow. The packages tested were: "Torito" wood boxes (load capacity 18 kg); "½ box" wood boxes (load capacity 13 kg); and cardboard boxes (load capacity 18 kg). The temperatures assessed were room temperature (control) and 13±1 °C, with humidity controlled at 90±2.5%. Fruits were discarded when a sensory analysis determined they had become unfit for consumption. Peel coloration, percentage of imperfections, fresh mass, total acidity, pH, total soluble solids, and percentage of sucrose were assessed. A completely randomized design with a 2-factorial treatment structure (packing × temperature) was used. The data were analyzed by Principal Component Analysis using S-plus 4.2. The conclusion was that the ½ box packages preserved the fruit best, which indicates that reducing the number of fruits per package allows better ventilation, decreases mechanical injuries, and keeps the fruit marketable for longer.
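The PCA step described above (the study used S-plus 4.2) can be sketched in Python; the data and variable names below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

# Hypothetical quality measurements (rows: banana samples; columns:
# e.g. total acidity, pH, total soluble solids, sucrose %) --
# random illustrative data, not the study's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))

# Standardize each variable, then diagonalize the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]           # largest eigenvalue first
eigval, eigvec = eigval[order], eigvec[:, order]

scores = Z @ eigvec                        # principal component scores
explained = eigval / eigval.sum()          # share of total variance per PC
```

The `explained` vector is what lets one say, as such studies do, that "PC1 accounts for X% of the total variance."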

Relevance:

20.00%

Publisher:

Abstract:

Falls are common in the elderly and can result in injury and disability. Preventing falls in older adults as early as possible is therefore a public health priority, yet there is no specific marker predictive of the first fall. We hypothesized that gait features would be the most relevant variables for predicting the first fall. Clinical baseline characteristics (e.g., gender, cognitive function) were assessed in 259 home-dwelling people aged 66 to 75 who had never fallen. Likewise, the global kinetic behavior of gait was recorded through 22 variables in 1,036 walking tests with an accelerometric gait analysis system. Monthly telephone monitoring then recorded the date of the first fall over 24 months. A principal component analysis was used to assess the relationship between gait variables and fall status in four groups: non-fallers, fallers from 0 to 6 months, fallers from 6 to 12 months, and fallers from 12 to 24 months. The association of significant principal components (PCs) with an increased risk of a first fall was then evaluated using the area under the receiver operating characteristic (ROC) curve. No effect of clinical confounding variables was found across groups. An eigenvalue decomposition of the correlation matrix identified a large PC1 (termed "global kinetics of gait pattern"), which accounted for 36.7% of the total variance. The component loadings also revealed a PC2 (12.6% of the total variance) related to "global gait regularity." Subsequent ANOVAs showed that only PC1 discriminated fall status during the first 6 months, while PC2 discriminated first fall onset between 6 and 12 months. After one year, no PC was associated with falls. These results were bolstered by the ROC analyses, which showed good predictive models of the first fall during the first six months and from 6 to 12 months.
Overall, these findings suggest that performing a standardized walking test at least once a year is essential for fall prevention.
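The ROC evaluation above rests on a simple identity: the area under the ROC curve equals the probability that a randomly chosen faller has a higher component score than a randomly chosen non-faller (the Mann-Whitney statistic). A minimal sketch, with made-up scores rather than the study's data:

```python
import numpy as np

# Illustrative PC scores (not the study's data): non-fallers vs.
# fallers, the fallers shifted upward.
neg = np.array([0.1, 0.4, 0.5, 0.9, 1.2])   # non-fallers
pos = np.array([0.8, 1.1, 1.5, 2.0])        # fallers within 0-6 months

# AUC as the fraction of (faller, non-faller) pairs in which the
# faller scores higher: 17 of the 20 pairs here.
auc = (pos[:, None] > neg[None, :]).mean()   # -> 0.85
```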

Relevance:

20.00%

Publisher:

Abstract:

Pigment coating aims to improve the surface properties of printing papers. The purpose of this work was to find a suitable coating colour for coated coldset paper. The literature part discusses coldset printing and its problems. The fundamentals of the coating method, the properties of the coating colour, and their effect on the coating result are also covered, and some methods for examining the surface of coated paper are presented. The experimental part investigates the effect of different coating-colour compositions, coat weights, and calendering on the printability of the paper. The papers were coated with a Helicoater, and some coating colours were also tested in pilot-scale coating. An explanation for the behaviour of the paper in printing was sought in the surface structure of the coated paper. The best printability is achieved with a coating in which the only pigment is carbonate. The print quality can be improved by using calcined kaolin together with carbonate, but the surface strength of this coating is not sufficient for CSWO printing. Starch pigment improves water and ink absorption, making the printed product drier and more pleasant to the touch, but causes smearing; this is due to too rapid ink setting. A "soft" SB latex is better suited to offset printing than a "hard" latex that also contains PVAc: the "soft" latex gives better surface strength and print quality than the "hard" one. Dusting of the paper in printing can be reduced by increasing the coat weight and lowering the solids content of the coating colour. Calendering cannot improve surface strength or print quality. The explanation for the print quality and printability of the papers studied is found by examining the surface structure of the coating. Print quality is affected most by the coverage of the coating; poor coverage can be improved by increasing the coat weight. Dusting in printing is caused by pigments that are not bound to the paper surface, 
which in turn results from poor water retention of the coating colour. The most useful information on the surface structure of these papers is obtained by examining the surface with a scanning electron microscope (SEM), an atomic force microscope (AFM), and laser-induced plasma spectrometry (LIPS). The advantage of LIPS is that the coat-weight distribution can be determined in both the x-y and z directions simultaneously at the same spot; LIPS also requires very little sample preparation.

Relevance:

20.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), the flow response of each realization must be evaluated. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex, computationally heavy flow model is then run only for this subset, and inference is based on those responses. Our objective is to improve the performance of this approach by using all the available information, not solely the subset of exact responses. Following a machine-learning approach, the subset for which both the approximate and the exact responses are known (identified here by the distance kernel method) is used to construct an error model; this model then corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology exploits all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and more robust.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact response curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model strongly reduces the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response leads to an excellent prediction of the corresponding exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which unnecessary simulations of the exact flow model are avoided through a preliminary evaluation of each proposal. In the third part of the thesis, the proxy coupled to an error model provides this preliminary evaluation for the two-stage MCMC, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that as new flow simulations are performed the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline-intrusion problem in a coastal aquifer.
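The error-model idea admits a compact numerical sketch. Below, plain PCA stands in for FPCA and synthetic curves stand in for flow responses; all names and data are illustrative assumptions, not the thesis code. A linear map is learned from the proxy curves' principal-component scores to the exact curves on a training subset, then used to correct the untrained proxy responses:

```python
import numpy as np

rng = np.random.default_rng(2)
n, nt, n_train = 200, 50, 30
t = np.linspace(0, 1, nt)

# Synthetic "proxy" responses: curves in the span of three basis
# functions; "exact" responses differ by a systematic model error.
proxy = rng.normal(size=(n, 3)) @ np.vstack([np.sin(np.pi * t),
                                             np.cos(np.pi * t),
                                             t])
exact = 1.1 * proxy + 0.2 * np.sin(2 * np.pi * t)

# Dimension reduction of the proxy curves (plain PCA as a stand-in
# for functional PCA), fitted on the training subset only.
mean_p = proxy[:n_train].mean(axis=0)
U, s, Vt = np.linalg.svd(proxy[:n_train] - mean_p, full_matrices=False)
k = 3
scores = (proxy - mean_p) @ Vt[:k].T       # scores for ALL realizations

# Linear regression from scores to exact curves, on the training set.
A = np.hstack([scores[:n_train], np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(A, exact[:n_train], rcond=None)

# Corrected ("expected exact") responses for the untrained realizations.
A_all = np.hstack([scores, np.ones((n, 1))])
corrected = A_all @ coef

err_proxy = np.abs(proxy[n_train:] - exact[n_train:]).mean()
err_corr = np.abs(corrected[n_train:] - exact[n_train:]).mean()
```

In this toy setting the exact response is affine in the proxy scores, so the corrected responses essentially coincide with the exact ones; with real flow models the correction is only approximate, which is why the FPCA-space diagnostics mentioned above matter.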

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND & AIMS: The standard liver volume (SLV) is widely used in liver surgery, especially for living donor liver transplantation (LDLT). All the reported formulas for SLV use body surface area or body weight, which can be influenced strongly by the general condition of the patient. METHODS: We analyzed the liver volumes of 180 Japanese donor candidates and 160 Swiss patients with normal livers to develop a new formula. The dataset was randomly divided into two subsets, the test and validation sample, stratified by race. The new formula was validated using 50 LDLT recipients. RESULTS: Without using body weight-related variables, age, thoracic width measured using computed tomography, and race independently predicted the total liver volume (TLV). A new formula: 203.3-(3.61×age)+(58.7×thoracic width)-(463.7×race [1=Asian, 0=Caucasian]), most accurately predicted the TLV in the validation dataset as compared with any other formulas. The graft volume for LDLT was correlated with the postoperative prothrombin time, and the graft volume/SLV ratio calculated using the new formula was significantly better correlated with the postoperative prothrombin time than the graft volume/SLV ratio calculated using the other formulas or the graft volume/body weight ratio. CONCLUSIONS: The new formula derived using the age, thoracic width and race predicted both the TLV in the healthy patient group and the SLV in LDLT recipients more accurately than any other previously reported formulas.
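The reported formula is simple enough to encode directly. A sketch, with the caveats that the abstract does not state the units (thoracic width in centimeters from CT and volume in milliliters are assumed here) and that the example inputs are invented:

```python
def standard_liver_volume(age_years, thoracic_width_cm, asian):
    """New formula from the abstract:

    TLV = 203.3 - (3.61 x age) + (58.7 x thoracic width)
                - (463.7 x race),  race: 1 = Asian, 0 = Caucasian.
    Units (cm for thoracic width, ml for volume) are assumed.
    """
    race = 1 if asian else 0
    return 203.3 - 3.61 * age_years + 58.7 * thoracic_width_cm - 463.7 * race

# Hypothetical example: a 40-year-old Asian donor, 25 cm thoracic width.
slv = standard_liver_volume(40, 25, asian=True)   # -> 1062.7
```

Note the design point made in the abstract: none of the inputs depends on body weight, so the estimate is not skewed by the patient's general condition.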

Relevance:

20.00%

Publisher:

Abstract:

In this paper we describe the results of a research effort carried out at the Laboratório de Avaliação e Síntese de Substâncias Bioativas (LASSBio, UFRJ) on the use of an abundant Brazilian natural product, safrole (1), the principal chemical constituent of sassafras oil (Ocotea pretiosa), as an attractive synthon for accessing different chemical classes of bioactive compounds, such as prostaglandin analogues, non-steroidal anti-inflammatory agents, and antithrombotic compounds.

Relevance:

20.00%

Publisher:

Abstract:

The role of early-childhood viral infections, cow's-milk-based infant formula, and genetic susceptibility in the development of diabetes-associated autoimmunity. Type 1 diabetes is an autoimmune disease that develops when the insulin-producing beta cells of the pancreas are destroyed by an attack of the body's own immune system. Both heredity and environmental factors are thought to contribute to the disease process, but the exact mechanism of disease onset is unknown. The aim of this study was to investigate the effect of early-childhood environmental factors on the development of beta-cell autoimmunity, with particular emphasis on the joint effects of environmental factors and on the interaction between genetic risk factors and the environment. Cytomegalovirus or enterovirus infection in early childhood did not increase the risk of beta-cell autoimmunity in children at increased genetic risk of type 1 diabetes. Rotavirus infection before the age of six months slightly increased the risk of type 1 diabetes-associated autoimmunity. In a more detailed analysis, however, early-childhood enterovirus infection turned out to be a risk factor for autoantibody formation among children who had received cow's-milk-based infant formula during their first months of life. This finding points to an interaction between enterovirus infection and cow's-milk-based formula in the development of type 1 diabetes-associated autoimmunity. According to the findings, the C1858T polymorphism of the PTPN22 gene affects the activation and proliferative response of CD4+ T cells, with the 1858T allele associated with reduced T-cell-receptor-mediated activation. Carrying the 1858T allele is also associated with an increased incidence of autoantibodies and of clinical diabetes; this association was restricted to individuals exposed to cow's-milk-based formula before the age of six months. 
The results indicate that both interactions among environmental factors and heredity modify the effect of an individual environmental factor on the development of type 1 diabetes-associated autoimmunity. These interactions, among environmental factors and between genes and environment, may explain the inconsistency of previously published results from studies that analysed the effect of only a single environmental factor on the incidence of diabetes.

Relevance:

20.00%

Publisher:

Abstract:

A continuous random variable is expanded as a sum of a sequence of uncorrelated random variables. These variables are principal dimensions in continuous scaling on a distance function, an extension of classic scaling on a distance matrix. For a particular distance, these dimensions are principal components. Some properties are then studied and an inequality is obtained. Diagonal expansions are considered from the same continuous-scaling point of view, by means of the chi-square distance. The geometric dimension of a bivariate distribution is defined and illustrated with copulas. It is shown that the dimension can have the power of the continuum.

Relevance:

20.00%

Publisher:

Abstract:

Mimicry is a central plank of emotional contagion theory; however, it had only been tested with facial and postural emotional stimuli. This study explores the existence of mimicry in voice-to-voice communication by analyzing 8,747 sequences of emotional displays between customers and employees in a call-center context. We listened live to 967 telephone interactions, registered the sequences of emotional displays, and analyzed them with a Markov chain. We also explored other propositions of emotional contagion theory that had yet to be tested in vocal contexts. The results support that mimicry is significantly present at all levels. Our findings fill an important gap in emotional contagion theory, have practical implications for voice-to-voice interactions, and open doors for future vocal mimicry research.
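The Markov-chain analysis of display sequences can be sketched as follows; the labels and sequences are invented for illustration, not the study's coding scheme. Mimicry would appear as large diagonal entries of the transition matrix (a display tending to be answered in kind):

```python
from collections import Counter

# Toy sequences of emotional displays within interactions
# (illustrative labels and data only).
sequences = [
    ["neutral", "positive", "positive", "neutral"],
    ["negative", "negative", "neutral", "positive"],
    ["positive", "positive", "positive"],
]

# Count first-order transitions a -> b within each sequence.
counts = Counter()
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1

# Row-normalize the counts into transition probabilities P[a][b].
states = sorted({s for seq in sequences for s in seq})
P = {}
for a in states:
    total = sum(counts[a, b] for b in states)
    P[a] = {b: counts[a, b] / total if total else 0.0 for b in states}
```

Here `P["positive"]["positive"]` comes out as 0.75: in these toy data, a positive display is most often followed by another positive display, which is the mimicry signature the study tests for.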

Relevance:

20.00%

Publisher:

Abstract:

Invocatio: I.N.D.O.M.