45 results for "second and third order ionospheric effects"


Relevance: 100.00%

Abstract:

We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these genome-wide association study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between three approaches with very different resource requirements. The two methods that included external information performed best, but only marginally so despite substantial differences in complexity.
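For concreteness, here is a minimal Python sketch of the first two modelling approaches described above; all data, covariates, and external effect sizes are hypothetical stand-ins, and the third (Bayesian informative-prior) method is only indicated in a comment since it needs a probabilistic-programming tool.

```python
# Hedged sketch, not the study's code: two ways to add SNP data to a
# clinical risk model. Data and effect sizes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_snps = 1547, 33
clinical = rng.normal(size=(n, 3))                  # stand-ins, e.g. age, PSA
genotypes = rng.integers(0, 3, size=(n, n_snps)).astype(float)  # allele counts 0/1/2
y = rng.integers(0, 2, size=n)                      # toy biopsy outcome

# Method 1: logistic regression with a separate term per SNP
# (uses only the current data; ignores external GWAS effect sizes).
m1 = LogisticRegression(max_iter=1000).fit(np.hstack([clinical, genotypes]), y)

# Method 2: collapse the SNPs into one weighted risk score using external,
# meta-analysed per-allele log-odds ratios (hypothetical values here),
# then fit the clinical covariates plus the single score.
external_log_or = rng.normal(0.05, 0.03, size=n_snps)
risk_score = genotypes @ external_log_or
m2 = LogisticRegression(max_iter=1000).fit(
    np.hstack([clinical, risk_score[:, None]]), y)

# Method 3 (Bayesian logistic regression with priors centred on the external
# estimates) would require a tool such as PyMC or Stan; omitted here.
```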

Relevance: 100.00%

Abstract:

To complement the existing treatment guidelines for all tumour types, ESMO organises consensus conferences to focus on specific issues in each type of tumour. The 2nd ESMO Consensus Conference on Lung Cancer was held on 11-12 May 2013 in Lugano. A total of 35 experts met to address several questions on non-small-cell lung cancer (NSCLC) in each of four areas: pathology and molecular biomarkers, first-line/second and further lines of treatment in advanced disease, early-stage disease, and locally advanced disease. For each question, recommendations were made, including reference to the grade of recommendation and level of evidence. This consensus paper focuses on first-line/second and further lines of treatment in advanced disease.

Relevance: 100.00%

Abstract:

The development of novel effective immunotherapeutic agents, and early clinical data hinting at significant activity in non-small cell lung cancer (NSCLC), have introduced yet another player in the field of management of advanced disease. At present, first-line cytotoxic chemotherapy is generally withheld pending the results of molecular testing for any actionable genetic alteration that could lead to targeted treatment; in the absence of such an alteration, chemotherapy is prescribed as the default therapy. Phase III trials comparing immune checkpoint inhibitors head-to-head with standard platinum-based doublet chemotherapy are underway. Second-line chemotherapy is likewise being challenged in phase III trials, one of which has recently reported positive results in advanced squamous cell carcinoma. In tumors harboring actionable transforming genetic alterations such as EGFR mutations and ALK rearrangements, second- and third-generation inhibitors allow for multiple lines of targeted treatment beyond initial resistance, postponing the use of cytotoxic chemotherapy to very late lines of therapy. Chemotherapy, a longstanding but still current standard of care capable of prolonging survival, improving quality of life, and relieving symptoms, thus sees its role increasingly restricted to clinical, immunological, and molecular subsets of patients in whom its activity and efficacy have never been tested prospectively.

Relevance: 100.00%

Abstract:

Numerous links between genetic variants and phenotypes are known, and genome-wide association studies have dramatically increased the number of genetic variants associated with traits over the last decade. However, how changes in the DNA perturb molecular mechanisms and impact the phenotype of an organism remains elusive. Studies suggest that many trait-associated variants lie in the non-coding region of the genome and probably act through the regulation of gene expression. During my thesis I investigated how genetic variants affect gene expression through gene regulatory mechanisms. The first chapter was a collaborative project with a pharmaceutical company, in which we investigated genome-wide copy number variants (CNVs) among cynomolgus monkeys (Macaca fascicularis) used in pharmaceutical studies and associated them with changes in gene expression. We found substantial copy number variation and identified CNVs linked to tissue-specific expression changes of proximal genes. The second and third chapters focus on genetic variation in humans and its effects on gene regulatory mechanisms and gene expression. The second chapter studies two human trios (father, mother, child), in which the allelic effects of genetic variation on genome-wide gene expression, protein-DNA binding and chromatin modifications were investigated. We found abundant allele-specific activity across all measured molecular phenotypes and showed extensive coordinated behavior among them. In the third chapter, we investigated the impact of genetic variation on these phenotypes in 47 unrelated individuals. We found that chromatin phenotypes are organized into local variable modules, often linked to genetic variation and gene expression. Our results suggest that chromatin variation emerges as a result of perturbations of cis-regulatory elements by genetic variants, leading to gene expression changes. The work of this thesis provides novel insights into how genetic variation impacts gene expression by perturbing regulatory mechanisms.

Relevance: 100.00%

Abstract:

By combining a life course perspective with stress theory within a psychosocial approach, this thesis shows how individual and collective victimisation experiences marked the life course, beliefs and well-being of a cohort of young adults who lived through the wars in the former Yugoslavia. In the first article, latent class growth analyses were applied to identify different exclusion trajectories between 1990 and 2006. The analysis of these trajectories highlighted the intersections between individual lives, socio-historical context and time, and demonstrated that experiences of war and socio-economic exclusion leave long-term traces on well-being. The second and third articles showed that the belief in a just world was shattered by socio-economic precariousness and war victimisation at both the individual and contextual levels. A curvilinear effect and cross-level interactions indicated that these relations varied according to the intensity of victimisation at the contextual level. Recency effects were also noted. The fourth article showed that the negative impact of victimisation on well-being was partly explained by an erosion of the belief in a just world. Furthermore, although high believers were more satisfied with their lives, the strength of this relation varied depending on the level of victimisation in particular contexts. This thesis presents a dynamic multilevel model in which the belief in a just world no longer acts as a stable personal resource but erodes in the face of victimisation, leading to lower well-being. This work stresses the importance of articulating individual and contextual levels, as well as considering the temporal dimension, to explain the links between victimisation, belief in a just world and well-being.

Relevance: 100.00%

Abstract:

1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance than models with pseudo-absence data simulated completely at random (strategy 1). 5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not used as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
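As an illustration of strategy (iii), the sketch below fits a logistic GLM on presence records plus pseudo-absences sampled from a pool of habitat-matched sites and reports an AUC; the covariates, the pool, and the sample sizes are invented for the example rather than taken from the study.

```python
# Minimal sketch of pseudo-absence strategy (iii): presences from collection
# records, pseudo-absences drawn from habitat-matched candidate sites.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
presence = rng.normal(1.0, 1.0, size=(200, 4))   # env. covariates at records
pool = rng.normal(0.0, 1.0, size=(5000, 4))      # habitat-matched sites

# Sites with historical presence records should be excluded from the pool
# (the paper found this improves performance); here the pool is already clean.
pseudo_abs = pool[rng.choice(len(pool), size=200, replace=False)]

X = np.vstack([presence, pseudo_abs])
y = np.r_[np.ones(len(presence)), np.zeros(len(pseudo_abs))]

glm = LogisticRegression(max_iter=1000).fit(X, y)
# Training AUC only; the study evaluated models properly (deviance, Kappa, AUC).
print("training AUC:", roc_auc_score(y, glm.predict_proba(X)[:, 1]))
```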

Relevance: 100.00%

Abstract:

This work compares the structural/dynamic features of the wild-type alpha 1b-adrenergic receptor (AR) with those of the D142A active mutant and the agonist-bound state. The two active receptor forms were compared in their isolated states as well as in their ability to form homodimers and to recognize the G alpha q beta 1 gamma 2 heterotrimer. The analysis of the isolated structures revealed that, although the mutation- and agonist-induced active states of the alpha 1b-AR are different, they nevertheless share several structural peculiarities, including (a) the release of some constraining interactions found in the wild-type receptor and (b) the opening of a cytosolic crevice formed by the second and third intracellular loops and the cytosolic extensions of helices 5 and 6. Accordingly, their tendencies to form homodimers also show commonalities and differences. In fact, in both active receptor forms, helix 6 plays a crucial role in mediating homodimerization; however, the homodimeric models result from different interhelical assemblies. Along the same lines, in both active receptor forms the opened cytosolic crevice recognizes similar domains on the G protein. However, the docking solutions are differently populated, and the receptor-G protein preorientation models suggest that the final complexes should be characterized by different interaction patterns.

Relevance: 100.00%

Abstract:

STATEMENT OF PROBLEM: Identifying the ownership of lost dentures is a common and expensive problem in long-term care facilities (LTCFs) and hospitals. PURPOSE: The purpose of this study was to evaluate the reliability of radiofrequency identification (RFID) for identifying the dentures of LTCF residents after 3 and 6 months. MATERIAL AND METHODS: Thirty-eight residents of 2 LTCFs in Switzerland agreed to participate after providing informed consent. Each tag was programmed with the family name and first name of the participant and then inserted in the denture. After placement of the tag, the information was read. A second and third assessment to review the functioning of the tag occurred at 3 and 6 months, and defective tags (if present) were reported and replaced. The data were analyzed with descriptive statistics. RESULTS: At the 3-month assessment of 34 residents (63 tags), 1 tag was unreadable and 62 tags (98.2%) were operational. At 6 months, the tags of 27 of the enrolled residents (50 tags) were available for review. No examined tag was defective at this time period. CONCLUSIONS: Within the limits of this study (number of patients, 6-month time span), RFID appears to be a reliable method of tracking and identifying dentures, with only 1 of 65 devices being unreadable at 3 months and 100% of the 50 initially placed tags being readable at the end of the trial.

Relevance: 100.00%

Abstract:

WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The AMS 800 urinary control system is the gold standard for the treatment of urinary incontinence due to sphincter insufficiency. Despite excellent functional outcomes and the latest technological improvements, the revision rate remains significant. To overcome the shortcomings of the current device, we developed a modern electromechanical artificial urinary sphincter. The results demonstrated that this new sphincter is effective and well tolerated up to 3 months. This preliminary study represents a first step in the clinical application of novel technologies and of an alternative compression mechanism for the urethra. OBJECTIVES: To evaluate the effectiveness in continence achievement of a new electromechanical artificial urinary sphincter (emAUS) in an animal model. To assess the urethral response and the animals' general response to short-term and mid-term activation of the emAUS. MATERIALS AND METHODS: The principle of the emAUS is the electromechanical induction of alternating compression of successive segments of the urethra by a series of cuffs activated by artificial muscles. Between February 2009 and May 2010 the emAUS was implanted in 17 sheep divided into three groups. The first phase aimed to measure bladder leak point pressure during activation of the device. The second and third phases aimed to assess tissue response to the presence of the device after 2-9 weeks and after 3 months, respectively. Histopathological and immunohistochemical evaluation of the urethra was performed. RESULTS: Bladder leak point pressure was measured at levels between 1091 ± 30.6 cmH2O and 1244.1 ± 99 cmH2O (mean ± standard deviation), depending on the number of cuffs used. On gross examination, the explanted urethra showed no sign of infection, atrophy or stricture. On microscopic examination, no significant difference in structure was found between urethra surrounded by a cuff and control urethra. In the peripheral tissues, the implanted material elicited a chronic foreign body reaction. Apart from one case, specimens did not show a significant presence of lymphocytes, polymorphonuclear leucocytes, necrosis or cell degeneration. Immunohistochemistry confirmed the absence of macrophages in the samples. CONCLUSIONS: This animal study shows that the emAUS can provide continence. This new electronically controlled, sequentially alternating compression mechanism can avoid damage to the urethral vasculature, at least up to 3 months after implantation. After this positive proof of concept, long-term studies are needed before clinical application can be considered.

Relevance: 100.00%

Abstract:

BACKGROUND: Although it is well recognized that the diagnosis of hypertension should be based on blood pressure (BP) measurements taken on several occasions, notably to account for a transient elevation of BP on the first readings, the prevalence of hypertension in populations has often relied on measurements at a single visit. OBJECTIVE: To identify an efficient strategy for reliably assessing the prevalence of hypertension in the population with regard to the number of BP readings required. DESIGN: Population-based survey of BP and follow-up information. SETTING AND PARTICIPANTS: All residents aged 25-64 years in an area of Dar es Salaam (Tanzania). MAIN OUTCOME MEASURES: Three BP readings at four successive visits in all participants with high BP (n = 653) and in 662 participants without high BP, measured with an automated BP device. RESULTS: BP decreased substantially from the first to the third reading at each of the four visits. BP also decreased substantially between the first two visits, but only a little between subsequent visits. Consequently, the prevalence of high BP based on the third reading, or on the average of the second and third readings, at the second visit did not differ greatly from estimates based on readings at the fourth visit. BP decreased similarly whether the first three visits were separated by 3-day or 14-day intervals. CONCLUSIONS: Taking triplicate readings at two visits, possibly separated by just a few days, could be a minimal strategy for adequately assessing mean BP and the prevalence of hypertension at the population level. A sound strategy is important for reliably assessing the burden of hypertension in populations.
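As a toy illustration of that minimal strategy, the sketch below computes a prevalence from the mean of the second and third readings at the second visit; the simulated readings, the 140 mmHg systolic threshold, and the omission of diastolic pressure are all simplifying assumptions of the example, not details from the study.

```python
# Toy sketch: prevalence of high systolic BP from the mean of the 2nd and
# 3rd readings at the second visit, using a conventional 140 mmHg cut-off.
import numpy as np

rng = np.random.default_rng(2)
# shape: (participants, visits, readings per visit), systolic BP in mmHg
sbp = rng.normal(135, 15, size=(1000, 2, 3))

visit2_mean = sbp[:, 1, 1:3].mean(axis=1)   # mean of 2nd and 3rd readings
prevalence = (visit2_mean >= 140).mean()
print(f"estimated prevalence of high systolic BP: {prevalence:.1%}")
```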

Relevance: 100.00%

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features: this model automatically provides an accurate classifier and a ranking of the relevance of the single features (the spectral bands), so that only the variables important for solving the problem are used by the classifier. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, a source of information not previously considered in remote sensing, is addressed in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
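To make the active-learning contribution concrete, here is a schematic uncertainty-sampling loop in Python, in which the classifier repeatedly queries the "user" (here a synthetic oracle) for the labels of the pixels it is least certain about; the SVM classifier and the probability-based uncertainty criterion are generic stand-ins, not the thesis's exact algorithms.

```python
# Sketch of active learning by uncertainty sampling on toy pixel spectra.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 10))              # pixel spectra (toy data)
true_labels = (X[:, 0] > 0).astype(int)      # oracle standing in for the user

labeled = list(rng.choice(5000, size=20, replace=False))
for _ in range(5):                           # five user-interaction rounds
    clf = SVC(probability=True).fit(X[labeled], true_labels[labeled])
    proba = clf.predict_proba(X)[:, 1]
    uncertainty = -np.abs(proba - 0.5)       # closest to 0.5 = most uncertain
    ranked = np.argsort(uncertainty)[::-1]   # most uncertain pixels first
    queries = [i for i in ranked if i not in labeled][:10]
    labeled.extend(queries)                  # the "user" labels 10 queried pixels
```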

Relevance: 100.00%

Abstract:

Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables; one appeared particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps; thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any other ground for comparison, there is no reason to be sure that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to perform such a test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can therefore be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators based on bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
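The core idea of a characteristic-function estimator can be sketched compactly: choose the parameters that make the model characteristic function match the empirical one over a grid of arguments. The toy below uses a plain Gaussian characteristic function and an arbitrary Gaussian weight function as stand-ins for the joint unconditional characteristic function of the stochastic volatility jump-diffusion model derived in the thesis.

```python
# Toy characteristic-function estimator: minimize a weighted L2 distance
# between the model CF and the empirical CF of observed returns.
import numpy as np
from scipy.optimize import minimize

returns = np.random.default_rng(4).normal(0.0005, 0.01, size=2500)
u = np.linspace(-50, 50, 201)                # CF evaluation grid

def empirical_cf(u, x):
    # phi_hat(u) = mean of exp(i * u * x_j) over the sample
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def model_cf(u, mu, sigma):
    # Gaussian CF: exp(i*u*mu - u^2 * sigma^2 / 2); a stand-in for the
    # (much richer) joint unconditional CF of the jump-diffusion model.
    return np.exp(1j * u * mu - 0.5 * (u * sigma) ** 2)

ecf = empirical_cf(u, returns)

def loss(theta):
    mu, log_sigma = theta
    diff = model_cf(u, mu, np.exp(log_sigma)) - ecf
    return np.sum(np.abs(diff) ** 2 * np.exp(-u**2 / 100))  # weight function

fit = minimize(loss, x0=[0.0, np.log(0.02)], method="Nelder-Mead")
print("mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```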

Relevance: 100.00%

Abstract:

The comparison of cancer prevalence with cancer mortality can, under some hypotheses, lead to an estimate of the registration rate. A method is proposed in which the cases with cancer as a cause of death are divided into 3 categories: (1) cases already known to the registry; (2) unknown cases having occurred before the registry's creation date; (3) unknown cases occurring while the registry operates. The estimate is then the number of cases in the first category divided by the total of those in categories 1 and 3 (only these should have been registered). The method is applied to data from the Canton de Vaud. Survival rates from the Norwegian Cancer Registry are used to compute the numbers of unknown cases to be included in the second and third categories, respectively. The discussion focuses on the possible determinants of the obtained completeness rates for various cancer sites.
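Stated compactly, with symbols of our own choosing rather than the paper's:

```latex
% n_1: deaths whose cancer was already known to the registry (category 1)
% n_3: deaths from cases incident while the registry operated (category 3)
% Category-2 cases predate the registry and were never eligible for
% registration, so they are excluded from the denominator.
\hat{c} = \frac{n_1}{n_1 + n_3}
```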

Relevance: 100.00%

Abstract:

The Manival, near Grenoble (French Prealps), is a very active debris-flow torrent equipped with a large sediment trap (25 000 m3) protecting an urbanized alluvial fan from debris flows. We began monitoring the sediment budget of the catchment controlled by the trap in spring 2009. A terrestrial laser scanner is used to monitor topographic changes in a small gully, the main channel, and the sediment trap. In the main channel, 39 cross-sections are surveyed after every event. Three periods of intense geomorphic activity are documented here. The first was induced by a convective storm in August 2009, which triggered a debris flow that deposited ~1,800 m3 of sediment in the trap. The debris flow originated in the upper reach of the main channel, and our observations showed that sediment outputs were entirely supplied by channel scouring. Hillslope debris flows were initiated on talus slopes, as revealed by terrestrial LiDAR resurveys; however, they were disconnected from the main channel. The second and third periods of geomorphic activity were induced by long-duration, low-intensity rainfall events in September and October 2009, which generated small flow events with intense bedload transport. These events contributed to recharging the debris-flow channel with sediment by depositing large gravel dunes propagating from the headwaters. The total recharge in the torrent from these bedload transport events was estimated at 34% of the sediment erosion induced by the August debris flow.

Relevance: 100.00%

Abstract:

BACKGROUND: An important component of the policy to deal with the H1N1 pandemic in 2009 was to develop and implement vaccination. Since pregnant women were found to be at particular risk of severe morbidity and mortality, the World Health Organization and the European Centre for Disease Prevention and Control advised vaccinating pregnant women regardless of trimester of pregnancy. This study reports a survey of vaccination policies for pregnant women in European countries. METHODS: Questionnaires were sent to the competent authorities of 27 European countries via the European Medicines Agency and to the leaders of European Surveillance of Congenital Anomalies registries in 21 countries. RESULTS: Replies were received for 24 of 32 European countries, of which 20 had an official pandemic vaccination policy. These 20 countries all had a policy targeting pregnant women. In two of the four countries without an official pandemic vaccination policy, some vaccination of pregnant women nevertheless took place. In 12 of the 20 countries the policy was to vaccinate only women in the second and third trimesters of pregnancy, and in 8 of the 20 the policy was to vaccinate pregnant women regardless of trimester. Seven different vaccines were used for pregnant women, of which four contained adjuvants. Few countries had mechanisms to monitor the number of vaccinations given specifically to pregnant women over time. Vaccination uptake varied. CONCLUSIONS: Differences in pandemic vaccination policy and practice might relate to variation in perceptions of vaccine efficacy and safety, operational issues related to vaccine manufacturing and procurement, and vaccination campaign systems. Increased monitoring of pandemic influenza vaccine coverage among pregnant women is recommended to enable evaluation of vaccine safety in pregnancy and of pandemic vaccination campaign effectiveness.