103 results for Gradient-based approaches


Relevance: 30.00%

Abstract:

PURPOSE OF REVIEW: Invasive candidiasis is a severe infectious complication occurring mostly in onco-hematologic and surgical patients. Its conventional diagnosis is insensitive and often late, leading to delayed treatment and high mortality. The purpose of this article is to review recent contributions to the nonconventional diagnostic approaches for invasive candidiasis, both for the detection of the episode and the characterization of the etiologic agent. RECENT FINDINGS: Antigen-based tests to detect invasive candidiasis comprise a specific test, mannan, as well as a nonspecific test, beta-D-glucan. Both have moderate sensitivity and high specificity, and cannot be recommended alone as a negative screening tool or a positive syndrome-driven diagnostic tool. Molecular tests have not yet reached the stage of rapid, easy-to-use, standardized assays that would ideally complement blood culture at the time of blood sampling. New tests (fluorescence in-situ hybridization or mass spectrometry) significantly reduce the delay in identifying Candida at the species level in positive blood cultures, and should have a positive impact on earlier appropriate antifungal therapy and possibly on outcome. SUMMARY: Both antigen-based and molecular tests appear to be promising new tools to complement and accelerate the conventional diagnosis of invasive candidiasis, with an expected significant impact on earlier and more focused treatment and on prognosis.
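As a quick aside on why tests with moderate sensitivity and high specificity are hard to use alone, a minimal worked example in Python; the sensitivity, specificity and prevalence values are illustrative assumptions, not figures reported in the review:

```python
# Bayes' rule applied to a hypothetical antigen test; all numbers are
# assumptions chosen only to illustrate the screening limitation above.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary diagnostic test."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(sensitivity=0.75, specificity=0.95, prevalence=0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV ~0.44, NPV ~0.99
```

Even with 95% specificity, nearly half of the positives are false at 5% prevalence, and a 75% sensitivity leaves too many episodes undetected for negative screening.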

Relevance: 30.00%

Abstract:

Forests are key ecosystems of the Earth and are associated with a large range of functions. Many of these functions are beneficial to humans and are referred to as ecosystem services. Sustainable development requires that all relevant ecosystem services be quantified, managed and monitored equally. Natural resource management therefore targets the services associated with ecosystems. The main hypothesis of this thesis is that the spatial and temporal domains of relevant services do not correspond to a discrete forest ecosystem. As a consequence, the services are not quantified, managed and monitored in an equal and sustainable manner. The aims of the thesis were therefore to test this hypothesis, establish an improved conceptual approach and provide spatial applications for the relevant land cover and structure variables. The study was carried out in western Switzerland, based primarily on data from a countrywide landscape inventory. This inventory is part of the third Swiss national forest inventory and assesses continuous landscape variables based on a regular sampling of true colour aerial imagery. In addition, land cover variables were derived from Landsat 5 TM passive sensor data, and land structure variables from active sensor data acquired with a small-footprint laser scanning system. The results confirmed the main hypothesis, as relevant services did not scale well with the forest ecosystem. Instead, a new conceptual approach for the sustainable management of natural resources was described. This concept quantifies the services as a continuous function of the landscape, rather than a discrete function of the forest ecosystem. The explanatory landscape variables are therefore called continuous fields, and the forest becomes a dependent and function-driven management unit. Continuous field mapping methods were established for land cover and structure variables. In conclusion, the discrete forest ecosystem is an adequate planning and management unit. However, monitoring the state of, and trends in, the sustainability of services requires them to be quantified as a continuous function of the landscape. Sustainable natural resource management iteratively combines the ecosystem and gradient approaches.
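A minimal sketch of the continuous-field idea in Python (the grid, window size and threshold are hypothetical, not the inventory's actual specifications): services are quantified from a continuous canopy-cover fraction per cell, and any discrete "forest" unit is derived from that field afterwards.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative binary tree-cover grid (1 = tree present), e.g. from
# classified aerial imagery. Synthetic data, not the Swiss NFI layers.
rng = np.random.default_rng(0)
tree_cover = (rng.random((200, 200)) < 0.4).astype(float)

# Continuous-field approach: keep canopy cover as a fraction per cell,
# here a moving-window mean (window size of 25 cells is an assumption).
canopy_fraction = uniform_filter(tree_cover, size=25)

# The discrete management unit is then *derived from* the field,
# rather than being the primary object of quantification.
forest_mask = canopy_fraction > 0.2
print(canopy_fraction.mean(), forest_mask.mean())
```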

Relevance: 30.00%

Abstract:

MRI visualization of devices is traditionally based on signal loss due to T2* effects originating from local susceptibility differences. To visualize nitinol devices with positive contrast, a recently introduced postprocessing method is adapted to map the induced susceptibility gradients. This method operates on regular gradient-echo MR images and maps the shift in k-space in a (small) neighborhood of every voxel by Fourier analysis followed by a center-of-mass calculation. The quantitative map of the local shifts generates the positive contrast image of the devices, while areas without susceptibility gradients render a background with noise only. The positive signal response of this method depends only on the choice of the voxel neighborhood size. The properties of the method are explained and the visualizations of a nitinol wire and two stents are shown for illustration.
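A rough sketch of the mapping step as described (2D, brute-force; the neighborhood size and all implementation details beyond "local Fourier analysis followed by a center-of-mass calculation" are assumptions):

```python
import numpy as np

def susceptibility_gradient_map(img, w=9):
    """Positive-contrast map from a complex gradient-echo image.

    For every voxel, Fourier-transform a small w x w neighborhood and
    compute the center of mass of the local k-space magnitude; its offset
    from the k-space origin estimates the local shift, and the norm of
    that shift is returned as the contrast value. 2D sketch only.
    """
    h = w // 2
    pad = np.pad(img, h, mode="edge")
    ky, kx = np.meshgrid(np.arange(w) - h, np.arange(w) - h, indexing="ij")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(pad[i:i + w, j:j + w])))
            m = spec.sum()
            out[i, j] = np.hypot((ky * spec).sum() / m, (kx * spec).sum() / m)
    return out  # noise-only background where no susceptibility gradients exist
```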

Relevance: 30.00%

Abstract:

Development and Phase 3 testing of the most advanced malaria vaccine, RTS,S/AS01, indicate that malaria vaccine R&D is moving into a new phase. Field trials of several research malaria vaccines have also confirmed that it is possible to impact the host-parasite relationship through vaccine-induced immune responses to multiple antigenic targets using different platforms. Other approaches have been appropriately tested but proved disappointing after clinical evaluation. As the malaria community considers the potential role of a first-generation malaria vaccine in malaria control efforts, it is an apposite time to carefully document terminated and ongoing malaria vaccine research projects so that lessons learned can be applied to increase the chances of success for second-generation malaria vaccines over the next 10 years. The most comprehensive resource on malaria vaccine projects is a spreadsheet compiled by WHO with input from funding agencies, sponsors and investigators worldwide. This spreadsheet, available from WHO's website, is known as "the rainbow table". By summarizing the published, and some unpublished, information available for each project on the rainbow table, the most comprehensive review of malaria vaccine projects published in the last several years is provided below.

Relevance: 30.00%

Abstract:

Gene therapy approaches using recombinant adeno-associated virus serotype 2 (rAAV2) and serotype 8 (rAAV8) have achieved significant clinical benefits. The generation of rAAV Reference Standard Materials (RSM) is key to providing points of reference for particle titer, vector genome titer, and infectious titer for gene transfer vectors. Following the example of the rAAV2RSM, here we have generated and characterized a novel RSM based on rAAV serotype 8. The rAAV8RSM was produced using transient transfection, and the purification was based on density gradient ultracentrifugation. The rAAV8RSM was distributed for characterization along with standard assay protocols to 16 laboratories worldwide. Mean titers and 95% confidence intervals were determined for capsid particles (mean, 5.50×10^11 pt/ml; CI, 4.26×10^11 to 6.75×10^11 pt/ml), vector genomes (mean, 5.75×10^11 vg/ml; CI, 3.05×10^11 to 1.09×10^12 vg/ml), and infectious units (mean, 1.26×10^9 IU/ml; CI, 6.46×10^8 to 2.51×10^9 IU/ml). Notably, there was a significant degree of variation between institutions for each assay despite the relatively tight correlation of assay results within an institution. This outcome emphasizes the need to use RSMs to calibrate the titers of rAAV vectors in preclinical and clinical studies at a time when the field is maturing rapidly. The rAAV8RSM has been deposited at the American Type Culture Collection (VR-1816) and is available to the scientific community.
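For the flavor of the across-laboratory summary, a sketch of a mean-and-95%-CI computation on the log scale (treating titers as log-normal is my assumption, and the per-laboratory titers below are synthetic, not the reported data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-laboratory vector-genome titers (vg/ml); the real
# rAAV8RSM measurements are not reproduced here.
titers = np.array([3.2e11, 7.5e11, 4.1e11, 9.8e11, 2.7e11, 6.3e11])

log_t = np.log10(titers)
mean_log = log_t.mean()
# 95% CI on the log scale, then back-transformed
lo, hi = stats.t.interval(0.95, len(log_t) - 1,
                          loc=mean_log, scale=stats.sem(log_t))
print(f"mean {10**mean_log:.2e} vg/ml, 95% CI {10**lo:.2e} to {10**hi:.2e}")
```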

Relevance: 30.00%

Abstract:

Cancer treatment has shifted from cytotoxic, nonspecific chemotherapy to chronic treatment with targeted molecular therapies. These new classes of drugs, directed against cancer-specific molecules and signaling pathways, act at a particular level of tumor cell development. However, in both types of therapeutic approach (standard cytotoxic chemotherapy and targeted signal transduction inhibition), toxicity and side effects can occur. The aim of this thesis was to investigate various approaches to improve the activity and tolerability of cancer treatment: a) in a clinical setting, by molecular targeting through the use of tyrosine kinase inhibitors (TKIs), whose dosage can be adapted to each patient according to plasma levels, and b) in a preclinical model, by tissue targeting with locoregional administration of cytotoxic chemotherapy to increase drug exposure in the target tissue while reducing the systemic toxicity of the treatment. A comprehensive program for the therapeutic drug monitoring (TDM) of the new class of targeted anticancer TKIs in patients' blood was therefore initiated, comprising the development, validation and clinical application of a multiplex assay for TKIs in plasma from cancer patients by liquid chromatography coupled to tandem mass spectrometry. Information on drug exposure may be clinically useful for optimal follow-up of patients' anticancer treatment, especially in cases of suboptimal clinical response, adverse drug reactions or the numerous risks of drug-drug interactions. In this context, better knowledge of the potential drug interactions between TKIs and widely prescribed co-medications is of critical importance for clinicians seeking to improve their daily care of cancer patients. For imatinib, one of the first TKIs, TDM interpretation is currently based on total plasma concentrations, although only the unbound (free) form is likely to enter cells and exert the pharmacological action. Pharmacokinetic analysis of total and free plasma levels of imatinib, measured simultaneously in patients, allowed us to refine and validate a population pharmacokinetic model integrating the factors that influence patient exposure to the pharmacologically active species. The equation developed from this model may be used to extrapolate the free imatinib plasma concentration from the total plasma levels currently measured in TDM. Finally, the specific influence of P-glycoprotein on the intracellular disposition of TKIs was studied in cell systems using an siRNA silencing approach. Another way to enhance the selectivity of anticancer treatment is the loco-regional administration of a cytostatic agent to the target organ while sparing non-affected tissues. Isolated lung perfusion (ILP) was designed for the treatment of loco-regional malignancies of the lung, but clinical results have so far been disappointing. It has been shown in a preclinical rat model that ILP with the cytotoxic agent doxorubicin alone allows high drug uptake in lung tissue and low systemic toxicity, but is characterized by high spatial heterogeneity of tissue drug exposure, with doxorubicin uptake in tumor comparatively smaller than in normal lung tissue.
Photodynamic therapy (PDT) is a new approach for the treatment of superficial tumors: a sensitizer activated by laser light at a specific wavelength disrupts the endothelial barrier of tumor vessels, locally increasing the distribution of cytostatics into the tumor tissue. PDT pre-treatment before intravenous administration of liposomal doxorubicin was indeed shown to selectively increase drug uptake in tumors in a rat model of sarcoma tumors in the lung.
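To make the total-to-free extrapolation concrete, a minimal sketch assuming a single saturable plasma-protein binding site (the model form and all parameter values are illustrative assumptions; the thesis's actual population model is not reproduced in this abstract):

```python
import math

def free_concentration(c_total, b_max, k_d):
    """Free drug concentration from total, assuming one saturable
    binding site on a plasma protein (e.g., AGP for imatinib).

    Solves C_total = C_free + B_max * C_free / (K_d + C_free),
    i.e. the positive root of
    C_free**2 + (K_d + B_max - C_total) * C_free - K_d * C_total = 0.
    """
    b = k_d + b_max - c_total
    return (-b + math.sqrt(b * b + 4.0 * k_d * c_total)) / 2.0

# Hypothetical numbers (ng/ml), not the thesis's estimates: at a total
# trough of 2500, the free level comes out in the low percent range.
print(free_concentration(2500.0, b_max=40000.0, k_d=1500.0))  # ~96
```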

Relevance: 30.00%

Abstract:

BACKGROUND: Pharmacy-based case mix measures are an alternative source of information to the relatively scarce outpatient diagnosis data. But most published tools use national drug nomenclatures and offer no head-to-head comparisons between drug-related and diagnosis-based categories. The objective of the study was to test the accuracy of drug-based morbidity groups derived from the World Health Organization Anatomical Therapeutic Chemical Classification of drugs by checking them against diagnosis-based groups. METHODS: We compared drug-based categories with their diagnosis-based analogues using anonymous data on 108,915 individuals insured with one of four companies. They were followed throughout 2005 and 2006 and hospitalized at least once during this period. The agreement between the two approaches was measured by weighted kappa coefficients. The reproducibility of the drug-based morbidity measure over the two years was assessed for all enrollees. RESULTS: Eighty percent of individuals used a drug associated with at least one of the 60 morbidity categories derived from drug dispensation. After accounting for inpatient under-coding, fifteen conditions agreed sufficiently with their diagnosis-based counterparts to be considered alternative strategies to diagnoses. In addition, they exhibited good reproducibility and yielded prevalence estimates in accordance with national estimates. For 22 conditions, drug-based information accurately identified a subset of the population defined by diagnoses. CONCLUSIONS: Most categories provide insurers with health status information that could be exploited for healthcare expenditure prediction or ambulatory cost control, especially when ambulatory diagnoses are not available. However, due to insufficient concordance with their diagnosis-based analogues, their use as morbidity indicators is limited.
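Agreement between two graded classifications of the same individuals is what weighted kappa measures; a small sketch with synthetic category assignments (linear weights are one plausible choice, as the study's exact weighting scheme is not stated in the abstract):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Synthetic example: a severity category (0-3) assigned to the same 1,000
# individuals by a drug-based and a diagnosis-based grouper. Hypothetical
# data, not the study's insurance records.
rng = np.random.default_rng(1)
diag = rng.integers(0, 4, size=1000)
drug = np.clip(diag + rng.integers(-1, 2, size=1000), 0, 3)

# Linear weights penalize near-misses less than distant disagreements.
print(cohen_kappa_score(diag, drug, weights="linear"))
```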

Relevance: 30.00%

Abstract:

Background: Hirschsprung's disease (HSCR) is a congenital malformation of the enteric nervous system due to the arrest of migration of the neural crest cells that would form the myenteric and submucosal plexuses. It leads to an aganglionic intestinal segment, which is permanently contracted, causing intestinal obstruction. Its incidence is approximately 1/5,000 births, and males are more frequently affected, with a male/female ratio of 4/1. The diagnosis is in most cases made within the first year of life. Rectal biopsy of the mucosa and submucosa is the diagnostic gold standard. Purpose: The aim of this study was to compare two surgical approaches for HSCR, the Duhamel technique and the transanal endorectal pull-through (TEPT), in terms of indications, duration of surgery, duration of hospital stay, postoperative treatment, complications, frequency of enterocolitis and functional outcomes. Methods: Fifty-nine patients were treated for HSCR by one of the two methods in our department of pediatric surgery between 1994 and 2010. These patients were separated into two groups (I: Duhamel, II: TEPT), which were compared on the basis of medical records, using ANOVA for the statistical comparison. The first group includes 43 patients and the second 16 patients. It is noteworthy that twenty-four patients (about 41% of all patients) were referred from abroad (Western Africa). Continence was evaluated with the Krickenbeck score. Results: Operation duration, hospital stay, postoperative fasting and duration of postoperative antibiotics were significantly shorter (p < 0.05) in group II (TEPT), whereas age at operation and length of the aganglionic segment showed no significant difference between the two groups. The continence follow-up showed generally good results (Krickenbeck scores 1; 2.1; 3.1) in both groups, with a slight tendency to constipation in group I and soiling in group II. Conclusion: We found two indications for the Duhamel method: referral from a country without careful postoperative surveillance and/or a previous colostomy. Even if the Duhamel technique tends to be replaced by the TEPT, it remains the best operative approach for some selected patients. The TEPT has also shown advantages but must be followed carefully, among other reasons because of the postoperative dilatations. Our postoperative standards, such as digital rectal examination and anal dilatations, seem to reduce the occurrence of complications such as rectal spur and anal/anastomotic stenosis in the Duhamel and TEPT techniques, respectively.
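The group comparison can be reproduced in spirit with a one-way ANOVA on, say, operation duration; the durations below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic operation durations in minutes for the two techniques
# (hypothetical means and spreads, matching only the group sizes).
rng = np.random.default_rng(2)
duhamel = rng.normal(loc=180, scale=30, size=43)  # group I, n = 43
tept = rng.normal(loc=140, scale=25, size=16)     # group II, n = 16

f_stat, p_value = f_oneway(duhamel, tept)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> shorter in TEPT
```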

Relevance: 30.00%

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using a) real absences, b) pseudo-absences selected randomly from the background, and c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and to perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
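A minimal sketch of strategy (b), random background pseudo-absences, with a GLM and an AIC readout; the environmental predictors and niche are synthetic stand-ins for the virtual-species setup:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_pres, n_pseudo = 200, 2000

# Presences sampled from a warm/wet niche in three synthetic predictors;
# pseudo-absences drawn uniformly at random from the background.
X_pres = rng.normal([1.0, 1.0, 0.0], 0.5, size=(n_pres, 3))
X_back = rng.normal(0.0, 1.0, size=(n_pseudo, 3))

X = sm.add_constant(np.vstack([X_pres, X_back]))
y = np.r_[np.ones(n_pres), np.zeros(n_pseudo)]

model = sm.Logit(y, X).fit(disp=0)
print(model.params)  # coefficients of the fitted GLM
print(model.aic)     # AIC, for information-theoretic model selection
```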

Relevance: 30.00%

Abstract:

Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
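The core discipline described, training a model on one data set and testing it on data never used for training, looks roughly like this sketch (synthetic matrices standing in for expression data; MAQC-II's actual pipelines varied by team, and Matthews correlation is used here only as one common classification metric):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

# Synthetic stand-in for a gene-expression matrix (samples x genes)
# with a binary endpoint label per sample.
rng = np.random.default_rng(4)
X_train, y_train = rng.normal(size=(100, 500)), rng.integers(0, 2, 100)
X_test, y_test = rng.normal(size=(40, 500)), rng.integers(0, 2, 40)

# The model is fit on the training set only; the held-out set mimics the
# external validation data the teams never saw during training.
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
clf.fit(X_train, y_train)
print(matthews_corrcoef(y_test, clf.predict(X_test)))
```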

Relevance: 30.00%

Abstract:

Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision is considered. In a first application, a nonlinear classifier is applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel is injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what he believes has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes is proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches are studied to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
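As one classical baseline for the fully automatic case, image differencing with an automatically chosen threshold; this is a generic stand-in for illustration, not the thesis's actual unsupervised method:

```python
import numpy as np
from skimage.filters import threshold_otsu

def change_map(img_t1, img_t2):
    """Binary change map from two co-registered single-band images:
    absolute difference followed by an automatic, user-free threshold."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    return diff > threshold_otsu(diff)

# Synthetic pair: identical scenes except one simulated land-cover change.
rng = np.random.default_rng(5)
t1 = rng.normal(100, 10, (64, 64))
t2 = t1.copy()
t2[20:40, 20:40] += 50                  # simulated transition
print(change_map(t1, t2).sum())         # ~400 changed pixels
```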

Relevance: 30.00%

Abstract:

Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are for example endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit a quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly by containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows learning about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative on reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
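A schematic of the difference-in-differences logic from the first chapter, on synthetic data (the variable names, the true effect size and the bare-bones specification are assumptions for illustration, not the chapter's estimates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: "treated" marks individuals affected by the benefit
# reform, "post" marks observations after the reform took effect.
rng = np.random.default_rng(6)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
# A true treatment effect of +0.8 is planted for the illustration.
df["earnings"] = (10 + 1.5 * df.treated + 0.5 * df.post
                  + 0.8 * df.treated * df.post + rng.normal(0, 1, n))

# The coefficient on treated:post is the DiD estimate of the causal effect;
# group and time dummies absorb level differences and common trends.
print(smf.ols("earnings ~ treated * post", data=df).fit().params)
```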

Relevance: 30.00%

Abstract:

Aim: We investigated the late Quaternary history of two closely related and partly sympatric species of Primula from the south-western European Alps, P. latifolia Lapeyr. and P. marginata Curtis, by combining phylogeographical and palaeodistribution modelling approaches. In particular, we were interested in whether the two approaches were congruent and identified the same glacial refugia. Location: South-western European Alps. Methods: For the phylogeographical analysis we included 353 individuals from 28 populations of P. marginata and 172 individuals from 15 populations of P. latifolia, and used amplified fragment length polymorphisms (AFLPs). For palaeodistribution modelling, species distribution models (SDMs) were based on extant species occurrences and then projected onto climate models (CCSM, MIROC) of the Last Glacial Maximum (LGM), approximately 21 ka. Results: The locations of the modelled LGM refugia were confirmed by various indices of genetic variation. The refugia of the two species were largely geographically isolated, overlapping in only 6% to 11% of the species' total LGM distribution. This overlap decreased when the position of the glacial ice sheet and the differential elevational and edaphic distributions of the two species were considered. Main conclusions: The combination of phylogeography and palaeodistribution modelling proved useful in locating putative glacial refugia of two alpine species of Primula. The phylogeographical data allowed us to identify those parts of the modelled LGM refugial area that were likely source areas for recolonization. The use of SDMs predicted LGM refugial areas substantially larger and geographically more divergent than could have been predicted by phylogeographical data alone.
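The overlap statistic is a simple raster computation; a sketch with synthetic refugia masks (the real SDM outputs are not reproduced here):

```python
import numpy as np

# Synthetic boolean grids marking modelled LGM refugia for each species.
rng = np.random.default_rng(7)
latifolia = rng.random((300, 300)) < 0.10
marginata = rng.random((300, 300)) < 0.12

both = latifolia & marginata    # cells classified as refugia for both
total = latifolia | marginata   # total modelled LGM distribution
print(f"overlap: {100 * both.sum() / total.sum():.1f}% of total LGM range")
```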

Relevance: 30.00%

Abstract:

PURPOSE: To investigate magnetization transfer (MT) effects as a new source of contrast for imaging and tracking of peripheral foot nerves. MATERIALS AND METHODS: Two sets of 3D spoiled gradient-echo images acquired with and without a saturation pulse were used to generate MT ratio (MTR) maps of 260 μm in-plane resolution for eight volunteers at 3T. Scan parameters were adjusted to minimize signal loss due to T2 dephasing, and a dedicated coil was used to improve the inherently low signal-to-noise ratio of small voxels. Resulting MTR values in foot nerves were compared with those in surrounding muscle tissue. RESULTS: Average MTR values for muscle (45.5 ± 1.4%) and nerve (21.4 ± 3.1%) were significantly different (P < 0.0001). In general, the difference in MTR values was sufficiently large to allow for intensity-based segmentation and tracking of foot nerves in individual subjects. This procedure was termed MT-based 3D visualization. CONCLUSION: The MTR serves as a new source of contrast for imaging of peripheral foot nerves and provides a means for high spatial resolution tracking of these structures. The proposed methodology is directly applicable on standard clinical MR scanners and could be applied to systemic pathologies, such as diabetes.
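A minimal sketch of the MTR computation and the intensity-based segmentation it enables (the MTR definition is the standard one; the image values and the cut-off are synthetic assumptions, chosen only to mirror the muscle ~46% vs. nerve ~21% separation reported above):

```python
import numpy as np

def mtr_map(s_sat, s_0, eps=1e-9):
    """Magnetization transfer ratio in percent, voxel-wise:
    MTR = 100 * (S0 - Ssat) / S0, from the image acquired with the
    saturation pulse (s_sat) and the one acquired without it (s_0)."""
    return 100.0 * (s_0 - s_sat) / (s_0 + eps)

# Synthetic images: ~10% of voxels behave like nerve (MTR ~21%), the rest
# like muscle (MTR ~45%). A fixed cut-off between the two then yields an
# intensity-based nerve segmentation.
rng = np.random.default_rng(8)
s0 = rng.uniform(800, 1200, (128, 128))
ssat = s0 * np.where(rng.random((128, 128)) < 0.1, 0.79, 0.55)

mtr = mtr_map(ssat, s0)
nerve_mask = mtr < 33.0   # threshold between nerve and muscle MTR values
print(nerve_mask.mean())  # fraction of voxels segmented as nerve
```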

Relevance: 30.00%

Abstract:

Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to large-scale characterization of proteomes, their interactions and dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.