984 results for scenario analysis
Abstract:
Changes in marine net primary productivity (PP) and export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models. These include representations of marine ecosystems and the carbon cycle of different structure and complexity. All four models show a decrease in global mean PP and EP between 2 and 20% by 2100 relative to preindustrial conditions, for the SRES A2 emission scenario. Two different regimes for productivity changes are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone related to enhanced stratification, reduced mixed layer depth, and slowed circulation causes a decrease in macro-nutrient concentrations and in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one model projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show an improved skill in representing observation-based estimates compared to a simple multi-model average. Model results are compared to recent productivity projections with three different algorithms, usually applied to infer net primary production from satellite observations.
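The abstract does not give the skill metric itself; as a rough illustration, a skill-weighted multi-model mean of the kind described might be built as below, assuming an inverse-RMSE weighting against an observation-based field (the weighting choice and array names are hypothetical, not the paper's method).

```python
import numpy as np

def skill_weighted_mean(model_fields, obs_field):
    """Combine model PP fields into a multi-model mean weighted by
    skill, here taken as inverse RMSE against an observation-based field.

    model_fields : list of 2-D arrays (lat x lon), one per model
    obs_field    : 2-D array of observation-based estimates
    """
    weights = []
    for field in model_fields:
        rmse = np.sqrt(np.nanmean((field - obs_field) ** 2))
        weights.append(1.0 / rmse)                 # higher skill -> larger weight
    weights = np.array(weights) / np.sum(weights)  # normalize to sum to 1
    return np.tensordot(weights, np.stack(model_fields), axes=1)
```

A simple multi-model average is the special case of equal weights; the weighted version downweights models that depart most from the observational estimate.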
Abstract:
In evaluating the accuracy of diagnosis tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate an improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for the heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease, the sensitivities and/or the specificities of diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects on the estimation of diagnostic accuracy measurements in this scenario.
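A minimal numpy sketch of the hierarchical structure the abstract describes: study-specific test sensitivities drawn from a common logit-normal distribution, producing heterogeneous binomial data across studies. All parameter values are illustrative, not estimates from the meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters: pooled logit-sensitivity of a test (e.g. MSI)
# and between-study heterogeneity (illustrative values only).
mu_logit_sens, tau = 1.5, 0.4
n_studies, n_carriers = 10, 50

# Study-specific random effects on the logit scale
logit_sens = rng.normal(mu_logit_sens, tau, n_studies)
sens = 1 / (1 + np.exp(-logit_sens))

# Observed true-positive counts among mutation carriers in each study
tp = rng.binomial(n_carriers, sens)

# Naive pooled estimate ignoring heterogeneity, for comparison with a
# model that (like the paper's) carries study-specific random effects
print("pooled sensitivity (naive):", tp.sum() / (n_studies * n_carriers))
```

Fitting the random-effects model itself (by nonlinear mixed modeling or Bayesian hierarchical inference, as in the paper) replaces the naive pooling in the last step.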
Abstract:
This thesis comprises three life-cycle analysis (LCA) studies of manufacturing to determine cumulative energy demand (CED) and greenhouse gas emissions (GHG). The methods proposed could reduce the environmental impact by reducing the CED in three manufacturing processes. First, industrial symbiosis is proposed and an LCA is performed on both conventional 1 GW-scale hydrogenated amorphous silicon (a-Si:H)-based single junction and a-Si:H/microcrystalline-Si:H tandem cell solar PV manufacturing plants, and on such plants coupled to silane recycling plants. Using a recycling process that reduces silane loss from 85 to only 17 percent results in CED savings of 81,700 GJ and 290,000 GJ per year for single and tandem junction plants, respectively. This recycling process reduces the cost of raw silane by 68 percent, or approximately $22.6 and $79 million per year for a single and tandem 1 GW PV production facility, respectively. The results show the environmental benefits of silane recycling centered on a-Si:H-based PV manufacturing plants. Second, an open-source self-replicating rapid prototyper, or 3-D printer, the RepRap, has the potential to reduce the environmental impact of manufacturing polymer-based products using a distributed manufacturing paradigm, an impact further minimized by the use of PV and by improvements in PV manufacturing. Using 3-D printers for manufacturing provides the ability to ultra-customize products and to change fill composition, which increases material efficiency. An LCA was performed on three polymer-based products to determine the CED and GHG of conventional large-scale production, and the results are compared to experimental measurements on a RepRap producing identical products with ABS and PLA. The results of this LCA study indicate that the CED of manufacturing polymer products could be reduced using distributed manufacturing with existing 3-D printers at under 89% fill, and reduced even further with a solar photovoltaic system. The results indicate that the ability of RepRaps to vary fill has the potential to diminish the environmental impact of many products. Third, one additional way to improve the environmental performance of this distributed manufacturing system is to create the polymer filament feedstock for 3-D printers from post-consumer plastic bottles. An LCA was performed on the recycling of high density polyethylene (HDPE) using the RecycleBot. The results of the LCA showed that distributed recycling has a lower CED than the best-case scenario used for centralized recycling. If this process were applied to the HDPE currently recycled in the U.S., more than 100 million MJ of energy could be conserved per annum, along with significant reductions in GHG. This presents a novel path to a future of distributed manufacturing suited to both the developed and developing world, with reduced environmental impact. From improving manufacturing in the photovoltaic industry with the use of recycling, to recycling and manufacturing plastic products within our own homes, each step reduces the impact on the environment. The three coupled projects presented here show a clear potential to reduce the environmental impact of manufacturing and other processes by implementing complementary systems, which have environmental benefits of their own, in order to achieve a compounding effect of reduced CED and GHG.
Abstract:
Background: WHO's 2013 revisions to its Consolidated Guidelines on antiretroviral drugs recommend routine viral load monitoring, rather than clinical or immunological monitoring, as the preferred monitoring approach on the basis of clinical evidence. However, HIV programmes in resource-limited settings require guidance on the most cost-effective use of resources in view of other competing priorities such as expansion of antiretroviral therapy coverage. We assessed the cost-effectiveness of alternative patient monitoring strategies. Methods: We evaluated a range of monitoring strategies, including clinical, CD4 cell count, and viral load monitoring, alone and together, at different frequencies and with different criteria for switching to second-line therapies. We used three independently constructed and validated models simultaneously. We estimated costs on the basis of resource use projected in the models and associated unit costs; we quantified impact as disability-adjusted life years (DALYs) averted. We compared alternatives using incremental cost-effectiveness analysis. Findings: All models show that clinical monitoring delivers significant benefit compared with a hypothetical baseline scenario with no monitoring or switching. Regular CD4 cell count monitoring confers a benefit over clinical monitoring alone, at an incremental cost that makes it affordable in more settings than viral load monitoring, which is currently more expensive. Viral load monitoring without CD4 cell count every 6-12 months provides the greatest reductions in morbidity and mortality, but incurs a high cost per DALY averted, resulting in lost opportunities to generate health gains if implemented instead of increasing antiretroviral therapy coverage or expanding antiretroviral therapy eligibility. Interpretation: The priority for HIV programmes should be to expand antiretroviral therapy coverage, firstly at CD4 cell count lower than 350 cells per μL, and then at a CD4 cell count lower than 500 cells per μL, using lower-cost clinical or CD4 monitoring. At current costs, viral load monitoring should be considered only after high antiretroviral therapy coverage has been achieved. Point-of-care technologies and other factors reducing costs might make viral load monitoring more affordable in future. Funding: Bill & Melinda Gates Foundation, WHO.
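Incremental cost-effectiveness analysis of the kind described compares successively more effective strategies by the ratio of incremental cost to incremental DALYs averted. A minimal sketch with entirely hypothetical placeholder numbers (not values from the study):

```python
# Strategies ordered by effectiveness; per-patient costs and DALYs
# averted are hypothetical placeholders for illustration only.
strategies = [
    ("no monitoring",          0.0,  0.0),
    ("clinical monitoring",  100.0,  8.0),
    ("CD4 monitoring",       180.0, 10.0),
    ("viral load monitoring",400.0, 11.0),
]

for (name_a, cost_a, dalys_a), (name_b, cost_b, dalys_b) in zip(
        strategies, strategies[1:]):
    # ICER: extra cost per extra DALY averted vs the next-best strategy
    icer = (cost_b - cost_a) / (dalys_b - dalys_a)
    print(f"{name_b} vs {name_a}: ${icer:.0f} per DALY averted")
```

The pattern in the findings corresponds to the ICER growing steeply for the last step: each DALY averted by adding viral load monitoring costs far more than one gained by expanding coverage.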
Abstract:
Competing water demands for household consumption as well as the production of food, energy, and other uses pose challenges for water supply and sustainable development in many parts of the world. Designing creative strategies and learning processes for sustainable water governance is thus of prime importance. While this need is uncontested, suitable approaches still have to be found. In this article we present and evaluate a conceptual approach to scenario building aimed at transdisciplinary learning for sustainable water governance. The approach combines normative, explorative, and participatory scenario elements. This combination allows for adequate consideration of stakeholders’ and scientists’ systems, target, and transformation knowledge. Application of the approach in the MontanAqua project in the Swiss Alps confirmed its high potential for co-producing new knowledge and establishing a meaningful and deliberative dialogue between all actors involved. The iterative and combined approach ensured that stakeholders’ knowledge was adequately captured, fed into scientific analysis, and brought back to stakeholders in several cycles, thereby facilitating learning and co-production of new knowledge relevant for both stakeholders and scientists. However, the approach also revealed a number of constraints, including the enormous flexibility required of stakeholders and scientists in order for them to truly engage in the co-production of new knowledge. Overall, the study showed that shifts from strategic to communicative action are possible in an environment of mutual trust. This ultimately depends on creating conditions of interaction that place scientists’ and stakeholders’ knowledge on an equal footing.
Abstract:
BACKGROUND The reported survival of implants depends on the definition used for the endpoint, usually revision. When screening through registry reports from different countries, it appears that revision is defined quite differently. QUESTIONS/PURPOSES The purposes of this study were to compare the definitions of revision among registry reports and to apply common clinical scenarios to these definitions. METHODS We downloaded or requested reports of all available national joint registries. Of the 23 registries we identified, 13 had published reports that were available in English and were beyond the pilot phase. We searched these registries' reports for the definitions of the endpoint, mostly revision. We then applied the following scenarios to the definition of revision and analyzed whether those scenarios were regarded as a revision: (A) wound revision without any addition or removal of implant components (such as hematoma evacuation); (B) exchange of head and/or liner (as for infection); (C) isolated secondary patella resurfacing; and (D) secondary patella resurfacing with a routine liner exchange. RESULTS All registries looked separately at the characteristic of primary implantation without a revision, and 11 of 13 registries reported on the characteristics of revisions. Regarding the definition of revision, there were considerable differences across the reports. In 11 of 13 reports, the primary outcome was revision of the implant. In one registry the primary endpoint was "reintervention/revision", while another registry reported separately on "failure" and "reoperations". In three registries, the definition of the outcome was not provided; however, in one report a results list gave an indication of the definition of the outcome. Wound revision without any addition or removal of implant components (scenario A) was considered a revision in three of nine reports that provided a clear definition on this question, whereas two others did not provide enough information to allow this determination. Exchange of the head and/or liner (as for infection; scenario B) was considered a revision in 11 of 11; isolated secondary patella resurfacing (scenario C) in six of eight; and secondary patella resurfacing with routine liner exchange (scenario D) was considered a revision in nine of nine reports. CONCLUSIONS Revision, which is the most common main endpoint used by arthroplasty registries, is not universally defined. This implies that some reoperations that are considered a revision in one registry are not considered a revision in another registry. Therefore, comparisons of implant performance using data from different registries have to be performed with caution. We suggest that registries work to harmonize their definitions of revision to help facilitate comparisons of results across the world's arthroplasty registries.
Abstract:
Pairwise meta-analysis is an established statistical tool for synthesizing evidence from multiple trials, but it is informative only about the relative efficacy of two specific interventions. The usefulness of pairwise meta-analysis is thus limited in real-life medical practice, where many competing interventions may be available for a certain condition and studies informing some of the pairwise comparisons may be lacking. This commonly encountered scenario has led to the development of network meta-analysis (NMA). In the last decade, several applications, methodological developments, and empirical studies in NMA have been published, and the area is thriving as its relevance to public health is increasingly recognized. This article presents a review of the relevant literature on NMA methodology aiming to pinpoint the developments that have appeared in the field.
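When a direct B-versus-C trial is missing, NMA borrows strength from the indirect route through a common comparator A. A minimal sketch of the classic adjusted indirect comparison (the Bucher method); effect sizes and standard errors are hypothetical:

```python
import math

# Hypothetical pairwise meta-analysis results on the log odds-ratio scale
d_AB, se_AB = -0.30, 0.10   # treatment B vs A
d_AC, se_AC = -0.50, 0.12   # treatment C vs A

# Indirect estimate of C vs B via the common comparator A;
# variances add because the two direct estimates are independent
d_BC = d_AC - d_AB
se_BC = math.sqrt(se_AB**2 + se_AC**2)

print(f"indirect log-OR, C vs B: {d_BC:.2f} (SE {se_BC:.2f})")
```

Full NMA generalizes this idea to whole networks of comparisons, combining direct and indirect evidence under a consistency assumption.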
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and inherent range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of the uncertainties during treatment planning of proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. In order to remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we have shown that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV) or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. In order to remedy this problem, we proposed the use of statistical parameters to quantify uncertainties on both the dose-volume histogram and the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
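The robust-analysis idea in the abstract amounts to recomputing a dosimetric parameter over sampled setup/range scenarios and reporting its expectation and standard deviation. A minimal sketch, assuming Gaussian perturbations of a target dose; the D95 metric, the scenario model, and all numbers are illustrative stand-ins for a real dose engine:

```python
import numpy as np

def d95(dose_voxels):
    """Dose received by at least 95% of the target voxels (Gy)."""
    return np.percentile(dose_voxels, 5)

rng = np.random.default_rng(1)

# Hypothetical CTV dose (Gy) recomputed under 100 sampled setup/range
# uncertainty scenarios; in practice each row comes from a dose engine.
nominal = np.full(1000, 60.0)
scenarios = nominal + rng.normal(0.0, 1.5, size=(100, 1000))

d95_values = np.array([d95(s) for s in scenarios])
print(f"E[D95] = {d95_values.mean():.1f} Gy, "
      f"SD = {d95_values.std():.1f} Gy")
```

Reporting E[D95] with its standard deviation, rather than the PTV surrogate, is the kind of statistical plan evaluation the dissertation proposes.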
Abstract:
The Greenland ice sheet is accepted as a key factor controlling the Quaternary glacial scenario. However, the origin and mechanisms of major Arctic glaciation, starting at 3.15 Ma and culminating at 2.74 Ma, are still controversial. For this phase of intense cooling, Ravelo et al. proposed a complex gradual forcing mechanism. In contrast, our new submillennial-scale paleoceanographic records from the Pliocene North Atlantic suggest a far more precise timing and forcing for the initiation of northern hemisphere glaciation (NHG), since it was linked to a 2-3 °C surface water warming during warm stages from 2.95 to 2.82 Ma. These records support previous models claiming that the final closure of the Panama Isthmus (3.0 to ~2.5 Ma) induced an increased poleward salt and heat transport. The associated strengthening of North Atlantic Thermohaline Circulation and, in turn, an intensified moisture supply to northern high latitudes resulted in the build-up of NHG, finally culminating in the great, irreversible climate crash at marine isotope stage G6 (2.74 Ma). In summary, there was a two-step threshold mechanism that marked the onset of NHG, with glacial-to-interglacial cycles quasi-persistent until today.
Abstract:
A multivariable approach utilising bulk sediment, planktonic Foraminifera and siliceous phytoplankton has been used to reconstruct rapid variations in palaeoproductivity in the Peru-Chile Current System off northern Chile for the past 19,000 cal. yr. During the early deglaciation (19,000-16,000 cal. yr BP), our data point to the strongest upwelling intensity and highest productivity of the past 19,000 cal. yr. The late deglaciation (16,000-13,000 cal. yr BP) is characterised by a major change in the oceanographic setting, warmer water masses and weaker upwelling at the study site. Lowest productivity and weakest upwelling intensity are observed from the early to the middle Holocene (13,000-4,000 cal. yr BP), and the beginning of the late Holocene (<4,000 cal. yr BP) is marked by increasing productivity, mainly driven by silicate-producing organisms. Changes in the productivity and upwelling intensity in our record may have resulted from a large-scale compression and/or displacement of the South Pacific subtropical gyre during more productive periods, in line with a northward extension of the Antarctic Circumpolar Current and increased advection of Antarctic water masses with the Peru-Chile Current. The corresponding increase in hemispheric thermal gradient and wind stress induced stronger upwelling. During the periods of lower productivity, this scenario probably reversed.
Abstract:
Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial resolution DEMs for mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. Each of 1,000 simulated error fields was added to the original DEM, and the result was reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated by counting the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-aversive decision making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey uncertainties inherent to spatial data and analysis.
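A minimal sketch of the counting step: each simulated error field is added to the DEM, the bathtub rule is applied, and the inundation probability at a cell is the fraction of simulations that flood it. The toy grid, error model, and flood level are illustrative; a hydrologically correct implementation would also enforce connectivity to the sea, and the real study uses sequential Gaussian simulation rather than independent noise.

```python
import numpy as np

rng = np.random.default_rng(2)

dem = rng.uniform(0.0, 3.0, size=(50, 50))   # toy elevation grid (m)
flood_level = 1.0 + 0.8                      # 1 m SLR + storm surge (m), illustrative
n_sim = 1000

inundated_count = np.zeros_like(dem)
for _ in range(n_sim):
    # Stand-in for one sequential Gaussian simulation of elevation error
    error = rng.normal(0.0, 0.15, size=dem.shape)
    inundated_count += (dem + error) <= flood_level   # bathtub rule

prob_inundation = inundated_count / n_sim            # per-cell probability
print("cells with >=1% inundation probability:",
      int((prob_inundation >= 0.01).sum()))
```

The deterministic bathtub map corresponds to thresholding the original DEM once; the probabilistic map lets planners choose how much exceedance probability they are willing to accept.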
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means to enhance driving safety. In particular, due to their high performance-cost ratio, mono-camera systems are emerging as the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, which is part of the results of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which classifies pixels as belonging to different classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
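A minimal sketch of a recursive Bayesian per-pixel update of the general kind the abstract names: the class posterior from the previous frame serves as the prior for the current one and is multiplied by the new frame's class likelihoods. The class set and likelihood values are placeholders for whatever appearance model the actual system uses.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One recursive step: posterior is likelihood * prior, renormalized
    per pixel.

    prior, likelihood : arrays of shape (n_classes, H, W)
    """
    posterior = likelihood * prior
    return posterior / posterior.sum(axis=0, keepdims=True)

# Toy example: 3 classes (e.g. road, lane marking, obstacle) on a 2x2 patch
prior = np.full((3, 2, 2), 1.0 / 3.0)                       # uninformative start
likelihood = np.random.default_rng(3).uniform(0.1, 1.0, (3, 2, 2))
posterior = bayes_update(prior, likelihood)                 # becomes next frame's prior
```

Carrying the posterior forward frame to frame is what lets such a classifier absorb scene variability instead of deciding from a single image.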
Abstract:
This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range, operating in the millimeter wavelength frequency bands. The thesis was developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities operating in different configurations: a Gregorian compact antenna test range, spherical near field, planar near field, and a semianechoic arch system. The research work performed for this thesis contributes to the knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; these are succinctly described in the subsequent paragraphs. The first step dealt with a review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter wavelength technologies. Where such measurement facilities are concerned, the joint study of both fields of knowledge converged on a series of technological challenges which become serious bottlenecks at different stages: analysis, design, and assessment. Second, after the overview study, focus was set on electromagnetic analysis algorithms. These formulations make it possible to model certain electromagnetic features of interest, such as the field distribution phase or the stray signal behavior of particular structures when they interact with electromagnetic wave sources. Properly operated, a CATR facility features electromagnetic wave collimation optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks introduce a large number of mathematical unknowns which grow with frequency, following different polynomial-order laws depending on the algorithms used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, this flexibility being the core of the algorithms' ability to support the subsequent design tasks. This thesis' contribution to this field of knowledge consists of a formulation that is powerful both in dealing with various analysis geometries and in computational terms. Two algorithms were developed; while based on the same hybridization principle, they achieve physics of different order at different computational cost. Their CATR design capabilities were inter-compared, reaching both qualitative and quantitative conclusions on their scope. Third, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears as well in the in-chamber field probing stage. The natural decrease of the dynamic range available from semiconductor millimeter wave sources additionally requires longer integration times at each probing point. These peculiarities greatly increase the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of millimeter wavelength frequencies, whereas the value of range assessment lies, on the contrary, towards the highest segment. This thesis contributes to this technological scenario by developing quiet zone probing techniques which achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. At millimeter wavelengths, this feature relegates many industrial practices common at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena which completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere existing in a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed which is able to handle high-level phenomena, such as feed-probe phase drift, as a function of low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed, and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to go further into the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is laid out through the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena exist today that seriously compromise antenna measurement practices at millimeter wavelengths.
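The compensation model described maps easily sampled quantities (relative humidity, temperature) to the feed-probe phase drift. A minimal sketch of fitting such a low-order model by least squares; the linear form and all data below are assumptions for illustration, not the thesis' actual model.

```python
import numpy as np

# Hypothetical logged chamber conditions and measured feed-probe phase drift
rh    = np.array([40.0, 42.0, 45.0, 47.0, 50.0])   # relative humidity (%)
temp  = np.array([21.0, 21.5, 22.0, 22.3, 22.8])   # temperature (deg C)
phase = np.array([0.0, 1.1, 2.7, 3.8, 5.4])        # phase drift (deg)

# Fit phase = a*RH + b*T + c by linear least squares
A = np.column_stack([rh, temp, np.ones_like(rh)])
coef, *_ = np.linalg.lstsq(A, phase, rcond=None)

# Residual drift after environmental compensation
corrected = phase - A @ coef
print("fit coefficients (a, b, c):", np.round(coef, 3))
```

Subtracting the predicted drift from the measured phase is what extends usable chamber conditioning towards higher frequencies.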
Abstract:
In the present uncertain global context of reaching equal social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant contributor to this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation. Coal will remain a key player. Hence, under the business-as-usual CO2 emissions scenario considered, CO2 concentrations are forecast to reach three times present values, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first approach to taking global responsibility for CO2 emissions monitoring and cap targets by 2012 with reference to 1990. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel, while a significant contribution from the electricity generation sector to dwindling CO2 concentration levels, might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into practical use. After the first research projects and initial scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development, and the availability and investment cost of the different technologies, with few operating cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
Abstract:
This paper shows the role that some foresight tools, such as scenario design, may play in exploring the future impacts of global challenges on our contemporary society. Additionally, it provides some clues about how to reinforce scenario design so that it delivers more in-depth analysis without losing its qualitative nature and communication advantages. Since its inception in the early seventies, scenario design has become one of the most popular foresight tools used in several fields of knowledge. Nevertheless, its wide acceptance has not been seconded by the urban planning academic and professional realm. In some instances, scenario design is perceived as just a storytelling technique that generates oversimplified future visions without the support of rigorous and sound analysis. As a matter of fact, the potential of scenario design for providing more in-depth analysis and for connecting with quantitative methods has generally been missed, giving arguments away to its critics. Based on these premises, this document tries to prove the capability of scenario design to anticipate the impacts of complex global challenges and to do so in a more analytical way. These assumptions are tested through a scenario design exercise which explores the future evolution of the sustainable development paradigm (SD) and its implications for the Spanish urban development model. In order to reinforce the perception of scenario design as a useful, value-adding instrument for urban planners, three sets of implications (functional, parametric, and spatial) are displayed to provide substantial and in-depth information for policy makers. This study shows some major findings. First, it is feasible to set up a systematic approach that provides anticipatory intelligence about future disruptive events that may affect the natural environment and socioeconomic fabric of a given territory. Second, there are opportunities for innovating in Spanish urban planning processes and city governance models. Third, as a foresight tool, scenario design can be substantially reinforced if proper efforts are made to display the functional, parametric, and spatial implications generated by the scenarios. Fourth, the study confirms that foresight offers interesting opportunities for urban planners, such as anticipating changes, formulating visions, fostering participation, and building networks.