Abstract:
BACKGROUND: Over the last 4 years ADAMTS-13 measurement underwent dramatic progress with newer and simpler methods. AIMS: Blind evaluation of newer methods for their performance characteristics. DESIGN: The literature was searched for new methods and the authors invited to join the evaluation. Participants were provided with a set of 60 coded frozen plasmas that were prepared centrally by dilutions of one ADAMTS-13-deficient plasma (arbitrarily set at 0%) into one normal-pooled plasma (set at 100%). There were six different test plasmas ranging from 100% to 0%. Each plasma was tested 'blind' 10 times by each method and results expressed as percentages vs. the local and the common standard provided by the organizer. RESULTS: There were eight functional and three antigen assays. Linearity of observed-vs.-expected ADAMTS-13 levels assessed as r2 ranged from 0.931 to 0.998. Between-run reproducibility expressed as the (mean) CV for repeated measurements was below 10% for three methods, 10-15% for five methods and up to 20% for the remaining three. F-values (analysis of variance) calculated to assess the capacity to distinguish between ADAMTS-13 levels (the higher the F-value, the better the capacity) ranged from 137 to 3965. Between-method variability (CV) amounted to 24.8% when calculated vs. the local standard and to 20.5% when calculated vs. the common standard. Comparative analysis showed that functional assays employing modified von Willebrand factor peptides as substrate for ADAMTS-13 offer the best performance characteristics. CONCLUSIONS: New assays for ADAMTS-13 have the potential to make the investigation/management of patients with thrombotic microangiopathies much easier than in the past.
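As a hedged illustration of the two summary statistics described above, the sketch below computes a between-run CV and a one-way ANOVA F-value for one simulated method; all numbers are invented placeholders, not the study's data.

```python
# Illustrative sketch of the evaluation statistics: between-run CV for
# repeated measurements and a one-way ANOVA F-value quantifying how well
# a method separates the six plasma levels. Data are simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
expected = [100, 80, 60, 40, 20, 0]          # % ADAMTS-13 in the six test plasmas
# 10 'blind' replicates per plasma for one method (simulated, sd = 5%)
replicates = {lvl: rng.normal(lvl, 5, size=10) for lvl in expected}

# Between-run reproducibility: mean CV over plasma levels (undefined at 0%)
cvs = [replicates[lvl].std(ddof=1) / replicates[lvl].mean() * 100
       for lvl in expected if lvl > 0]
print(f"mean between-run CV: {np.mean(cvs):.1f}%")

# Capacity to distinguish levels: one-way ANOVA across the six plasmas
F, p = f_oneway(*replicates.values())
print(f"ANOVA F-value: {F:.0f} (higher = better discrimination)")
```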
Abstract:
BACKGROUND: Though guidelines emphasize low-density lipoprotein cholesterol (LDL-C) lowering as an essential strategy for cardiovascular risk reduction, achieving target levels may be difficult. PATIENTS AND METHODS: The authors conducted a prospective, controlled, open-label trial examining the effectiveness and safety of high-dose fluvastatin or a standard dosage of simvastatin plus ezetimibe, both with an intensive guideline-oriented cardiac rehabilitation program, in achieving the new ATP III LDL-C targets in patients with proven coronary artery disease. A total of 305 consecutive patients were enrolled in the study. Patients were divided into two groups: the simvastatin (40 mg/d) plus ezetimibe (10 mg/d) group and the fluvastatin-only group (80 mg/d). Patients in both study groups received the treatment for 21 days in addition to nonpharmacological measures, including advanced physical, dietary, psychosocial, and educational activities. RESULTS: After 21 days of treatment, a significant reduction in LDL-C was found in both study groups as compared to the initial values; however, the reduction in LDL-C was significantly stronger in the simvastatin plus ezetimibe group: simvastatin plus ezetimibe treatment decreased LDL-C to a mean level of 57.7 ± 1.7 mg/dl, while fluvastatin achieved a reduction to 84.1 ± 2.4 mg/dl (p < 0.001). In the simvastatin plus ezetimibe group, 95% of the patients reached the target level of LDL-C < 100 mg/dl. This percentage was significantly higher than in patients treated with fluvastatin alone (75%; p < 0.001). The greater effectiveness of simvastatin plus ezetimibe was even more impressive when considering the optional goal of LDL-C < 70 mg/dl (75% vs. 32%, respectively; p < 0.001). There was no difference in the occurrence of adverse events between the groups. CONCLUSION: Simvastatin 40 mg/d plus ezetimibe 10 mg/d, on the background of a guideline-oriented standardized intensive cardiac rehabilitation program, can achieve the challenging LDL-C goal of < 100 mg/dl in 95% of patients at high cardiovascular risk.
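For illustration, the target-attainment comparison above can be checked with a two-proportion z-test; the per-arm sample sizes below are assumptions, since the abstract reports only the 305-patient total.

```python
# A minimal sketch of the target-attainment comparison (95% vs. 75%
# reaching LDL-C < 100 mg/dl) using a two-proportion z-test.
# Arm sizes are assumed; the abstract does not give the exact split.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_combo, n_fluva = 153, 152            # assumed arm sizes
hit_combo = round(0.95 * n_combo)      # at target on simvastatin + ezetimibe
hit_fluva = round(0.75 * n_fluva)      # at target on fluvastatin alone

z, p = proportions_ztest(count=np.array([hit_combo, hit_fluva]),
                         nobs=np.array([n_combo, n_fluva]))
print(f"z = {z:.2f}, p = {p:.2g}")     # consistent with the reported p < 0.001
```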
Abstract:
OBJECT: Early impairment of cerebral blood flow in patients with severe head injury correlates with poor brain tissue O2 delivery and may be an important cause of ischemic brain damage. The purpose of this study was to measure cerebral tissue PO2, lactate, and glucose in patients after severe head injury to determine the effect of increased tissue O2 achieved by increasing the fraction of inspired oxygen (FiO2). METHODS: In addition to standard monitoring of intracranial pressure and cerebral perfusion pressure, the authors continuously measured brain tissue PO2, PCO2, pH, and temperature in 22 patients with severe head injury. Microdialysis was performed to analyze lactate and glucose levels. In one cohort of 12 patients, the PaO2 was increased to 441 ± 88 mm Hg over a period of 6 hours by raising the FiO2 from 35 ± 5% to 100% in two stages. The results were analyzed and compared with the findings in a control cohort of 12 patients who received standard respiratory therapy (mean PaO2 136.4 ± 22.1 mm Hg). The mean brain PO2 levels increased in the O2-treated patients up to 359 ± 39% of the baseline level during the 6-hour FiO2 enhancement period, whereas the mean dialysate lactate levels decreased by 40% (p < 0.05). During this O2 enhancement period, glucose levels in brain tissue demonstrated a heterogeneous course. None of the monitored parameters in the control cohort showed significant variations during the entire observation period. CONCLUSIONS: Markedly elevated lactate levels in brain tissue are common after severe head injury. Increasing PaO2 to higher levels than necessary to saturate hemoglobin, as performed in the O2-treated cohort, appears to improve the O2 supply in brain tissue. During the early period after severe head injury, increased lactate levels in brain tissue were reduced by increasing FiO2. This may imply a shift to aerobic metabolism.
Abstract:
Intense liver regeneration and almost 100% survival follow partial hepatectomy of up to 70% of liver mass in rodents. More extensive resections of 70 to 80% have an increased mortality, and partial hepatectomies of >80% consistently lead to acute hepatic failure and death in mice. The aim of the study was to determine the effect of systemically administered granulocyte colony stimulating factor (G-CSF) on animal survival and liver regeneration in a small-for-size liver remnant mouse model after 83% partial hepatectomy (liver weight <0.8% of mouse body weight). Methods: Male BALB/c mice (n=80, 20-24 g) were preconditioned daily for five days with 5 μg G-CSF subcutaneously or sham injected (aqua ad inj.). Subsequently, 83% hepatic resection was performed and daily sham or G-CSF injections continued. Survival was determined in both groups (G-CSF: n=35; sham: n=33). In a second series, BrdU was injected (50 mg/kg body weight) two hours prior to tissue harvest and animals were euthanized 36 and 48 hours after 83% liver resection (n=3 per group). To measure hepatic regeneration, the BrdU labeling index and Ki67 expression were determined by immunohistochemistry by two independent observers. Harvested liver tissue was dried to constant weight at 65 °C for 48 hours. Results: Survival was 0% in the sham group by day 3 postoperatively and significantly better (26.2% on day 7 and thereafter) in the G-CSF group (log-rank test: p<0.0001). Dry liver weight was increased in the G-CSF group (t-test: p<0.05) 36 hours after 83% partial hepatectomy. Ki67 expression was elevated in the G-CSF group at 36 hours (2.8 ± 2.6% [standard deviation] vs. 0.03 ± 0.2%; rank-sum test: p<0.0001) and at 48 hours (45.1 ± 34.6% vs. 0.7 ± 1.0%; rank-sum test: p<0.0001) after 83% liver resection. BrdU labeling at 48 hours was 0.1 ± 0.3% in the sham and 35.2 ± 34.2% in the G-CSF group (rank-sum test: p<0.0001). Conclusions: The surgical 83% resection mouse model is suitable for testing hepatic supportive regimens in the setting of small-for-size liver remnants. Administration of G-CSF supports hepatic regeneration after microsurgical 83% partial hepatectomy and leads to improved long-term survival in the mouse. G-CSF might prove to be a clinically valuable supportive substance for small-for-size liver remnants in humans after major hepatic resections due to primary or secondary liver tumors or in the setting of living related liver donation.
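A minimal sketch of the reported survival comparison follows, assuming invented death times that only mirror the reported pattern (all sham deaths by day 3, roughly 26% G-CSF survival at day 7); it is not the study's data.

```python
# Illustrative log-rank comparison of G-CSF vs. sham survival after
# 83% hepatectomy. Durations and event indicators are simulated.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
sham_days  = rng.integers(1, 4, size=33)           # all sham mice die by day 3
sham_event = np.ones(33, dtype=int)                # 1 = death observed

# ~26% of G-CSF mice survive to day 7 (censored); the rest die on days 2-6
gcsf_days  = np.where(rng.random(35) < 0.26, 7, rng.integers(2, 7, size=35))
gcsf_event = (gcsf_days < 7).astype(int)           # survivors censored at day 7

res = logrank_test(sham_days, gcsf_days,
                   event_observed_A=sham_event, event_observed_B=gcsf_event)
print(f"log-rank p = {res.p_value:.2g}")
```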
Abstract:
PURPOSE: Glioblastomas are notorious for resistance to therapy, which has been attributed to DNA-repair proficiency, a multitude of deregulated molecular pathways, and, more recently, to the particular biologic behavior of tumor stem-like cells. Here, we aimed to identify molecular profiles specific for treatment resistance to the current standard of care of concomitant chemoradiotherapy with the alkylating agent temozolomide. PATIENTS AND METHODS: Gene expression profiles of 80 glioblastomas were interrogated for associations with resistance to therapy. Patients were treated within clinical trials testing the addition of concomitant and adjuvant temozolomide to radiotherapy. RESULTS: An expression signature dominated by HOX genes, which comprises Prominin-1 (CD133), emerged as a predictor for poor survival in patients treated with concomitant chemoradiotherapy (n = 42; hazard ratio = 2.69; 95% CI, 1.38 to 5.26; P = .004). This association could be validated in an independent data set. Provocatively, the HOX cluster was reminiscent of a "self-renewal" signature (P = .008; Gene Set Enrichment Analysis) recently characterized in a mouse leukemia model. The HOX signature and EGFR expression were independent prognostic factors in multivariate analysis, adjusted for the O-6-methylguanine-DNA methyltransferase (MGMT) methylation status, a known predictive factor for benefit from temozolomide, and age. Better outcome was associated with gene clusters characterizing features of tumor-host interaction, including tumor vascularization, cell adhesion, and innate immune response. CONCLUSION: This study provides the first clinical evidence for the implication of a "glioma stem cell" or "self-renewal" phenotype in treatment resistance of glioblastoma. Biologic mechanisms identified here to be relevant for resistance will guide future targeted therapies and respective marker development for individualized treatment and patient selection.
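The multivariate analysis described above is a Cox proportional-hazards regression; the sketch below shows the general form on simulated data, with illustrative variable names (hox_signature, egfr_expr, mgmt_methylated) that are not from the study.

```python
# Hedged sketch of a Cox proportional-hazards model with a signature
# score and EGFR expression as covariates, adjusted for MGMT status
# and age. The data frame is simulated for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 42
df = pd.DataFrame({
    "hox_signature":   rng.normal(0, 1, n),    # expression-signature score
    "egfr_expr":       rng.normal(0, 1, n),
    "mgmt_methylated": rng.integers(0, 2, n),
    "age":             rng.normal(55, 10, n),
})
# Simulate survival with a true hazard ratio ~2.7 for the signature score
hazard = np.exp(np.log(2.7) * df["hox_signature"])
df["months"] = rng.exponential(18 / hazard)
df["event"] = 1                                 # no censoring, for simplicity

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])          # hazard ratios and p-values
```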
Abstract:
The U.S. Renewable Fuel Standard mandates that by 2022, 36 billion gallons of renewable fuels must be produced on a yearly basis. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from different alternative fuel sources. A viable alternative for meeting the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase separate when mixed with water, meaning it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on studying the effects of low-level alcohol fuels on marine engine emissions to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland over the course of two rounds of weeklong testing in May and September. The engines were tested using a standard test cycle and emissions were sampled using constant volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to the baseline indolene tests. Because of the nature of the field testing, only limited engine parameters were recorded; therefore, the engine parameters analyzed aside from emissions were the operating relative air-to-fuel ratio and engine speed. Emissions trends from the baseline test to each alcohol fuel for the four-stroke engines were consistent when analyzing a single round of testing. The same trends were not consistent when comparing separate rounds, because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback during full-load conditions. Emissions trends from the baseline test to each alcohol fuel for the two-stroke engine were consistent for all rounds of testing. This is because the engine operates open-loop and does not provide fueling compensation when the fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with changes for ethanol. It was determined that iso-butanol would be a viable replacement for ethanol.
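Although the abstract does not spell out the computation, mode-weighted brake-specific emissions over a steady-state marine test cycle are typically formed as in this sketch; the weights and per-mode values are placeholders, not the study's data.

```python
# Minimal sketch of mode-weighted brake-specific emissions from a
# steady-state test cycle: weighted mass rate over weighted power.
# All numbers are illustrative assumptions.
mode_weights = [0.06, 0.14, 0.15, 0.25, 0.40]   # assumed 5-mode cycle weights
power_kw     = [40.0, 28.0, 19.0, 10.0, 1.0]    # brake power per mode
co_g_per_h   = [900., 600., 400., 250., 80.]    # CO mass emission rate per mode

weighted_mass  = sum(w * m for w, m in zip(mode_weights, co_g_per_h))
weighted_power = sum(w * p for w, p in zip(mode_weights, power_kw))
bsco = weighted_mass / weighted_power            # g/kWh
print(f"brake-specific CO: {bsco:.1f} g/kWh")
```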
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important. Therefore, it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes along with waste heat recovery systems are being explored as options for improving efficiency. In order to understand how these technologies will impact engine performance and each other, this research sought to analyze the engine from both a 1st law energy balance perspective and a 2nd law exergy analysis perspective. This research also provided insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for the validation of other models. It was found that the engine load was the dominant factor for the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in the energy in the exhaust. The exhaust exergy, on the other hand, was either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation resulted in a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
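As a simplified restatement of the exergy argument above, the sketch below compares the specific flow exergy of a hot exhaust stream with that of a ~90 °C coolant stream, assuming ideal-gas behavior with constant cp and neglecting pressure terms; temperatures and properties are illustrative assumptions.

```python
# Specific flow exergy, psi = (h - h0) - T0*(s - s0), for an ideal gas
# with constant cp and p ~= p0. All values are illustrative.
import math

T0 = 298.15           # dead-state temperature, K
cp_exh = 1.10         # exhaust cp, kJ/(kg K), assumed
cp_cool = 4.18        # coolant (water) cp, kJ/(kg K)

def flow_exergy(T, cp):
    """Specific flow exergy in kJ/kg: cp*(T - T0) - T0*cp*ln(T/T0)."""
    return cp * (T - T0) - T0 * cp * math.log(T / T0)

T_exh, T_cool = 900.0, 363.0     # K: hot exhaust vs. ~90 degC coolant
print(f"exhaust exergy: {flow_exergy(T_exh, cp_exh):7.1f} kJ/kg")
print(f"coolant exergy: {flow_exergy(T_cool, cp_cool):7.1f} kJ/kg")
# The much higher exhaust value illustrates why the exhaust system is
# the preferred waste-heat-recovery target in the analysis above.
```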
Abstract:
Over the past several decades, it has become apparent that anthropogenic activities have resulted in the large-scale enhancement of the levels of many trace gases throughout the troposphere. More recently, attention has been given to the transport pathway taken by these emissions as they are dispersed throughout the atmosphere. The transport pathway determines the physical characteristics of emissions plumes and therefore plays an important role in the chemical transformations that can occur downwind of source regions. For example, the production of ozone (O3) is strongly dependent upon the transport its precursors undergo. O3 can initially be formed within air masses while still over polluted source regions. These polluted air masses can experience continued O3 production or O3 destruction downwind, depending on the air mass's chemical and transport characteristics. At present, however, there are a number of uncertainties in the relationships between transport and O3 production in the North Atlantic lower free troposphere. The first phase of the study presented here used measurements made at the Pico Mountain observatory and model simulations to determine transport pathways for US emissions to the observatory. The Pico Mountain observatory was established in the summer of 2001 in order to address the need to understand the relationships between transport and O3 production. Measurements from the observatory were analyzed in conjunction with simulations from the Lagrangian particle dispersion model (LPDM) FLEXPART in order to determine the transport pathway for events observed at the Pico Mountain observatory during July 2003. A total of 16 events were observed, 4 of which were analyzed in detail. The transport time for these 16 events varied from 4.5 to 7 days, while the transport altitudes over the ocean ranged from 2-8 km, but were typically less than 3 km. In three of the case studies, eastward advection and transport in a weak warm conveyor belt (WCB) airflow was responsible for the export of North American emissions into the FT, while transport in the FT was governed by easterly winds driven by the Azores/Bermuda High (ABH) and transient northerly lows. In the fourth case study, North American emissions were lofted to 6-8 km in a WCB before being entrained in the same cyclone's dry airstream and transported down to the observatory. The results of this study show that the lower marine FT may provide an important transport environment where O3 production may continue, in contrast to transport in the marine boundary layer, where O3 destruction is believed to dominate. The second phase of the study presented here focused on improving the analysis methods that are available with LPDMs. While LPDMs are popular and useful for the analysis of atmospheric trace gas measurements, identifying the transport pathway of emissions from their source to a receptor (the Pico Mountain observatory in our case) using the standard gridded model output can be difficult or impossible, particularly during complex meteorological scenarios. The transport study in phase 1 was limited to only 1 month out of more than 3 years of available data and included only 4 case studies out of the 16 events specifically because of this confounding factor.
The second phase of this study addressed this difficulty by presenting a method to clearly and easily identify the pathway taken by only those emissions that arrive at a receptor at a particular time, by combining the standard gridded output from forward (i.e., concentrations) and backward (i.e., residence time) LPDM simulations, greatly simplifying similar analyses. The ability of the method to successfully determine the source-to-receptor pathway, restoring this Lagrangian information that is lost when the data are gridded, is proven by comparing the pathway determined from this method with the particle trajectories from both the forward and backward models. A sample analysis is also presented, demonstrating that this method is more accurate and easier to use than existing methods using standard LPDM products. Finally, we discuss potential future work that would be possible by combining the backward LPDM simulation with gridded data from other sources (e.g., chemical transport models) to obtain a Lagrangian sampling of the air that will eventually arrive at a receptor.
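A schematic sketch of the combination step described above follows, assuming gridded (time, level, lat, lon) output from both runs; the arrays and the normalization are illustrative, not an actual LPDM file format.

```python
# Schematic sketch: combine the forward-run concentration field with the
# backward-run residence-time field, cell by cell, so that only air which
# both carries the emissions and later reaches the receptor is highlighted.
import numpy as np

rng = np.random.default_rng(3)
# (time, level, lat, lon) gridded LPDM output, random placeholders here
forward_conc = rng.random((8, 10, 45, 90))    # forward: tracer concentration
backward_rt  = rng.random((8, 10, 45, 90))    # backward: residence time

pathway = forward_conc * backward_rt          # cell-wise overlap of the two runs
pathway /= pathway.sum()                      # normalize to a transport "density"

# Collapse dimensions to inspect, e.g., the vertical profile of the pathway
profile = pathway.sum(axis=(0, 2, 3))
print("fraction of pathway per model level:", np.round(profile, 3))
```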
Abstract:
Tropical wetlands are estimated to represent about 50% of the natural wetland methane (CH4) emissions and explain a large fraction of the observed CH4 variability on timescales ranging from glacial–interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This publication documents a first step in the development of a process-based model of CH4 emissions from tropical floodplains for global applications. For this purpose, the LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was slightly modified to represent floodplain hydrology, vegetation and associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially explicit hydrology model PCR-GLOBWB. We introduced new plant functional types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote-sensing data sets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX reproduces the average magnitude of observed net CH4 flux densities for the Amazon Basin. However, the model does not reproduce the variability between sites or between years within a site. Unfortunately, site information is too limited to confirm or disprove some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. Sensitivity analyses gave insights into the main drivers of floodplain CH4 emissions and their associated uncertainties. In particular, uncertainties in floodplain extent (i.e., the difference between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimates, using PCR-GLOBWB in combination with GLC2000, lead to simulated Amazon-integrated emissions of 44.4 ± 4.8 Tg yr⁻¹. Additionally, the LPX emissions are highly sensitive to vegetation distribution. Two simulations with the same mean PFT cover, but different spatial distributions of grasslands within the basin, modulated emissions by about 20%. Correcting the LPX-simulated NPP using MODIS reduces the Amazon emissions by 11.3%. Finally, due to an intrinsic limitation of LPX in accounting for seasonality in floodplain extent, the model failed to reproduce the full dynamics of CH4 emissions, but we propose solutions to this issue. The interannual variability (IAV) of the emissions increases by 90% if the IAV in floodplain extent is accounted for, but still remains lower than in most of the WETCHIMP models. While our model includes more mechanisms specific to tropical floodplains, we were unable to reduce the uncertainty in the magnitude of wetland CH4 emissions of the Amazon Basin. Our results helped identify and prioritize directions towards more accurate estimates of tropical CH4 emissions, and they stress the need for more research to constrain floodplain CH4 emissions and their temporal variability, even before including other fundamental mechanisms such as floating macrophytes or lateral water fluxes.
Abstract:
The large, rapid increase in atmospheric N2O concentrations that occurred concurrent with the abrupt warming at the end of the Last Glacial period might have been the result of a reorganization in global biogeochemical cycles. To explore the sensitivity of nitrogen cycling in terrestrial ecosystems to abrupt warming, we combined a scenario of climate and vegetation composition change based on multiproxy data for the Oldest Dryas–Bølling abrupt warming event at Gerzensee, Switzerland, with a biogeochemical model that simulates terrestrial N uptake and release, including N2O emissions. As for many central European sites, the pollen record at Gerzensee is remarkable for the abundant presence of the symbiotic nitrogen fixer Hippophaë rhamnoides (L.) during the abrupt warming that also marks the beginning of primary succession on immature glacial soils. Here we show that without additional nitrogen fixation, climate change results in a significant increase in N2O emissions by a factor of approximately 3.4 (from 6.4 ± 1.9 to 21.6 ± 5.9 mg N2O-N m⁻² yr⁻¹). Each additional 1000 mg m⁻² yr⁻¹ of nitrogen added to the ecosystem through N-fixation results in additional N2O emissions of 1.6 mg N2O-N m⁻² yr⁻¹ for the time with maximum H. rhamnoides coverage. Our results suggest that local reactions of emissions to abrupt climate change could have been considerably faster than the overall atmospheric concentration changes observed in polar ice. Nitrogen enrichment of soils due to the presence of symbiotic N-fixers during early primary succession not only facilitates the establishment of vegetation on soils in their initial stage of development, but can also have considerable influence on biogeochemical cycles and the release of reactive nitrogen trace gases to the atmosphere.
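The reported numbers imply a simple linear response model, restated here as a sketch (units in mg N2O-N m⁻² yr⁻¹); the function name and form are illustrative, not the biogeochemical model itself.

```python
# Worked restatement of the abstract's numbers: warming alone raises
# emissions ~3.4-fold, and each 1000 mg N m^-2 yr^-1 of fixation adds
# ~1.6 mg N2O-N m^-2 yr^-1 at maximum H. rhamnoides coverage.
e_cold = 6.4                      # mg N2O-N m^-2 yr^-1, before warming
e_warm = 21.6                     # after abrupt warming, no extra N fixation
slope = 1.6 / 1000.0              # mg N2O-N emitted per mg N fixed

def n2o_emission(n_fixed_mg):
    """Emission after warming, given annual N fixation (mg N m^-2 yr^-1)."""
    return e_warm + slope * n_fixed_mg

print(f"warming alone: x{e_warm / e_cold:.1f}")        # ~3.4
print(f"with 2000 mg N fixation: {n2o_emission(2000):.1f} mg N2O-N m^-2 yr^-1")
```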
Abstract:
Neoadjuvant platin-based therapy is accepted as a standard therapy for advanced esophageal adenocarcinoma (EAC). Patients who respond have a better survival prognosis, but a significant number of responders still die from tumor recurrence. Molecular markers for prognosis in neoadjuvantly treated EAC patients have not yet been identified. We investigated the role of the epidermal growth factor receptor (EGFR) in prognosis and chemotherapy resistance in these patients. Two EAC patient cohorts, treated either by neoadjuvant cisplatin-based chemotherapy followed by surgery (n=86) or by surgical resection alone (n=46), were analyzed for EGFR protein expression and gene copy number. Data were correlated with clinical and histopathological response, disease-free and overall survival. In cases of EGFR overexpression, the prognosis for neoadjuvant chemotherapy responders was as poor as for non-responders. Responders had a significantly better disease-free survival than non-responders only if the EGFR expression level (p=0.0152) or copy number (p=0.0050) was low. Comparing neoadjuvantly treated patients and primary resection patients, tumors of non-responder patients more frequently exhibited EGFR overexpression, providing evidence that EGFR is a factor in chemotherapy resistance. EGFR overexpression and gene copy number are independent adverse prognostic factors for neoadjuvant chemotherapy-treated EAC patients, particularly for responders. Furthermore, EGFR overexpression is involved in resistance to cisplatin-based neoadjuvant chemotherapy.
Abstract:
BACKGROUND: Due to the implementation of the diagnosis-related groups (DRG) system, the competitive pressure on German hospitals has increased. In this context it has been shown that acute pain management offers economic benefits for hospitals. The aim of this study was to analyze the impact of the competitive situation, the ownership and the economic resources required on structures and processes for acute pain management. MATERIAL AND METHODS: A standardized questionnaire on structures and processes of acute pain management was mailed to the 885 directors of German departments of anesthesiology listed as members of the German Society of Anesthesiology and Intensive Care Medicine (DGAI, Deutsche Gesellschaft für Anästhesiologie und Intensivmedizin). RESULTS: For most hospitals a strong regional competition existed; however, this parameter affected neither the implementation of structures nor the recommended treatment processes for pain therapy. In contrast, a clear preference of hospitals in private ownership for using the benchmarking tool QUIPS (quality improvement in postoperative pain therapy) was found. These hospitals also more often presented information on coping with pain in the corporate clinic mission statement and more frequently published information about the quality of acute pain management in their quality reports. No differences were found between hospitals with different forms of ownership in the implementation of acute pain services, quality circles, the expert standard for pain management or the implementation of recommended processes. Hospitals with a higher case mix index (CMI) more often had certified acute pain management. The corporate mission statements of these hospitals also more frequently contained information on coping with pain, presentation of the quality of pain management in the quality report, implementation of quality circles and implementation of the expert standard for pain management. There were no differences in the frequency of using the benchmarking tool QUIPS or in the implementation of recommended treatment processes with respect to the CMI. CONCLUSION: In this survey no effect of the competitive situation of hospitals on acute pain management could be demonstrated. Private ownership and a higher CMI were more often associated with structures of acute pain management that were publicly accessible in terms of hospital marketing.
Abstract:
Treatment allocation by epidermal growth factor receptor (EGFR) mutation status is a new standard in patients with metastatic non-small-cell lung cancer. Yet, relatively few modern chemotherapy trials were conducted in patients characterized by EGFR wild type. We describe the results of a multicenter phase II trial, testing in parallel 2 novel combination therapies, predefined molecular markers, and tumor rebiopsy at progression. Objective: The goal was to demonstrate that tailored therapy, according to tumor histology and EGFR mutation status, and the introduction of novel drug combinations in the treatment of advanced non-small-cell lung cancer are promising for further investigation. Methods: We conducted a multicenter phase II trial with mandatory EGFR testing and 2 strata. Patients with EGFR wild type received 4 cycles of bevacizumab, pemetrexed, and cisplatin, followed by maintenance with bevacizumab and pemetrexed until progression. Patients with EGFR mutations received bevacizumab and erlotinib until progression. Patients had computed tomography scans every 6 weeks and repeat biopsy at progression. The primary end point was progression-free survival (PFS) ≥ 35% at 6 months in the EGFR wild-type stratum; 77 patients were required to reach a power of 90% with an alpha of 5%. Secondary end points were median PFS, overall survival, best overall response rate (ORR), and tolerability. Further biomarkers and biopsy at progression were also evaluated. Results: A total of 77 evaluable patients with EGFR wild type received an average of 9 cycles (range, 1-25). PFS at 6 months was 45.5%, median PFS was 6.9 months, overall survival was 12.1 months, and ORR was 62%. Kirsten rat sarcoma oncogene mutations and circulating vascular endothelial growth factor negatively correlated with survival, but thymidylate synthase expression did not. A total of 20 patients with EGFR mutations received an average of 16 cycles.
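The stated design (PFS ≥ 35% at 6 months, alpha 5%, power 90%, n = 77) is consistent with a one-sided exact binomial test against a null rate of about 20%; the abstract does not state the null rate, so the sketch below treats it as an assumption.

```python
# Hedged sketch of a single-arm design check: exact one-sided binomial
# test of H0: p <= p0 against p1, with p0 = 0.20 assumed (not stated in
# the abstract) and p1 = 0.35 as reported, in n = 77 patients.
from scipy.stats import binom

n, p0, p1 = 77, 0.20, 0.35

# Smallest critical count r with P(X >= r | p0) <= 0.05
r = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= 0.05)
alpha = binom.sf(r - 1, n, p0)       # actual type I error under H0
power = binom.sf(r - 1, n, p1)       # probability of success under p1
print(f"reject H0 if >= {r}/{n} successes: "
      f"alpha = {alpha:.3f}, power = {power:.3f}")
```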
Abstract:
Decision strategies aim at enabling reasonable decisions in cases of uncertain policy decision problems which do not meet the conditions for applying standard decision theory. This paper focuses on decision strategies that account for uncertainties by deciding whether a proposed list of policy options should be accepted or revised (scope strategies) and whether to decide now or later (timing strategies). They can be used in participatory approaches to structure the decision process. As a basis, we propose to classify the broad range of uncertainties affecting policy decision problems along two dimensions, source of uncertainty (incomplete information, inherent indeterminacy and unreliable information) and location of uncertainty (information about policy options, outcomes and values). Decision strategies encompass multiple and vague criteria to be deliberated in application. As an example, we discuss which decision strategies may account for the uncertainties related to nutritive technologies that aim at reducing methane (CH4) emissions from ruminants as a means of mitigating climate change, limiting our discussion to published scientific information. These considerations not only speak in favour of revising rather than accepting the discussed list of options, but also in favour of active postponement or semi-closure of decision-making rather than closure or passive postponement.
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) into computable general equilibrium (CGE) models for the four workhorse models of the modern trade literature. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model and the Eaton and Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs and a demand externality. As is already known in the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model, generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason, the Melitz model features a demand externality: in a larger market, more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
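A stylized sketch of the unifying idea follows, assuming an illustrative three-region Armington CES system in which the richer models enter through a generalized-marginal-cost multiplier on each supplier's price; none of the numbers come from the paper's GTAP calibration, and the demand externality is only noted, not modeled.

```python
# Stylized Armington CES import demand with "generalized marginal cost"
# multipliers standing in for, e.g., Melitz-style entry effects.
# All numbers and the multiplier values are illustrative assumptions.
import numpy as np

sigma = 4.0                                   # Armington elasticity, assumed
base_price = np.array([1.00, 1.10, 0.95])     # supplier prices, 3 regions
gen_mc_mult = np.array([1.00, 0.92, 1.05])    # generalized-marginal-cost factors

p_eff = base_price * gen_mc_mult              # effective (generalized) prices
P = (p_eff ** (1 - sigma)).sum() ** (1 / (1 - sigma))   # CES price index

expenditure = 100.0                           # market size; a demand externality
                                              # would scale this with firm entry
shares = (p_eff / P) ** (1 - sigma)           # CES expenditure shares (sum to 1)
q = shares * expenditure / p_eff              # quantities demanded per region
print("expenditure shares:", np.round(shares, 3))
print("quantities:", np.round(q, 2))
```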