17 results for Dynamic Marginal Cost

in Helda - Digital Repository of the University of Helsinki


Relevance:

80.00%

Publisher:

Abstract:

The objective of this thesis is to find out how dominant firms in a liberalised electricity market will react when they face an increase in the level of costs due to emissions trading, and how this will affect the price of electricity. The Nordic electricity market is chosen as the setting in which to examine the question, since recent studies on the subject suggest that the interaction between electricity markets and emissions trading is very much dependent on conditions specific to each market area. There is reason to believe that imperfect competition prevails in the Nordic market, so the issue is approached through the theory of oligopolistic competition. The generation capacity available on the market, the marginal cost of electricity production and seasonal levels of demand form the data on the basis of which the dominant firms are modelled using the Cournot model of competition. The calculations are made for two levels of demand, high and low, and with several values of demand elasticity. The producers are first modelled under no carbon costs and then by adding the cost of carbon dioxide at €20/t to those technologies subject to carbon regulation. In all cases the situation under perfect competition is determined as a comparison point for the results of the Cournot game. The results imply that the potential for market power does exist in the Nordic market, but the possibility of exercising market power depends on the demand level. In seasons of high demand the dominant firms may raise the price significantly above competitive levels, and the situation is aggravated when the cost of carbon dioxide is accounted for. Under low demand levels there is no difference between perfect and imperfect competition. The results are highly dependent on the price elasticity of demand.
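
To make the mechanics concrete, the following is a minimal sketch of a Cournot equilibrium with linear inverse demand and constant marginal costs, computed first without and then with a €20/t carbon cost added to the emitting technologies. The demand parameters, marginal costs, emission factors and number of firms are illustrative assumptions, not the data used in the thesis.

```python
# Minimal Cournot sketch: linear inverse demand P = a - b*Q and constant marginal
# costs, with an optional CO2 cost added to emitting technologies. All numbers are
# illustrative assumptions, not the thesis data.

def cournot_equilibrium(a, b, mc):
    """Closed-form Cournot quantities for linear demand and constant marginal costs:
    q_i = (a - n*c_i + sum(c)) / (b*(n + 1)); the price follows from P = a - b*Q.
    Assumes an interior solution where every firm produces a positive quantity."""
    n = len(mc)
    total_c = sum(mc)
    q = [(a - n * c_i + total_c) / (b * (n + 1)) for c_i in mc]
    price = a - b * sum(q)
    return price, q

a, b = 100.0, 0.05                 # hypothetical demand intercept and slope
mc = [12.0, 18.0, 25.0]            # hypothetical marginal costs, EUR/MWh
emission_factor = [0.0, 0.4, 0.8]  # t CO2 per MWh (e.g. hydro, gas, coal)
co2_price = 20.0                   # EUR per tonne, as in the thesis scenario

p0, _ = cournot_equilibrium(a, b, mc)
p1, _ = cournot_equilibrium(a, b, [c + co2_price * e for c, e in zip(mc, emission_factor)])
print(f"Cournot price: {p0:.2f} EUR/MWh without CO2 cost, {p1:.2f} EUR/MWh with it")
```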

Relevance:

20.00%

Publisher:

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million.

Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting HRQoL data on approximately 4 900 patients before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services.

Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-and-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV): it was €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, thus precluding the establishment of the cost per QALY. In arthroplasty patients (Study V) the mean cost per QALY gained over a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement.

Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of treatment effectiveness. Most cost-effectiveness and cost-utility analyses are based on modelling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be easy enough and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70 %, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery leads to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL, but the cost per QALY gained from knee replacement is roughly twice that of hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except revision hip arthroplasty, was well below €50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention. Based on the present study it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
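
A back-of-the-envelope sketch of the kind of cost-per-QALY figure computed in Studies III-V is given below; the utility scores, remaining life expectancy, hospital cost and the 3 % discount rate are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch of a cost-per-QALY calculation: the utility gain measured with a
# 15D-type instrument is assumed to persist over the expected remaining life years,
# discounted annually. All numbers are hypothetical placeholders.

def discounted_qalys(utility_gain, remaining_years, discount_rate=0.03):
    """QALYs gained when a constant utility gain persists over the remaining life years."""
    return sum(utility_gain / (1.0 + discount_rate) ** t for t in range(int(remaining_years)))

utility_before = 0.80     # utility score before surgery (hypothetical)
utility_after = 0.86      # utility score 3-12 months after surgery (hypothetical)
remaining_years = 15      # expected remaining life years (hypothetical)
direct_cost = 6000.0      # direct hospital cost in EUR (hypothetical)

gain = discounted_qalys(utility_after - utility_before, remaining_years)
print(f"QALYs gained: {gain:.2f}, cost per QALY: {direct_cost / gain:.0f} EUR")
```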

Relevance:

20.00%

Publisher:

Abstract:

Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. They both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. Therefore, it is of interest to examine the effect of production technology and of changes in input and output prices on feeding and slaughter decisions. This study examines the problem by using a dynamic programming model that links the genetic characteristics of a pig to feeding decisions and the timing of slaughter, and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth mechanism of a pig under optional feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of a pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes carcass quality into account while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that the producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefits from obtaining pigs of improved genotype can be more than €20 per capacity unit. The annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype can reach optimal slaughter maturity more quickly and produce leaner meat than animals of poor genotype. In order to fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results also suggest that the producer can benefit from flexible feeding technology, which segregates pigs into groups according to their weight, carcass leanness, genotype and sex, and thereafter optimises feeding and slaughter decisions separately for each group. Typically, such a technology provides incentives to feed piglets with protein-rich feed so that the genetic potential to produce leaner meat is fully utilised. When the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. In particular, an increase in the price of pig meat provides incentives to increase growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in the slaughter premium can also have large income effects. Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
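
To illustrate the recursive structure, below is a toy backward-recursion sketch of the joint feeding/slaughter decision. The growth rates, price grid, feed costs, carcass yield and capacity cost are simplified assumptions made for the example; the thesis model is far richer (genotype, carcass leanness, protein/energy split, and an internal-rate-of-return objective per capacity unit).

```python
# Toy dynamic program for the joint feeding/slaughter decision. Each day the producer
# either slaughters the pig or chooses a feeding level for one more day. All numbers
# (prices, feed costs, growth rates) are hypothetical placeholders.
from functools import lru_cache

CARCASS_YIELD = 0.75
FEED = {"low": (7, 0.55), "high": (9, 0.85)}  # (gain in 0.1 kg/day, feed cost EUR/day)
CAPACITY_COST = 0.10                          # EUR/day for occupying the pen
HORIZON = 130                                 # last possible slaughter day

def carcass_price(carcass_kg):
    """EUR/kg: a premium inside a target weight window mimics the quality-adjusted
    carcass value; outside the window the price per kilogram drops."""
    return 1.75 if 76 <= carcass_kg <= 92 else 1.45

@lru_cache(maxsize=None)
def value(day, weight10):
    """Best achievable value (EUR) from `day` on for a pig weighing weight10/10 kg."""
    carcass = (weight10 / 10.0) * CARCASS_YIELD
    slaughter_now = carcass_price(carcass) * carcass
    if day == HORIZON:
        return slaughter_now
    feed_on = max(value(day + 1, weight10 + gain) - cost - CAPACITY_COST
                  for gain, cost in FEED.values())
    return max(slaughter_now, feed_on)

print(f"Value of an optimally managed 30 kg piglet: {value(0, 300):.2f} EUR")
```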

Relevance:

20.00%

Publisher:

Abstract:

Climate change is the single biggest environmental problem in the world at the moment. Although its effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in mitigating climate change in Finland. The theoretical base of this study is neoclassical economic theory, which rests on the assumption of a rational economic agent who maximizes his own utility. This theoretical base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem, and central in the theory of environmental economics, are the concepts of externalities and public goods, as well as the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, which covers the whole EU, was used to estimate the development of pollution. This model is a seemingly recursive, partially dynamic partial-equilibrium model constructed to predict the development of Finnish agricultural production of the most important products. For the study, I updated the model and widened its scope in some relevant respects. I also devised a calculation table that computes greenhouse gas emissions according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison with the baseline scenario of Agenda 2000 agricultural policy. The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils, and 5) the combination of the last three scenarios as the maximal achievable reduction. The maximal achievement, reached in alternative scenario 5, was one third of the level achieved in the baseline scenario. The CAP reform caused only a minor reduction compared with the baseline scenario. In contrast, the free trade scenario and the technological change scenario each caused a significant reduction on their own. The biggest single reduction was achieved by banning the cultivation of organic soils. However, this was also the most questionable scenario to realize; the reasons for this are elaborated further in the paper. The maximal reduction that can be achieved in the Finnish agricultural sector is about 11 % of the emission reduction needed to comply with the Kyoto Protocol.
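
The emission accounting behind such a calculation table follows the IPCC inventory logic of activity data multiplied by emission factors, with non-CO2 gases converted to CO2 equivalents. A minimal sketch is given below; the source categories, activity levels and emission factors are illustrative placeholders (only the GWP values are the standard AR4 ones), not figures used in the thesis.

```python
# Minimal sketch of IPCC-style inventory accounting: emissions = activity data x
# emission factor, with CH4 and N2O converted to CO2 equivalents via GWP factors.
# Activity levels and emission factors below are placeholders, not the thesis data.

GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}   # 100-year GWPs (IPCC AR4 values)

# source: (activity level, emission factor per unit of activity, gas) -- hypothetical
SOURCES = {
    "enteric_fermentation":     (900_000, 0.09, "CH4"),     # head of cattle x t CH4/head
    "agricultural_soils":       (2_200_000, 0.0008, "N2O"), # ha x t N2O/ha
    "organic_soil_cultivation": (60_000, 6.5, "CO2"),       # ha x t CO2/ha
}

def co2_equivalents(sources):
    """Sum emissions over sources, converted to tonnes of CO2 equivalent."""
    return sum(activity * ef * GWP[gas] for activity, ef, gas in sources.values())

baseline = co2_equivalents(SOURCES)
no_organic = co2_equivalents({k: v for k, v in SOURCES.items()
                              if k != "organic_soil_cultivation"})
print(f"Baseline: {baseline / 1e6:.2f} Mt CO2-eq; "
      f"banning organic-soil cultivation removes {(baseline - no_organic) / 1e6:.2f} Mt")
```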

Relevance:

20.00%

Publisher:

Abstract:

In Finland one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication but also those affected by other environmental impacts. Most of the factors deteriorating the water quality of Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the discharge of sewage from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study focused on small, human-impacted lakes located close to settlement areas that have a significant value for the local population. The aim of this thesis was to obtain information from lake sediment records for ongoing lake restoration activities and to prove that a well-planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target setting and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine the main focus was on sediment mapping, as well as on long-term changes in sedimentation, which were compared with those of Lake Pääjärvi. In Lake Hormajärvi, the roles of different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have influenced the lake drastically, and the changes caused by the metal load were investigated. The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but they also suffer from problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water-level changes, which were investigated. The methods were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed description of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies prove that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thorough mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom-dynamic zones, considering the inlets and outlets as well as the effects of possible point loaders of the lake.
In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem. The results also show that information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities and improves the cost-efficiency of the project.
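
One of the quantitative steps mentioned above, the diatom-inferred total phosphorus reconstruction, is commonly done with a weighted-averaging transfer function: the TP optimum of each taxon, calibrated from a training set of lakes, is averaged with weights given by the taxon's relative abundance in a sediment sample (real applications add tolerance weighting and deshrinking). A minimal sketch follows; the optima and abundances are invented for illustration, not data from the thesis.

```python
# Weighted-averaging sketch of a diatom-inferred total phosphorus (TP) reconstruction.
# Taxon optima would come from a regional training set; the values below are invented.

TP_OPTIMUM = {          # TP optimum per taxon in ug/l (hypothetical)
    "Aulacoseira ambigua": 28.0,
    "Cyclotella comensis": 9.0,
    "Fragilaria crotonensis": 20.0,
}

def diatom_inferred_tp(counts):
    """Abundance-weighted average of the taxon TP optima for one sediment sample."""
    known = {taxon: n for taxon, n in counts.items() if taxon in TP_OPTIMUM}
    total = sum(known.values())
    return sum(n * TP_OPTIMUM[taxon] for taxon, n in known.items()) / total

sample = {"Aulacoseira ambigua": 120, "Cyclotella comensis": 40, "Fragilaria crotonensis": 60}
print(f"Diatom-inferred TP: {diatom_inferred_tp(sample):.1f} ug/l")
```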

Relevance:

20.00%

Publisher:

Abstract:

Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to the proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility using different protocols and routing topologies. We also present a new matching technique called temporal subspace matching. The technique combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and be defined as subspaces of the content space. We also present an application of temporal subspace matching pertaining to metadata-based continuous collection and object tracking.
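
As an illustration of the covering relation that such routing tables are organized around, the sketch below represents a filter as a set of per-attribute numeric intervals: one filter covers another when every notification matched by the second is also matched by the first, so only the most general filters need to be forwarded upstream. The filter representation and names are assumptions made for this example, not the thesis data structures.

```python
# Illustrative sketch of the covering relation for content-based filters expressed as
# per-attribute numeric intervals (conjunctive semantics).

from typing import Dict, Tuple

Filter = Dict[str, Tuple[float, float]]   # attribute -> (low, high), inclusive

def covers(general: Filter, specific: Filter) -> bool:
    """True if every notification matched by `specific` is also matched by `general`:
    `general` may not constrain an attribute `specific` leaves free, and each of its
    intervals must contain the corresponding interval of `specific`."""
    for attr, (lo, hi) in general.items():
        if attr not in specific:
            return False
        s_lo, s_hi = specific[attr]
        if lo > s_lo or hi < s_hi:
            return False
    return True

def matches(f: Filter, notification: Dict[str, float]) -> bool:
    """A notification matches when every constrained attribute falls in its interval."""
    return all(attr in notification and lo <= notification[attr] <= hi
               for attr, (lo, hi) in f.items())

f_general = {"price": (0, 100)}
f_specific = {"price": (10, 50), "volume": (0, 1000)}
print(covers(f_general, f_specific))   # True: forwarding only f_general upstream suffices
print(covers(f_specific, f_general))   # False
```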

Relevance:

20.00%

Publisher:

Abstract:

The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that the dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentation and experimentally show that segmenting sequences using this model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and from outlier detection algorithms for sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
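
For reference, the standard dynamic program mentioned above can be sketched as follows for a one-dimensional sequence under squared error: E[j][m] is the minimal error of segmenting the first j points into m segments, computed in O(n²k) time with prefix sums. The input data and function names are illustrative; the thesis's approximation algorithm speeds this exact algorithm up by running it on subsequences and combining the partial solutions.

```python
# Standard O(n^2 * k) dynamic program for optimal k-segmentation of a 1-D sequence
# under squared error, with segment costs computed from prefix sums.

def segment_cost(prefix, prefix_sq, i, j):
    """Squared error of representing points i..j-1 by their mean."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n

def optimal_segmentation(x, k):
    """Return (minimal error, segment end positions) for a k-segmentation of x."""
    n = len(x)
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, v in enumerate(x):
        prefix[i + 1] = prefix[i] + v
        prefix_sq[i + 1] = prefix_sq[i] + v * v
    INF = float("inf")
    E = [[INF] * (k + 1) for _ in range(n + 1)]    # E[j][m]: error, first j points, m segments
    back = [[0] * (k + 1) for _ in range(n + 1)]   # back[j][m]: start of the last segment
    E[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                cand = E[i][m - 1] + segment_cost(prefix, prefix_sq, i, j)
                if cand < E[j][m]:
                    E[j][m], back[j][m] = cand, i
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append(j)
        j = back[j][m]
    return E[n][k], sorted(bounds)

data = [1, 1.2, 0.9, 5, 5.1, 4.8, 9, 9.2]
print(optimal_segmentation(data, 3))   # boundaries [3, 6, 8]: three level segments
```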

Relevance:

20.00%

Publisher:

Abstract:

The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems, as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows the subscribers to express their interests very accurately. In order to implement content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network. A content-based network is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes with a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking, and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
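
To ground the terminology, here is a minimal in-process sketch of the content-based publish/subscribe interaction: subscribers register predicates over notification content, and the broker delivers each published notification to every matching subscriber, with publishers never naming the recipients. The class and method names are illustrative, not an API from the thesis.

```python
# Minimal content-based publish/subscribe sketch: producers and consumers are decoupled,
# and delivery is driven entirely by the content of each notification.

from typing import Callable, Dict, List, Tuple

Notification = Dict[str, object]
Predicate = Callable[[Notification], bool]

class ContentBasedBroker:
    def __init__(self) -> None:
        self._subs: List[Tuple[Predicate, Callable[[Notification], None]]] = []

    def subscribe(self, predicate: Predicate, deliver: Callable[[Notification], None]) -> None:
        """Register interest: `deliver` is called for every matching notification."""
        self._subs.append((predicate, deliver))

    def publish(self, notification: Notification) -> None:
        """Producers publish without knowing who, if anyone, receives the event."""
        for predicate, deliver in self._subs:
            if predicate(notification):
                deliver(notification)

broker = ContentBasedBroker()
broker.subscribe(lambda n: n.get("type") == "quote" and n.get("price", 0) > 50,
                 lambda n: print("subscriber got", n))
broker.publish({"type": "quote", "symbol": "XYZ", "price": 63.2})   # delivered
broker.publish({"type": "quote", "symbol": "XYZ", "price": 12.0})   # dropped
```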

Relevance:

20.00%

Publisher:

Abstract:

The information that economic agents have and regard as relevant to their decision making is often assumed to be exogenous in economics. It is assumed that the agents either possess or can observe the payoff-relevant information without having to exert any effort to acquire it. In this thesis we relax the assumption of an ex-ante fixed information structure and study what happens to equilibrium behavior when the agents must also decide what information to acquire and when to acquire it. The thesis addresses this question in two essays on herding and two essays on auction theory. In the first two essays, which are joint work with Klaus Kultti, we study herding models where it is costly to acquire information on the actions that the preceding agents have taken. In our model the agents have to decide both the action that they take and the information that they want to acquire by observing their predecessors. We characterize the equilibrium behavior when the decision to observe preceding agents' actions is endogenous and show how the equilibrium outcome may differ from the standard model, where all preceding agents' actions are assumed to be observable. In the latter part of this thesis we study two dynamic auctions: the English and the Dutch auction. We consider a situation where one or more bidders are uninformed about their valuations for the object that is put up for sale and may acquire this information for a small cost at any point during the auction. We study the case of independent private valuations. In the third essay of the thesis we characterize the equilibrium behavior in an English auction when there are informed and uninformed bidders. We show that the informed bidder may jump bid and signal to the uninformed that he has a high valuation, thus deterring the uninformed from acquiring information and staying in the auction. The uninformed bidder optimally acquires information once the price has passed a particular threshold and the informed bidder has not signalled that his valuation is high. In addition, we provide an example of an information structure where the informed bidder initially waits and then makes multiple jumps. In the fourth essay of this thesis we study the Dutch auction. We consider two cases where all bidders are initially uninformed. In the first case the information acquisition cost is the same across all bidders; in the second, the cost of information acquisition is also independently distributed and private information to the bidders. We characterize a mixed strategy equilibrium in the first case and a pure strategy equilibrium in the second. In addition we provide a conjecture of an equilibrium in an asymmetric situation where there is one informed and one uninformed bidder. We compare the revenues that the first-price auction and the Dutch auction generate and find that under some circumstances the Dutch auction outperforms the first-price sealed-bid auction. The usual first-price sealed-bid auction and the Dutch auction are strategically equivalent; however, this equivalence breaks down when information is acquired during the auction.

Relevance:

20.00%

Publisher:

Abstract:

The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, and leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and the replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
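
The building block shared by all three essays is the matching function. As a hedged illustration of the framework rather than of any calibration in the thesis, the sketch below uses a Cobb-Douglas matching function to derive the job-finding rate and the steady-state unemployment rate implied by flow balance; the parameter values are placeholders.

```python
# Illustrative search-matching building block: Cobb-Douglas matching m(u, v) =
# A * u**alpha * v**(1 - alpha), the implied job-finding rate f(theta) with market
# tightness theta = v/u, and steady-state unemployment u = s / (s + f).
# Parameter values are placeholders, not calibrations from the thesis.

def job_finding_rate(theta, A=0.6, alpha=0.5):
    """f(theta) = m(u, v) / u = A * theta**(1 - alpha)."""
    return A * theta ** (1 - alpha)

def steady_state_unemployment(separation_rate, theta):
    """Flow balance: s * (1 - u) = f(theta) * u  =>  u = s / (s + f)."""
    f = job_finding_rate(theta)
    return separation_rate / (separation_rate + f)

for theta in (0.3, 0.6, 1.0):
    u = steady_state_unemployment(separation_rate=0.03, theta=theta)
    print(f"tightness {theta:.1f} -> unemployment rate {u:.1%}")
```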

Relevance:

20.00%

Publisher:

Abstract:

This study addressed large-scale molecular zoogeography in two brackish-water bivalve molluscs, Macoma balthica and Cerastoderma glaucum, and the genetic signatures of their postglacial colonization of Northern Europe. The traditional view posits that M. balthica in the Baltic, White and Barents seas (i.e. the marginal seas) represents direct postglacial descendants of the adjacent Northeast Atlantic populations, but this has recently been challenged by observations of close genetic affinities between these marginal populations and those of the Northeast Pacific. The primary aim of the thesis was to verify, quantify and characterize the Pacific genetic contribution across North European populations of M. balthica and to resolve the phylogeographic histories of the two bivalve taxa in range-wide studies using information from mitochondrial DNA (mtDNA) and nuclear allozyme polymorphisms. The presence of recent Pacific genetic influence in M. balthica of the Baltic, White and Barents seas, along with an Atlantic element, was confirmed by mtDNA sequence data. On a broader temporal and geographical scale, altogether four independent trans-Arctic invasions of Macoma from the Pacific since the Miocene seem to have been involved in generating the current North Atlantic lineage diversity. The latest trans-Arctic invasion, which affected the current Baltic, White and Barents Sea populations, probably took place in the early postglacial period. The nuclear genetic compositions of these marginal sea populations are intermediate between those of the pure Pacific and Atlantic subspecies. In the marginal sea populations of mixed ancestry (Barents, White and Northern Baltic seas), the Pacific and Atlantic components are now randomly associated in the genomes of individual clams, which indicates both pervasive historical interbreeding between the previously long-isolated lineages (subspecies) and current isolation of these populations from the adjacent pure Atlantic populations. These mixed populations can be characterized as self-supporting hybrid swarms, and they arguably represent the most extensive marine animal hybrid swarms so far documented. Each of the three swarms still has a distinct genetic composition, and the relative Pacific contributions vary from 30 to 90 % in local populations. This diversity highlights the potential of introgressive hybridization to rapidly give rise to new evolutionarily and ecologically significant units in the marine realm. South of the Danish straits and in the Southern Baltic Sea, a broad genetic transition zone links the pure North Sea subspecies M. balthica rubra to the inner Baltic hybrid swarm, which has about 60 % Pacific contribution in its genome. This transition zone has no regular smooth clinal structure, but its populations show strong genotypic disequilibria typical of a hybrid zone maintained by the interplay of selection and gene flow through dispersing pelagic larvae. The structure of the genetic transition is partly in line with features of Baltic water circulation and salinity stratification, with greater penetration of Atlantic genes on the Baltic south coast and in deeper-water populations. In all, the scenarios of historical isolation and secondary contact that arise from the phylogeographic studies of both Macoma and Cerastoderma shed light on the more general but enigmatic patterns seen in marine phylogeography, where deep genetic breaks are often observed in species with high dispersal potential.

Relevance:

20.00%

Publisher:

Abstract:

Esophageal and gastroesophageal junction (GEJ) adenocarcinoma is a rapidly increasing disease with a pathophysiology connected to oxidative stress. Exact pre-treatment clinical staging is essential for optimal care of this lethal malignancy, and the cost-effectiveness of treatment is increasingly important. We measured oxidative metabolism in the distal and proximal esophagus by myeloperoxidase activity (MPA), glutathione content (GSH), and superoxide dismutase (SOD) in 20 patients operated on with Nissen fundoplication and 9 controls during a 4-year follow-up. Further, we assessed the oxidative damage of DNA by 8-hydroxydeoxyguanosine (8-OHdG) in esophageal samples of subjects (13 with Barrett's metaplasia, 6 with Barrett's esophagus with high-grade dysplasia, 18 with adenocarcinoma of the distal esophagus/GEJ, and 14 normal controls). We estimated the accuracy (42 patients) and preoperative prognostic value (55 patients) of PET compared with computed tomography (CT) and endoscopic ultrasound (EUS) in patients with adenocarcinoma of the esophagus/GEJ. Finally, we clarified the specialty-related costs and the utility of either radical (30 patients) or palliative (23 patients) treatment of esophageal/GEJ carcinoma using the 15D health-related quality-of-life (HRQoL) questionnaire and the survival rate. The cost-utility of radical treatment of esophageal/GEJ carcinoma was investigated using a decision tree analysis model comparing radical, palliative, and a hypothetical new treatment. We found elevated oxidative stress (measured by MPA) and decreased antioxidant defense (measured by GSH) after antireflux surgery. This indicates that antireflux surgery is not a perfect solution for oxidative stress of the esophageal mucosa. Elevated oxidative stress may in turn partly explain why adenocarcinoma of the distal esophagus is found even after successful fundoplication. In GERD patients, proximal esophageal mucosal anti-oxidative defense seems to be defective before and even years after successful antireflux surgery. In addition, antireflux surgery apparently does not change the level of oxidative stress in the proximal esophagus, suggesting that defective mucosal anti-oxidative capacity plays a role in the development of oxidative damage to the esophageal mucosa in GERD. Oxidative stress appears to be an important component in the malignant transformation of Barrett's esophagus. DNA damage may be mediated by 8-OHdG, which we found to be increased in Barrett's epithelium and in high-grade dysplasia, as well as in adenocarcinoma of the esophagus/GEJ, compared with controls. The entire esophagus of Barrett's patients suffers from increased oxidative stress (measured by 8-OHdG). PET is a useful tool in the staging and prognostication of adenocarcinoma of the esophagus/GEJ, detecting organ metastases better than CT, although its accuracy in the staging of paratumoral and distant lymph nodes is limited. Radical surgery for esophageal/GEJ carcinoma provides the greatest benefit in terms of survival, and its cost-utility appears to be the best of currently available treatments.
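
To illustrate the decision-tree style of cost-utility comparison mentioned above, the sketch below computes expected costs, expected QALYs and incremental cost-effectiveness ratios for three treatment strategies. The branch probabilities, costs and QALY estimates are invented placeholders, not results from the thesis.

```python
# Toy decision-tree cost-utility comparison of three strategies. Each strategy is a set
# of branches (probability, cost in EUR, QALYs gained); all numbers are invented.

STRATEGIES = {
    "palliative": [(1.00, 15_000, 0.6)],
    "radical":    [(0.60, 45_000, 3.0),   # survives surgery, long-term benefit
                   (0.40, 55_000, 0.5)],  # complications / early recurrence
    "new":        [(0.70, 60_000, 3.2),
                   (0.30, 65_000, 0.8)],
}

def expected(branches):
    """Expected (cost, QALYs) of one strategy over its decision-tree branches."""
    cost = sum(p * c for p, c, _ in branches)
    qaly = sum(p * q for p, _, q in branches)
    return cost, qaly

base_cost, base_qaly = expected(STRATEGIES["palliative"])
for name in ("radical", "new"):
    cost, qaly = expected(STRATEGIES[name])
    icer = (cost - base_cost) / (qaly - base_qaly)   # EUR per QALY gained vs palliative care
    print(f"{name}: ICER {icer:,.0f} EUR/QALY versus palliative care")
```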

Relevance:

20.00%

Publisher:

Abstract:

Protein conformations and dynamics can be studied by nuclear magnetic resonance spectroscopy using dilute liquid crystalline samples. This work clarifies the interpretation of the residual dipolar coupling data yielded by such experiments. It was discovered that unfolded proteins without any additional structure beyond that of a mere polypeptide chain exhibit residual dipolar couplings. It was also found that molecular dynamics induce fluctuations in the molecular alignment and in doing so affect residual dipolar couplings. This finding clarified the origins of the low order parameter values observed earlier. The work required the development of new analytical and computational methods for the prediction of intrinsic residual dipolar coupling profiles for unfolded proteins. The presented characteristic chain model is able to reproduce the general trend of experimental residual dipolar couplings for denatured proteins. The details of experimental residual dipolar coupling profiles are beyond the analytical model, but improvements are proposed to achieve greater accuracy. A computational method for the rapid prediction of unfolded protein residual dipolar couplings was also developed. Protein dynamics were shown to modulate the effective molecular alignment in a dilute liquid crystalline medium. The effects were investigated using experimental and molecular-dynamics-generated conformational ensembles of folded proteins. It was noted that dynamics-induced alignment is significant, especially for the interpretation of molecular dynamics in small, globular proteins. A method of correction was presented. Residual dipolar couplings offer an attractive possibility for the direct observation of protein conformational preferences and dynamics. The presented models and methods of analysis provide significant advances in the interpretation of residual dipolar coupling data from proteins.
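
For orientation, the quantity being interpreted here is commonly written in the standard textbook form D(θ, φ) = Da[(3cos²θ − 1) + (3/2)·R·sin²θ·cos 2φ], where θ and φ give the bond-vector orientation in the alignment frame, Da is the axial component of the alignment tensor and R its rhombicity. The sketch below evaluates this expression for one orientation and for a simple average over a small spread of orientations, the crudest form of motional averaging; the tensor values and orientations are arbitrary examples, not the models developed in the thesis.

```python
# Textbook residual dipolar coupling (RDC) expression evaluated for a bond-vector
# orientation in the alignment frame, plus a simple orientational average.
# The alignment tensor values (Da, R) and the orientations are arbitrary examples.
import math

def rdc(theta_deg, phi_deg, Da=10.0, R=0.3):
    """D(theta, phi) = Da * [(3*cos^2(theta) - 1) + 1.5 * R * sin^2(theta) * cos(2*phi)],
    in the same units as Da (e.g. Hz)."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return Da * ((3 * math.cos(t) ** 2 - 1) + 1.5 * R * math.sin(t) ** 2 * math.cos(2 * p))

static = rdc(20.0, 40.0)
# crude motional averaging: the observed coupling is the mean over sampled orientations
wobble = [rdc(20.0 + d, 40.0 + d) for d in (-15, -7.5, 0, 7.5, 15)]
averaged = sum(wobble) / len(wobble)
print(f"static RDC: {static:.2f} Hz, orientationally averaged RDC: {averaged:.2f} Hz")
```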