17 results for optimal solution
in Helda - Digital Repository of the University of Helsinki
Abstract:
Climate change is the single biggest environmental problem in the world at the moment. Although its effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in the case of climate change mitigation in Finland. The theoretical base of this study is neoclassical economic theory, built on the assumption of a rational economic agent who maximizes his own utility. This base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem, and central in the theory of environmental economics, are the concepts of externalities and public goods. Also relevant are the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, which covers the whole EU, was used to estimate the development of pollution. This model is a recursive, partially dynamic partial-equilibrium model constructed to predict the development of Finnish agricultural production of the most important products. For the study, I personally updated the model and widened its scope in some relevant respects. I also devised a table that calculates the emissions of greenhouse gases according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison to the baseline scenario of the Agenda 2000 agricultural policy. The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils, and 5) the combination of the last three scenarios as the maximal achievable reduction. The maximal achievement in alternative scenario 5 was 1/3 of the level achieved in the baseline scenario. The CAP reform caused only a minor reduction compared to the baseline scenario. In contrast, the free trade scenario and the technological change scenario each caused a significant reduction on their own. The biggest single reduction was achieved by banning the cultivation of organic land. However, this was also the most questionable scenario to realize; the reasons for this are elaborated further in the paper. The maximal reduction that can be achieved in the Finnish agricultural sector is about 11% of the emission reduction needed to comply with the Kyoto Protocol.
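As an illustration of the kind of calculation such an emissions table performs, the sketch below aggregates source-level emissions into CO2 equivalents using the 100-year global warming potentials of the IPCC Second Assessment Report, as applied under the Kyoto Protocol (CH4 = 21, N2O = 310). The source categories, activity levels, and emission factors are hypothetical placeholders, not figures from the thesis.

```python
# Minimal sketch of an IPCC-style Tier 1 aggregation:
# emissions = activity level x emission factor, summed as CO2 equivalents.
# GWP values are the IPCC SAR values used under the Kyoto Protocol; the
# activity data and emission factors below are hypothetical placeholders.

GWP = {"CO2": 1, "CH4": 21, "N2O": 310}  # 100-year GWPs (IPCC SAR)

# (gas, activity level, emission factor per unit of activity) -- hypothetical
SOURCES = [
    ("CH4", 300_000, 0.1),       # e.g. dairy cows, t CH4 per head per year
    ("N2O", 1_500_000, 0.0001),  # e.g. kg N fertilizer, t N2O per kg N
    ("CO2", 200_000, 4.0),       # e.g. ha of cultivated organic soil, t CO2/ha
]

def co2_equivalents(sources):
    """Sum emissions over all sources, in tonnes of CO2 equivalent."""
    return sum(GWP[gas] * activity * factor for gas, activity, factor in sources)

print(f"Total: {co2_equivalents(SOURCES):,.0f} t CO2-eq per year")
```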
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while providing a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions of the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we show experimentally that segmenting sequences using this model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and we show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier-detection algorithms for sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
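For concreteness, the standard dynamic programming baseline mentioned above can be sketched as follows for a one-dimensional sequence with squared-error segment cost; its O(n^2 k) running time is what the thesis's approximation algorithm improves on. The function and variable names are illustrative, not from the thesis.

```python
# Sketch of the standard O(n^2 * k) dynamic programming algorithm for
# optimal sequence segmentation with squared-error cost. Prefix sums
# give each candidate segment's error in O(1).

def segment(seq, k):
    """Partition seq into k contiguous segments minimizing total squared error."""
    n = len(seq)
    p1 = [0.0] * (n + 1)  # prefix sums of values
    p2 = [0.0] * (n + 1)  # prefix sums of squared values
    for i, x in enumerate(seq):
        p1[i + 1] = p1[i] + x
        p2[i + 1] = p2[i] + x * x

    def err(i, j):  # squared error of segment seq[i:j] around its mean
        s, s2, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    E = [[INF] * (n + 1) for _ in range(k + 1)]  # E[h][j]: best cost, h segments, prefix j
    B = [[0] * (n + 1) for _ in range(k + 1)]    # backpointers to segment starts
    E[0][0] = 0.0
    for h in range(1, k + 1):
        for j in range(h, n + 1):
            for i in range(h - 1, j):
                c = E[h - 1][i] + err(i, j)
                if c < E[h][j]:
                    E[h][j], B[h][j] = c, i

    # Recover segment boundaries by walking the backpointers.
    bounds, j = [], n
    for h in range(k, 0, -1):
        bounds.append((B[h][j], j))
        j = B[h][j]
    return list(reversed(bounds)), E[k][n]

print(segment([1, 1, 1, 5, 5, 9, 9, 9], 3))  # -> ([(0, 3), (3, 5), (5, 8)], 0.0)
```

The divide-and-combine approximation described in the abstract runs this optimal routine only on short subsequences, which is where its speedup comes from.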
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices and the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities, and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate, and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned directly on soil phosphorus values rather than on annual phosphorus applications. The results also emphasize the substantial effect that differences between the discount rates of the farmer and the social planner have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels. It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow; therefore, depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may have an interesting property under very general conditions: the tail of the payoffs of the privately optimal path, as well as its steady state, may provide higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the created framework of optimal phosphorus use.
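A toy sketch of the kind of dynamic optimization described above: soil phosphorus is the state, annual fertilization the control, and the planner maximizes discounted payoffs by value iteration over a state grid. All functional forms and parameter values are hypothetical illustrations, not the thesis's calibrated clay-soil model.

```python
# Toy value iteration for a soil-phosphorus problem: state s (soil P),
# control a (annual fertilization). Hypothetical functional forms: a
# concave yield response to soil P, linear runoff damage in s, and a
# linear carryover of soil P plus a share of applied fertilizer.
import numpy as np

S = np.linspace(0.0, 30.0, 301)   # soil-P state grid (hypothetical units)
A = np.linspace(0.0, 40.0, 41)    # fertilization control grid
beta, carryover = 0.95, 0.7       # discount factor; soil-P carryover share

def payoff(s, a, social=True):
    yield_value = 50.0 * np.sqrt(s + 1.0) - 0.8 * a   # crop response minus input cost
    damage = 2.0 * s if social else 0.0               # runoff damage rises with soil P
    return yield_value - damage

V = np.zeros_like(S)
s_next = np.clip(carryover * S[:, None] + 0.3 * A[None, :], S[0], S[-1])
for _ in range(500):                                   # value iteration to convergence
    Q = payoff(S[:, None], A[None, :]) + beta * np.interp(s_next, S, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = A[Q.argmax(axis=1)]  # optimal fertilization as a function of soil P
print("optimal fertilization at low / high soil P:", policy[10], policy[-10])
```

Setting social=False drops the damage term and yields the privately optimal policy, so the two steady states can be compared directly.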
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing either with precise temporal resolution (EEG and MEG) or with precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is further facilitated by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas it delayed and suppressed N1 and exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
Abstract:
World marine fisheries suffer from economic and biological overfishing: too many vessels are harvesting too few fish stocks. Fisheries economics has explained the causes of overfishing and provided a theoretical background for management systems capable of solving the problem. Yet only a few examples of fisheries managed by the principles of bioeconomic theory exist. With the aim of bridging the gap between the actual fish stock assessment models used to provide management advice and economic optimisation models, the thesis explores economically sound harvesting from national and international perspectives. Using data calibrated for the Baltic salmon and herring stocks, optimal harvesting policies are outlined using numerical methods. First, the thesis focuses on the socially optimal harvest of a single salmon stock by commercial and recreational fisheries. The results obtained using dynamic programming show that the optimal fishery configuration would be to close down three out of the five studied fisheries. The result is robust to stock size fluctuations. Compared to a base case situation, the optimal fleet structure would yield a slight decrease in the commercial catch, but a recreational catch that is nearly seven times higher. As a result, the expected economic net benefits from the fishery would increase by nearly 60%, and the expected number of juvenile salmon (smolt) would increase by 30%. Second, the thesis explores the management of multiple salmon stocks in an international framework. Non-cooperative and cooperative game theory are used to demonstrate different "what if" scenarios. The results of the four-player game suggest that, despite the commonly agreed fishing quota, the behaviour of the countries has been closer to non-cooperation than cooperation. Cooperation would more than double the net benefits from the fishery compared to a past fisheries policy. Side payments, however, are a prerequisite for a cooperative solution. Third, the thesis applies coalitional games in partition function form to study whether the cooperative solution would be stable despite the potential presence of positive externalities. The results show that the cooperation of two out of the four studied countries can be stable. Compared to a past fisheries policy, a stable coalition structure would provide substantial economic benefits. Nevertheless, the status of the salmon stocks would not improve significantly. Fourth, the thesis studies the prerequisites for and the potential consequences of implementing an individual transferable quota (ITQ) system in the Finnish herring fishery. Simulation results suggest that ITQs would decrease the number of fishing vessels, but would enable positive profits together with a higher stock size. The empirical findings of the thesis affirm that the profitability of the studied fisheries could be improved. The evidence, however, indicates that incentives for free riding exist, and thus the outcome most preferable in both economic and biological terms is elusive.
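The gap between non-cooperative and cooperative harvesting that drives these results can be illustrated with a toy two-player Gordon-Schaefer effort game; the parameter values below are hypothetical, and the model is far simpler than the thesis's calibrated salmon and herring stocks.

```python
# Toy Gordon-Schaefer two-player effort game: each country chooses fishing
# effort; stock settles at the steady state implied by total effort. All
# parameter values are hypothetical illustrations.
import numpy as np

r, K, q, p, c = 0.5, 100.0, 0.01, 10.0, 2.0   # growth, capacity, catchability, price, unit cost
efforts = np.linspace(0.0, 50.0, 5001)

def profit(e_own, e_other):
    E = e_own + e_other
    x = max(K * (1.0 - q * E / r), 0.0)       # steady-state stock under total effort E
    return p * q * e_own * x - c * e_own

def best_response(e_other):
    return efforts[np.argmax([profit(e, e_other) for e in efforts])]

# Iterate best responses to the symmetric Nash equilibrium.
e = 1.0
for _ in range(100):
    e = best_response(e)
nash_total = 2 * profit(e, e)

# Cooperative solution: choose total effort to maximize joint profit.
coop_total = max(profit(E, 0.0) for E in efforts)

print(f"Nash total profit: {nash_total:.1f}, cooperative: {coop_total:.1f}")
```

With these numbers the Nash equilibrium involves higher total effort, a lower stock, and lower joint profit than cooperation, which is the free-riding logic behind the need for side payments noted in the abstract.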
Abstract:
NMR spectroscopy enables the study of biomolecules, from peptides and carbohydrates to proteins, at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states. The information can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy are presented in this thesis. The pulse sequences are described in relation to the molecular information they provide. The pulse sequence experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of 13C,15N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied to structure refinement as well as to studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling-induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.
Abstract:
The purpose of this study is to describe the development of the application of mass spectrometry to the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and the future trends of mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated to proteins, but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. Interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of the molecules may introduce new properties to the organism, such as adaptation to environmental changes or the development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules, and iv) probing of higher-order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been used frequently. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for ionisation of ribonucleic acids and peptides. Ammonium salts are used in ESI buffers and MALDI matrices as additives to remove cation adducts. Reverse-phase high-performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used as ion pair reagents almost exclusively. A Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. A time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and for analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to the quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications, and identification of the nucleobases and amino acids at the sites of interaction. High-throughput screening methods for RNA-binding ligands have been developed. Probing of higher-order structures has provided supportive data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited for structural analyses of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides.
Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents. Demanding assay design and challenging interpretation of data require multidisciplinary knowledge. The application of mass spectrometry to structural studies of ribonucleic acids is probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even to approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
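For the simplest discrete case, the Bernoulli model class, the NML normalizing sum can be written out directly; the sketch below computes the stochastic complexity of a binary sample this way. The dissertation's contribution is making such computations efficient for richer model families, where the naive sum is exponential; the function names here are illustrative.

```python
# Sketch of exact NML computation for the Bernoulli model class. The
# parametric complexity C_n is the normalizing sum over all possible
# data sets, grouped by the sufficient statistic k (number of ones).
from math import comb, log

def bernoulli_nml_complexity(n):
    """C_n = sum_{k=0..n} C(n,k) * (k/n)^k * ((n-k)/n)^(n-k)."""
    total = 0.0
    for k in range(n + 1):
        p = k / n
        total += comb(n, k) * (p ** k) * ((1 - p) ** (n - k))  # 0^0 == 1
    return total

def stochastic_complexity(data):
    """-log NML(data) = -log P(data | ML parameters) + log C_n, in nats."""
    n, k = len(data), sum(data)
    p = k / n
    loglik = (k * log(p) if k else 0.0) + ((n - k) * log(1 - p) if n - k else 0.0)
    return -loglik + log(bernoulli_nml_complexity(n))

print(stochastic_complexity([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))
```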
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classical examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules using statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
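As an illustration of how a single candidate rule is scored, the sketch below builds the 2x2 contingency table for a rule X->A and applies Fisher's exact test via scipy. The data set and attribute names are invented, and the thesis's actual contribution, the pruning and search machinery over the exponential rule space, is not shown here.

```python
# Scoring one candidate rule X -> A with Fisher's exact test on the
# 2x2 contingency table of (X present / absent) vs (A present / absent).
from scipy.stats import fisher_exact

def score_rule(rows, X, A):
    """One-sided p-value that the attributes X increase the probability of A.

    rows: list of sets of positive-valued attributes (one set per record).
    X: frozenset of antecedent attributes; A: the consequent attribute.
    """
    x_a       = sum(1 for r in rows if X <= r and A in r)
    x_not_a   = sum(1 for r in rows if X <= r and A not in r)
    notx_a    = sum(1 for r in rows if not X <= r and A in r)
    notx_nota = sum(1 for r in rows if not X <= r and A not in r)
    _, p = fisher_exact([[x_a, x_not_a], [notx_a, notx_nota]],
                        alternative="greater")
    return p

data = [{"smoker", "disease"}, {"smoker", "disease"}, {"smoker"},
        {"disease"}, set(), set(), set(), set()]
print(score_rule(data, frozenset({"smoker"}), "disease"))
```

Note that this p-value is not monotone in X, which is exactly why the thesis needs the specialized pruning theory instead of standard frequency-based (Apriori-style) pruning.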
Abstract:
Economic and Monetary Union can be characterised as a complicated set of legislation and institutions governing monetary and fiscal responsibilities. The measures of fiscal responsibility are to be guided by the Stability and Growth Pact, which sets rules for fiscal policy and makes a discretionary fiscal policy virtually impossible. To analyse the effects of the fiscal and monetary policy mix, we modified the New Keynesian framework to allow for supply effects of fiscal policy. We show that defining a supply-side channel for fiscal policy using an endogenous output gap changes the stabilising properties of monetary policy rules. The stability conditions are affected by fiscal policy, so that the dichotomy between active (passive) monetary policy and passive (active) fiscal policy as stabilising regimes does not hold, and it is possible to have an active monetary/active fiscal policy regime consistent with dynamical stability of the economy. We show that, if we take supply-side effects into account, we get more persistent inflation and output reactions. We also show that the dichotomy does not hold for a variety of different fiscal policy rules based on government debt and the budget deficit, using the tax smoothing hypothesis and formulating the tax rules as difference equations. The debt rule with active monetary policy results in indeterminacy, while the deficit rule produces a determinate solution with active monetary policy, even with active fiscal policy. The combination of fiscal requirements in a rule results in cyclical responses to shocks. The amplitude of the cycle is larger with more weight on debt than on deficit. Combining optimised monetary policy with fiscal policy rules means that, under a discretionary monetary policy, the fiscal policy regime affects the size of the inflation bias. We also show that commitment to an optimal monetary policy not only corrects the inflation bias but also increases the persistence of output reactions. With fiscal policy rules based on the deficit, we can retain the tax smoothing hypothesis also in a sticky-price model.
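Determinacy claims of this kind rest on the standard Blanchard-Kahn condition: in a linear rational-expectations system E_t z_{t+1} = A z_t, a unique stable solution exists when the number of explosive eigenvalues of A equals the number of forward-looking variables. The sketch below checks this condition for a stylized two-equation New Keynesian block under a Taylor-type interest rate rule; the model and parameter values are textbook illustrations, not the thesis's specification with fiscal rules.

```python
# Blanchard-Kahn determinacy check for E_t z_{t+1} = A z_t: the system is
# determinate iff the number of eigenvalues outside the unit circle equals
# the number of forward-looking (non-predetermined) variables.
import numpy as np

def determinate(A, n_forward):
    explosive = np.sum(np.abs(np.linalg.eigvals(A)) > 1.0)
    return explosive == n_forward

beta, sigma, kappa = 0.99, 1.0, 0.3   # discount factor, risk aversion, Phillips-curve slope

def nk_matrix(phi_pi):
    """E_t [x, pi]_{t+1} = A [x, pi]_t for the basic New Keynesian model
    (IS curve, Phillips curve) with Taylor rule i_t = phi_pi * pi_t."""
    return np.array([
        [1 + kappa / (sigma * beta), (phi_pi - 1 / beta) / sigma],
        [-kappa / beta,              1 / beta],
    ])

for phi_pi in (0.8, 1.5):   # passive vs active monetary policy
    print(f"phi_pi = {phi_pi}: determinate = {determinate(nk_matrix(phi_pi), 2)}")
```

In this toy block both variables are forward-looking, so determinacy requires two explosive eigenvalues; the check reproduces the Taylor principle (determinacy for phi_pi > 1). Adding fiscal rules as difference equations, as the thesis does, enlarges A with predetermined states such as debt, and the same eigenvalue count then delivers the debt-rule versus deficit-rule results described above.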
Abstract:
Esophageal and gastroesophageal junction (GEJ) adenocarcinoma is a rapidly increasing disease with a pathophysiology connected to oxidative stress. Exact pre-treatment clinical staging is essential for optimal care of this lethal malignancy. The cost-effectiveness of treatment is increasingly important. We measured oxidative metabolism in the distal and proximal esophagus by myeloperoxidase activity (MPA), glutathione content (GSH), and superoxide dismutase (SOD) in 20 patients operated on with Nissen fundoplication and in 9 controls during a 4-year follow-up. Further, we assessed oxidative damage to DNA by 8-hydroxydeoxyguanosine (8-OHdG) in esophageal samples of subjects (13 with Barrett's metaplasia, 6 with Barrett's esophagus with high-grade dysplasia, 18 with adenocarcinoma of the distal esophagus/GEJ, and 14 normal controls). We estimated the accuracy (42 patients) and preoperative prognostic value (55 patients) of PET compared with computed tomography (CT) and endoscopic ultrasound (EUS) in patients with adenocarcinoma of the esophagus/GEJ. Finally, we clarified the specialty-related costs and the utility of either radical (30 patients) or palliative (23 patients) treatment of esophageal/GEJ carcinoma by the 15D health-related quality-of-life (HRQoL) questionnaire and the survival rate. The cost-utility of radical treatment of esophageal/GEJ carcinoma was investigated using a decision tree analysis model comparing radical, palliative, and a hypothetical new treatment. We found elevated oxidative stress (measured by MPA) and decreased antioxidant defense (measured by GSH) after antireflux surgery. This indicates that antireflux surgery is not a perfect solution for oxidative stress of the esophageal mucosa. Elevated oxidative stress in turn may partly explain why adenocarcinoma of the distal esophagus is found even after successful fundoplication. In GERD patients, the anti-oxidative defense of the proximal esophageal mucosa seems to be defective before, and even years after, successful antireflux surgery. In addition, antireflux surgery apparently does not change the level of oxidative stress in the proximal esophagus, suggesting that defective mucosal anti-oxidative capacity plays a role in the development of oxidative damage to the esophageal mucosa in GERD. In the malignant transformation of Barrett's esophagus, oxidative stress appears to be an important component. DNA damage may be mediated by 8-OHdG, which we found to be increased in Barrett's epithelium and in high-grade dysplasia, as well as in adenocarcinoma of the esophagus/GEJ, compared with controls. The entire esophagus of Barrett's patients suffers from increased oxidative stress (measured by 8-OHdG). PET is a useful tool in the staging and prognostication of adenocarcinoma of the esophagus/GEJ, detecting organ metastases better than CT, although its accuracy in the staging of paratumoral and distant lymph nodes is limited. Radical surgery for esophageal/GEJ carcinoma provides the greatest benefit in terms of survival, and its cost-utility appears to be the best of the currently available treatments.