29 results for "Optimal solutions"

in Helda - Digital Repository of University of Helsinki


Relevance: 20.00%

Abstract:

Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the acoustic noise generated by fMRI. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is further facilitated by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, the fMRI noise had no effect on MMN and P3a, whereas it delayed and suppressed N1 and the exogenous N2.
Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
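The MMN measure used above is extracted in an oddball paradigm by averaging EEG epochs for standard and deviant tones and taking the deviant-minus-standard difference wave. The following is a minimal sketch of that computation on synthetic data; the sampling rate, trial counts, latencies and amplitudes are illustrative assumptions, not the thesis's recordings.

```python
import numpy as np

# Synthetic single-channel oddball data: deviants carry an extra negative
# deflection peaking near 150 ms (the MMN). All parameters are assumed.
rng = np.random.default_rng(0)
fs = 500                            # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)    # epoch window: -100..400 ms

def make_epochs(n, mmn_amp):
    # Common evoked response plus noise; deviants add an MMN-like bump.
    evoked = -1.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02**2))
    mmn = -mmn_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.03**2))
    return evoked + mmn + rng.normal(0.0, 0.5, (n, t.size))

standards = make_epochs(800, mmn_amp=0.0)
deviants = make_epochs(160, mmn_amp=2.0)

# MMN = deviant average minus standard average; latency = its peak time.
diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)
mmn_latency = t[np.argmin(diff_wave)]
```

With these synthetic amplitudes the difference wave peaks close to the injected 150 ms latency, which is the quantity the thesis compares across frequency ranges.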

Relevance: 20.00%

Abstract:

Background and aims. Fatness and dieting have long been objects of interest in many fields. Home economics as a discipline enables a comprehensive inspection of fatness and dieting across different disciplines. In addition to the perspective of medical and health science on the pursuit of dieting and health, they are also reviewed here as social and cultural phenomena. This study contemplates the influence of history, religion, medicalization and the media on dieting and health culture. The objective is to find out whether modern dieting and health culture has gathered influences from centuries ago and absorbed religious features. The stress deriving from appearance has been discussed in public, and many solutions concerning weight issues are on offer. The purpose of this study is to find out what personal experiences and thoughts female pastors have concerning these questions. The media, one of the most influential systems nowadays, undeniably has a great effect on the consumer. A further goal is to estimate the effect of the media on the changing of dieting and health culture. The three main research questions are: 1. What kinds of conceptions do female pastors have of dieting and health culture and of its religious features? 2. What kinds of personal experiences and conceptions do female pastors have of dieting and strivings for health? 3. How do female pastors regard the image the media has supplied of dieting and health culture? Material and methods. The qualitative data were gathered in 2009 using the semi-structured theme-interview method. The data consist of interviews conducted with specialists in spiritual matters, i.e. ten female pastors between 35 and 60 years old living in the metropolitan area. The analytical procedure used is theory-based content analysis. Results and conclusions.
Results of this study show that the idealization of slimness and healthiness is discussed in public on a daily basis. A problem the respondents faced was that the media provided contradictory information regarding fatness and dieting, and that the standard of slimness in commercials focused on females. The pursuit of dieting and healthiness was believed to include religious elements as well. In the Middle Ages and the era after it, fatness, overeating and the pleasure one gets from eating were still seen as condemnable matters in our culture, almost as a sin. The respondents believed that healthiness, healthy living, optimal eating and good looks had become more or less equivalent to a religion. This derives from the fact that treasuring health has become a life-steering value for many people. In the pastors' profession, dieting and the pursuit of health were seen in the light of problems arising from weight issues: for example, unhealthy eating in festive situations was seen as leading to unnecessary weight gain, and job circumstances limited the degree of physical activity. The respondents believed that female pastors would confront appearance-related stress in their job to a decreasing degree. Keywords: dieting, fatness, healthiness, slimness, female pastors, religion, medicalization, media

Relevance: 20.00%

Abstract:

Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices and by the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned directly on soil phosphorus values rather than on annual phosphorus applications. The results also emphasize the substantial effects that differences between the discount rates of the farmer and the social planner have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels.
It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and the socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow; therefore, depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may have an interesting property under very general conditions: the tail of the payoffs of the privately optimal path, as well as its steady state, may provide a higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the created framework of optimal phosphorus use.
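The structure of the dynamic problem above (soil phosphorus as the state, annual fertilization as the control, discounted payoffs net of runoff damage) can be sketched with simple value iteration. All functional forms and parameters below are illustrative assumptions, not the thesis's calibrated model of clay soils.

```python
import numpy as np

beta = 0.95                  # discount factor (assumed)
p_crop, p_fert = 1.0, 2.0    # output and fertilizer prices (assumed)
damage = 0.05                # marginal damage of dissolved-P runoff (assumed)

states = np.linspace(0.0, 30.0, 301)    # soil P stock (state variable)
controls = np.linspace(0.0, 10.0, 101)  # annual fertilization (control)

def yield_response(soil_p):
    # Crop response driven mainly by the soil P state, as described above.
    return 10.0 * (1.0 - np.exp(-0.15 * soil_p))

def runoff(soil_p):
    # Dissolved-P runoff increases with the soil P state (assumed linear).
    return 0.02 * soil_p

def carryover(soil_p, fert):
    # Next-period soil P: partial carryover plus part of the application.
    return np.clip(0.9 * soil_p + 0.3 * fert, states[0], states[-1])

# Precompute payoff and transition for every (state, control) pair.
S, F = np.meshgrid(states, controls, indexing="ij")
payoff = p_crop * yield_response(S) - p_fert * F - damage * runoff(S)
nxt = carryover(S, F)

V = np.zeros_like(states)
for _ in range(500):         # value iteration to a (near) fixed point
    V_new = (payoff + beta * np.interp(nxt, states, V)).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

Under these assumptions the value function rises with the soil phosphorus stock, mirroring the thesis's point that the state variable, not the annual application, drives the crop response.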

Relevance: 20.00%

Abstract:

The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in the studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering, a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum-mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
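A Compton profile is the one-dimensional projection of the electron momentum density. As a toy illustration of that definition (not the liquid-mixture calculation of the thesis), the hydrogen 1s orbital has a closed-form momentum density in atomic units, so a numerical projection can be checked against the analytic profile J(pz) = 8/(3*pi*(1+pz^2)^3):

```python
import numpy as np

def rho_p(p2):
    # Hydrogen 1s momentum density (atomic units): 8 / (pi^2 (1 + p^2)^4)
    return 8.0 / (np.pi**2 * (1.0 + p2) ** 4)

def compton_profile(pz, qmax=40.0, n=20000):
    # J(pz) = integral of rho over the plane perpendicular to pz,
    # done in polar coordinates: integrand = 2*pi*q * rho(pz^2 + q^2).
    q = np.linspace(0.0, qmax, n)
    integrand = 2.0 * np.pi * q * rho_p(pz**2 + q**2)
    dq = q[1] - q[0]
    # Trapezoidal rule, written out to avoid version-specific helpers.
    return float(np.sum((integrand[:-1] + integrand[1:]) * 0.5 * dq))

pz = 0.0
J_num = compton_profile(pz)
J_exact = 8.0 / (3.0 * np.pi * (1.0 + pz**2) ** 3)
```

The numerical and analytic values agree closely, which is the kind of sanity check one would also want for profiles computed from simulated liquid structures.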

Relevance: 20.00%

Abstract:

The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that the dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering, so that the overall representation error is minimized.
We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier-detection algorithms for sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
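The standard dynamic programming algorithm mentioned above can be sketched as follows for a one-dimensional sequence, with each segment represented by its mean and homogeneity measured by squared error. This is a generic O(n^2 * k) sketch, not the thesis's approximation algorithm:

```python
import numpy as np

def segment(seq, k):
    # Partition seq into k contiguous segments minimizing total squared
    # error around each segment's mean, via dynamic programming.
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    # Prefix sums give any segment's squared error in O(1).
    s1 = np.concatenate([[0.0], np.cumsum(seq)])
    s2 = np.concatenate([[0.0], np.cumsum(seq**2)])

    def seg_err(i, j):  # squared error of seq[i:j] around its mean
        tot, tot2 = s1[j] - s1[i], s2[j] - s2[i]
        return tot2 - tot * tot / (j - i)

    INF = float("inf")
    cost = np.full((k + 1, n + 1), INF)
    back = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for c in range(1, k + 1):            # number of segments used
        for j in range(c, n + 1):        # prefix length
            for i in range(c - 1, j):    # last segment is seq[i:j]
                cand = cost[c - 1, i] + seg_err(i, j)
                if cand < cost[c, j]:
                    cost[c, j], back[c, j] = cand, i
    # Recover the segment boundaries by following the back-pointers.
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = int(back[c, j])
        bounds.append((i, int(j)))
        j = i
    return cost[k, n], bounds[::-1]

err, bounds = segment([1, 1, 1, 9, 9, 9, 5, 5], 3)
```

On this piecewise-constant toy input the optimal 3-segmentation recovers the three level sets exactly, with zero error; the thesis's approximation algorithm runs this kind of exact solver on subsequences and merges the results.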

Relevance: 20.00%

Abstract:

The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, the applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
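The exponential sum in the NML definition can be made concrete in the simplest discrete case, a Bernoulli model class. The normalizer (the parametric complexity) nominally sums the maximized likelihood over all 2^n binary sequences, but it collapses to a sum over the n+1 possible success counts. The sketch below illustrates only this textbook case, not the dissertation's efficient techniques for richer model families:

```python
import math

def bernoulli_nml_normalizer(n):
    # C_n = sum over all sequences of the maximized likelihood
    #     = sum_k C(n, k) * (k/n)^k * ((n-k)/n)^(n-k)
    total = 0.0
    for k in range(n + 1):
        if 0 < k < n:
            p = k / n
            lik = (p**k) * ((1 - p) ** (n - k))
        else:
            lik = 1.0   # all-zeros / all-ones sequences have ML 1
        total += math.comb(n, k) * lik
    return total

def stochastic_complexity(ones, n):
    # SC(data) = -log P_ML(data) + log C_n   (in nats)
    p = ones / n
    log_ml = (ones * math.log(p) if ones else 0.0) + \
             ((n - ones) * math.log(1 - p) if ones < n else 0.0)
    return -log_ml + math.log(bernoulli_nml_normalizer(n))
```

Even this direct summation is O(n); the computational difficulty the dissertation addresses arises for multinomial and more structured model families, where naive evaluation of the normalizer is exponential.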

Relevance: 20.00%

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classical examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules using statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence, without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measures, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry about whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
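As a concrete illustration of one of the measures named above, a candidate rule X -> A can be scored with Fisher's exact test on the 2x2 table of transaction counts (X holds vs. not, A holds vs. not). The function below is an illustrative sketch of the one-sided hypergeometric tail, not the thesis's search algorithm:

```python
import math

def fisher_p_value(n_xa, n_x, n_a, n):
    # One-sided Fisher p-value for the rule X -> A:
    # probability of seeing at least n_xa transactions containing both
    # X and A, given margins n_x, n_a in n transactions, under
    # independence (hypergeometric tail).
    p = 0.0
    for k in range(n_xa, min(n_x, n_a) + 1):
        p += (math.comb(n_a, k) * math.comb(n - n_a, n_x - k)) \
             / math.comb(n, n_x)
    return p

# Toy data (assumed counts): X and A co-occur in 8 of the 10 X-rows,
# while A holds in 12 of n = 40 rows overall.
p = fisher_p_value(n_xa=8, n_x=10, n_a=12, n=40)
```

A small p-value like the one produced here indicates that the observed co-occurrence is unlikely under independence, which is exactly the notion of genuineness the thesis uses to separate dependency rules from spurious association rules.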

Relevance: 20.00%

Abstract:

Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are both the content providers and the users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or be combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions wisely, applying software architecture and engineering principles, a combined solution can be formulated for certain application-specific use cases. This thesis illustrates these ideas and shows how the solutions can generally be used in real-world use-case scenarios. A case study is used to add further evidence.

Relevance: 20.00%

Abstract:

This work is concerned with presenting a modified theoretical approach to the study of centre-periphery relations in the Russian Federation. In the widely accepted scientific discourse, the Russian federal system under the Yeltsin Administration (1991-2000) was asymmetrical, largely owing to the varying amount of structural autonomy distributed among the federation's 89 constituent units. While providing an improved understanding of which political and socio-economic structures contributed to federal asymmetry, the associated large-N studies have underemphasised the role played by actor agency in re-shaping Russian federal institutions. It is the main task of this thesis to reintroduce and re-emphasise the importance of actor agency as a major contributing element of institutional change in the Russian federal system. By focusing on the strategic agency of regional elites simultaneously within regional and federal contexts, the thesis adopts the position that political, ethnic and socio-economic structural factors alone cannot fully determine the extent to which regional leaders were successful in their pursuit of economic and political pay-offs from the institutionally weakened federal centre. Furthermore, this work hypothesises that under conditions of federal institutional uncertainty, it is the ability of regional leaders to simultaneously interpret various mutable structural conditions and then translate them into plausible strategies which accounts for the regions' ability to extract variable amounts of economic and political pay-offs from the Russian federal system. The thesis finds that while the hypothesis is accurate in its theoretical assumptions, several key conclusions provide paths for further inquiry beyond the initial research question.
First, without reliable information or stable institutions to guide their actions, both regional and federal elites were forced into ad-hoc decision-making in order to maintain their core strategic focus: political survival. Second, instead of attributing asymmetry exclusively to either actor agency or structural factors, the empirical data show that agency and structures interact symbiotically in the strategic formulation process, thus accounting for the sub-optimal nature of several of the actions taken in the adopted cases. Third, as actor agency and structural factors mutate over time, so, too, do the perceived pay-offs from elite competition. In the case of the Russian federal system, the stronger the federal centre became, the less likely it was that regional leaders could extract the high degree of economic and political pay-offs that they had clamoured for earlier in the Yeltsin period. Finally, traditional approaches to the study of federal systems, which focus on institutions as measures of federalism, are not fully applicable in the Russian case precisely because the institutions themselves were a secondary point of contention between competing elites. Institutional equilibria between the regions and Moscow were struck only when highly personalised elite preferences were satisfied. Therefore the Russian federal system is the product of short-term institutional solutions suited to elite survival strategies developed under conditions of economic, political and social uncertainty.

Relevance: 20.00%

Abstract:

Hydrophobins are a group of particularly surface-active proteins. The surface activity is demonstrated in the ready adsorption of hydrophobins to hydrophobic/hydrophilic interfaces such as the air/water interface. Adsorbed hydrophobins self-assemble into ordered films, lower the surface tension of water, and stabilize air bubbles and foams. Hydrophobin proteins originate from filamentous fungi, where the adsorbed hydrophobin films enable the growth of fungal aerial structures, form protective coatings and mediate the attachment of fungi to solid surfaces. This thesis focuses on hydrophobins HFBI, HFBII, and HFBIII from the rot fungus Trichoderma reesei. The self-assembled hydrophobin films were studied both at the air/water interface and on a solid substrate. In particular, using grazing-incidence x-ray diffraction and reflectivity, it was possible to characterize the hydrophobin films directly at the air/water interface. The in situ experiments yielded information on the arrangement of the protein molecules in the films. All the T. reesei hydrophobins were shown to self-assemble into highly crystalline, hexagonally ordered rafts. The thicknesses of these two-dimensional protein crystals were below 30 Å. Similar films were also obtained on silicon substrates. The adsorption of the proteins is likely to be driven by the hydrophobic effect, but the self-assembly into ordered films also involves specific protein-protein interactions. These protein-protein interactions lead to differences in the arrangement of the molecules in the HFBI, HFBII, and HFBIII protein films, as seen in the grazing-incidence x-ray diffraction data. The protein-protein interactions were further probed in solution using small-angle x-ray scattering. Both HFBI and HFBII were shown to form mainly tetramers in aqueous solution. By modifying the solution conditions and thereby the interactions, it was shown that the association was due to the hydrophobic effect.
The stable tetrameric assemblies could tolerate heating and changes in pH. The stability of the structure facilitates the persistence of these secreted proteins in the soil.