23 results for methodologies

in Helda - Digital Repository of the University of Helsinki


Relevance:

10.00%

Publisher:

Abstract:

The thesis focuses on the social interaction and behavior of the homeless living in Tokyo's Taito Ward. The study is based on the author's own ethnographic field research, carried out in the autumn of 2003. The chosen methods were variants of participant observation, applied according to context. The fieldwork lasted from mid-August to the beginning of October 2003. The most important targets of the research were three separate, loosely knit groups based in different parts of Taito Ward: one in the proximity of Ueno train station, one that gathered every morning around a homeless support organization called San'yûkai, and one in Tamahime Park, located in the old San'ya area of Tokyo. The analysis draws on Takie Sugiyama Lebra's theory of "social relativism". Lebra's theory consists of the following, arguably universal, aspects: belongingness, empathy, dependence, place in society, and reciprocity. In addition, all interaction and behavior are tied to the context and the situation. According to Lebra, ritual and intimate situations produce similar action, which is socially relative; of these, ritual behavior is governed by more regulated norms, while intimate behavior is less spontaneous. By contrast, an anomic situation produces anomic behavior, which is not socially relative. The author critically reviews Lebra's theory and attempts to modify it to make it more adaptable to present-day society and to the analysis. Erving Goffman's views on social interaction and Anthony Giddens' theories of social structures serve as a complementary theoretical basis. 
The aim of the thesis is to clarify how and why the interaction and behavior of some homeless individuals follow the aspects of Lebra's "social relativism" in some situations, and why in other situations they do not. In the latter cases, explanations can be sought in regional and individual differences, or in the theory's inaptness for analyzing the situation in question. A significant factor here is the major finding of the field study: the so-called "homeless etiquette", an abstract set of norms and values that influences the social interaction and behavior of the homeless, and with which many of the homeless individuals presented in the study complied. The fundamental goal of the thesis is to reach a profound understanding of the daily lives of the homeless people studied. The author argues that such an understanding is necessary when looking for sustainable social and housing policy solutions to improve the position of the homeless and the qualitative functioning of society.

Relevance:

10.00%

Publisher:

Abstract:

A population-based early detection program for breast cancer has been in progress in Finland since 1987. Under the regulations in force during the study period 1987-2001, free-of-charge mammography screening was offered every second year to women aged 50-59 years. Recently, it was decided to extend the screening service to the age group 50-69. However, the scope of the program is still frequently discussed in public, and information about the potential impacts of changes in mass-screening practice on the future breast cancer burden is required. The aim of this doctoral thesis is to present methodologies for taking mass-screening invitation information into account in breast cancer burden predictions, and to present alternative breast cancer incidence and mortality predictions up to 2012 based on scenarios of future screening policy. The focus of this work is not on assessing the absolute efficacy but the effectiveness of mass-screening and, by utilizing the data on invitations, on showing the estimated impacts of changes in an existing screening program on short-term predictions. The breast cancer mortality predictions are calculated using a model that combines incidence, cause-specific survival and other-cause survival at the individual level. The screening invitation data are incorporated into the modeling of breast cancer incidence and survival by dividing the program into separate components (first and subsequent rounds and the years within them, breaks, and the post-screening period) and defining a variable that gives the component of the screening program. Incidence is modeled using a Poisson regression approach, and breast cancer survival by applying a parametric mixture cure model, in which the patient population is allowed to be a combination of cured and uncured patients. The patients' risk of dying from causes other than breast cancer is allowed to differ from that of a corresponding general population group and to depend on age and follow-up time. 
As a result, the effects of the separate components of the screening program on incidence, the proportion of cured patients and the survival of the uncured are quantified. According to the predictions, the impacts of policy changes, such as extending the program from age group 50-59 to 50-69, are clearly visible in incidence, while the effects on mortality in age group 40-74 are minor. Extending the screening service would increase the incidence of localized breast cancer but decrease the rates of non-localized breast cancer. There were no major differences between the mortality predictions yielded by alternative future scenarios of the screening policy: any policy change would yield at most a 3.0% reduction in overall breast cancer mortality in the near future compared to continuing the current practice.
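The incidence side of the approach can be sketched in a few lines. The following is a minimal, self-contained illustration of a log-linear Poisson regression with a categorical "screening component" covariate, in the spirit of the abstract; the data, the three-level component coding and the effect sizes are all invented for the sketch, not taken from the thesis.

```python
import numpy as np

# Toy Poisson regression for incidence with a categorical screening-program
# component (0 = first round, 1 = subsequent round, 2 = post-screening).
# All data are synthetic; the coding mirrors the idea, not the thesis's model.
rng = np.random.default_rng(0)
n = 3000
component = rng.integers(0, 3, n)
X = np.column_stack([np.ones(n),
                     (component == 1).astype(float),
                     (component == 2).astype(float)])
beta_true = np.array([1.0, 0.4, -0.2])
y = rng.poisson(np.exp(X @ beta_true))          # simulated case counts

# Fit by iteratively reweighted least squares (the standard GLM algorithm).
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                      # Poisson working weights
    z = X @ beta + (y - mu) / mu                # working response (log link)
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.round(beta, 2))                        # close to beta_true
```

Exponentiating the component coefficients gives incidence rate ratios relative to the first screening round, which is how such component effects are usually read.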

Relevance:

10.00%

Publisher:

Abstract:

The Baltic countries share public health problems typical of most Eastern European transition economies: morbidity and mortality from non-communicable diseases are higher than in Western European countries. The situation bears many similarities to that of a neighbouring country, Finland, in the late 1960s. There are reasons to expect that health disadvantage may be increasing among the less advantaged population groups in the Baltic countries. The evidence on social differences in health in the Baltic countries is, however, scattered across studies using different methodologies, making comparisons difficult. This study aims to bridge the evidence gap by providing comparable, standardized cross-sectional and time-trend analyses of the social patterning of variation in health and two key health behaviours, smoking and drinking, in Estonia, Latvia, Lithuania and Finland in 1994-2004, representing Eastern European transition countries and a stable Western European country. The data consisted of similar cross-sectional postal surveys conducted in 1994, 1996, 1998, 2000, 2002 and 2004 on adult populations (aged 20-64 years) in Estonia (n=9049), Latvia (n=7685), Lithuania (n=11634) and Finland (n=18821) in connection with the Finbalt Health Monitor project. The main statistical method was logistic regression analysis. Perceived health was found to be worse among both men and women in the Baltic countries than in Finland. Poor health was associated with older age and lower education in all countries studied. Urbanization and marital status were not consistently related to health. The existing educational inequalities in health remained generally stable from 1994 to 2004. In the Baltic countries, however, improvement in perceived health was mainly found among the better-educated men and women. Daily smoking was associated with young age, lower education and psychological distress in all countries. 
Among women, smoking was also associated with urbanisation in all countries except Estonia. Among Lithuanian women, the educational gradient in smoking was weakest, and the overall prevalence of smoking increased over time. Drinking was generally associated with young age among men and women, and with education among women: better-educated women were more often frequent drinkers, and less-educated women binge drinkers. The exception was Latvia, where both frequent drinking and binge drinking were associated with low education among men and women alike. In conclusion, the Baltic countries are likely to resemble Western European countries rather than other transition societies. While health inequalities did not markedly change, substantial inequalities remain, and there were indications of favourable developments mainly among the better educated. Pressures towards increasing health inequalities may therefore become visible in the future, which would be in accordance with the results on smoking and drinking in this study.
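The abstract names logistic regression as the main statistical method; the sketch below shows that method on synthetic data, regressing daily smoking on two invented binary covariates (young age, low education). Variable names and effect sizes are illustrative assumptions, not estimates from the Finbalt surveys.

```python
import numpy as np

# Toy logistic regression of daily smoking on age group and education.
# Data and coefficients are synthetic, for illustration only.
rng = np.random.default_rng(1)
n = 5000
young = rng.integers(0, 2, n).astype(float)     # 1 = younger age group
low_edu = rng.integers(0, 2, n).astype(float)   # 1 = lower education
X = np.column_stack([np.ones(n), young, low_edu])
beta_true = np.array([-1.5, 0.6, 0.8])          # both covariates raise the odds
p = 1 / (1 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p)                          # 1 = daily smoker

# Newton-Raphson (equivalently IRLS) for the logistic likelihood.
beta = np.zeros(3)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-(X @ beta)))
    W = p_hat * (1 - p_hat)                     # logistic working weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p_hat))

odds_ratios = np.exp(beta[1:])
print(np.round(odds_ratios, 2))  # roughly exp(0.6) ~ 1.8 and exp(0.8) ~ 2.2
```

The exponentiated coefficients are odds ratios, the form in which associations like "smoking was associated with young age and lower education" are typically reported.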

Relevance:

10.00%

Publisher:

Abstract:

In this study, novel methodologies for the determination of antioxidative compounds in herbs and beverages were developed. Antioxidants are compounds that can reduce, delay or inhibit oxidative events. They are part of the human defence system and are obtained through the diet. Antioxidants are naturally present in several types of food, e.g. fruits, beverages, vegetables and herbs. They can also be added to foods during manufacturing to suppress lipid oxidation and the formation of free radicals during cooking or storage, and to reduce the concentration of free radicals in vivo after ingestion. There is growing interest in natural antioxidants, and effective compounds have already been identified in antioxidant classes such as carotenoids, essential oils, flavonoids and phenolic acids. The wide variety of sample matrices and analytes presents quite a challenge for the development of analytical techniques, and growing demands have been placed on sample pretreatment. In this study, three novel extraction techniques were examined: supercritical fluid extraction (SFE), pressurised hot water extraction (PHWE) and dynamic sonication-assisted extraction (DSAE). SFE was used for the extraction of lycopene from tomato skins, PHWE for the extraction of phenolic compounds from sage, and DSAE for the extraction of phenolic acids from Lamiaceae herbs. In the development of the extraction methodologies, the main extraction parameters were studied and the recoveries were compared to those achieved by conventional extraction techniques. In addition, the stability of lycopene was monitored under different storage conditions. For the separation of the antioxidative compounds in the extracts, liquid chromatographic (LC) methods were utilised. 
Two novel LC techniques, ultra performance liquid chromatography (UPLC) and comprehensive two-dimensional liquid chromatography (LCxLC), were studied and compared with conventional high performance liquid chromatography (HPLC) for the separation of antioxidants in beverages and Lamiaceae herbs. In LCxLC, the selection of LC mode, column dimensions and flow rates was studied and optimised to obtain efficient separation of the target compounds. In addition, the separation powers of HPLC, UPLC, HPLCxHPLC and HPLCxUPLC were compared. To exploit the benefits of an integrated system, in which sample preparation and final separation are performed in a closed unit, dynamic sonication-assisted extraction was coupled on-line to a liquid chromatograph via a solid-phase trap. The increased sensitivity was exploited in the extraction of phenolic acids from Lamiaceae herbs, and the results were compared to those achieved by the LCxLC system.

Relevance:

10.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC, and their application in studies on qualitative and quantitative aspects of GC×GC analysis, with environmental samples used as model samples. The instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as the coolant. Two-step trapping was achieved by using a motor to rotate the nozzle spraying the carbon dioxide. The fastest rotation and highest modulation frequency were achieved with a permanent-magnet motor, and modulation was most accurate when the motor was controlled by a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as the coolant. With the modulators developed in this study, the narrowest peaks were 75 ms wide at the base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature of the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC with time-of-flight mass spectrometry. 
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well for the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the technique's minor drawbacks. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
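The "basic operations" of GC×GC data analysis rest on one simple transformation: the detector's one-dimensional trace is folded into a 2D plane using the modulation period, so that each column is one second-dimension chromatogram. The sketch below illustrates that folding; the sampling rate, modulation period and synthetic peak are assumed values, not the parameters of the instrument described in the thesis.

```python
import numpy as np

# Fold a 1D GC×GC detector trace into a 2D retention plane.
sampling_hz = 100                  # detector samples per second (assumed)
modulation_s = 4                   # modulation period in seconds (assumed)
samples_per_mod = sampling_hz * modulation_s

t = np.arange(0, 40, 1 / sampling_hz)           # 40 s of signal
signal = np.exp(-((t - 17.3) ** 2) / 0.02)      # one narrow synthetic peak

n_mod = len(signal) // samples_per_mod
plane = signal[: n_mod * samples_per_mod].reshape(n_mod, samples_per_mod).T
# plane[i, j]: second-dimension time i within a modulation, modulation number j

i, j = np.unravel_index(np.argmax(plane), plane.shape)
print(f"peak: 1D retention ~{j * modulation_s} s, "
      f"2D retention ~{i / sampling_hz:.2f} s")   # ~16 s and ~1.30 s
```

Peak heights and volumes, as well as the overlay comparison of 2D plots, are then straightforward array operations on `plane`.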

Relevance:

10.00%

Publisher:

Abstract:

Elucidating the mechanisms responsible for the patterns of species abundance, diversity, and distribution within and across ecological systems is a fundamental research focus in ecology. Species abundance patterns are shaped in a convoluted way by the interplay between inter- and intra-specific interactions, environmental forcing, demographic stochasticity, and dispersal. Comprehensive models and suitable inferential and computational tools for teasing out these different factors are quite limited, even though such tools are critically needed to guide the implementation of management and conservation strategies, whose efficacy rests on a realistic evaluation of the underlying mechanisms. This is all the more true in the prevailing context of concern over the progress of climate change and its potential impacts on ecosystems. This thesis utilized the flexible hierarchical Bayesian modelling framework, in combination with the computer-intensive methods known as Markov chain Monte Carlo (MCMC), to develop methodologies for identifying and evaluating the factors that control the structure and dynamics of ecological communities. These methodologies were used to analyze data from a range of taxa: macro-moths (Lepidoptera), fish, crustaceans, birds, and rodents. Environmental stochasticity emerged as the most important driver of community dynamics, followed by density-dependent regulation; the influence of inter-specific interactions on community-level variances was broadly minor. This thesis contributes to the understanding of the mechanisms underlying the structure and dynamics of ecological communities by showing directly that environmental fluctuations, rather than inter-specific competition, dominate the dynamics of several systems. This finding emphasizes the need to better understand how species are affected by the environment and to acknowledge species differences in their responses to environmental heterogeneity, if we are to effectively model and predict their dynamics (e.g. for management and conservation purposes). The thesis also proposes a model-based approach to integrating the niche and neutral perspectives on community structure and dynamics, making it possible to evaluate the relative importance of each category of factors in light of field data.
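The inferential machinery named above, MCMC, can be illustrated in its simplest form: a random-walk Metropolis sampler for a single parameter. The toy below samples the posterior mean of synthetic log-abundances under a flat prior; a real hierarchical community model would have many more levels and parameters, and the data here are invented.

```python
import numpy as np

# Random-walk Metropolis, the simplest MCMC algorithm, on a toy problem.
rng = np.random.default_rng(2)
data = rng.normal(loc=2.0, scale=0.5, size=200)   # synthetic log-abundances

def log_post(mu):
    # Gaussian likelihood with known sd = 0.5 and an improper flat prior on mu
    return -0.5 * np.sum((data - mu) ** 2) / 0.5 ** 2

chain = np.empty(5000)
mu = 0.0
for k in range(len(chain)):
    prop = mu + rng.normal(scale=0.1)             # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                 # Metropolis accept step
    chain[k] = mu

posterior = chain[1000:]                          # discard burn-in
print(round(posterior.mean(), 2))                 # near the sample mean of data
```

Hierarchical Bayesian software builds exactly this kind of chain, only over hundreds of species- and community-level parameters at once.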

Relevance:

10.00%

Publisher:

Abstract:

Many species inhabit fragmented landscapes, resulting from either anthropogenic or natural processes. The ecological and evolutionary dynamics of spatially structured populations are affected by a complex interplay between endogenous and exogenous factors. The metapopulation approach, which simplifies the landscape to a discrete set of patches of breeding habitat surrounded by unsuitable matrix, has become a widely applied paradigm for the study of species inhabiting highly fragmented landscapes. In this thesis, I focus on the construction of biologically realistic models and their parameterization with empirical data, with the general objective of understanding how the interactions between individuals and their spatially structured environment affect ecological and evolutionary processes in fragmented landscapes. I study two hierarchically structured model systems: the Glanville fritillary butterfly in the Åland Islands and a system of two interacting aphid species in the Tvärminne archipelago, both located in south-western Finland. The interesting and challenging feature of both study systems is that the population dynamics occur over multiple spatial scales that are linked by various processes. My main emphasis is on the development of mathematical and statistical methodologies. For the Glanville fritillary case study, I first build a Bayesian framework for the estimation of death rates and capture probabilities from mark-recapture data, with the novelty of accounting for variation among individuals in capture probabilities and survival. I then characterize the dispersal phase of the butterflies by deriving a mathematical approximation of a diffusion-based movement model applied to a network of patches. I use the movement model as a building block to construct an individual-based evolutionary model for the Glanville fritillary butterfly metapopulation. 
I parameterize the evolutionary model using a pattern-oriented approach, and use it to study how the landscape structure affects the evolution of dispersal. For the aphid case study, I develop a Bayesian model of hierarchical multi-scale metapopulation dynamics, where the observed extinction and colonization rates are decomposed into intrinsic rates operating specifically at each spatial scale. In summary, I show how analytical approaches, hierarchical Bayesian methods and individual-based simulations can be used individually or in combination to tackle complex problems from many different viewpoints. In particular, hierarchical Bayesian methods provide a useful tool for decomposing ecological complexity into more tractable components.
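The extinction-colonization dynamics that such models decompose can be sketched with a minimal stochastic patch-occupancy simulation in the Levins tradition: patches flip between empty and occupied, with colonisation scaled by current occupancy. The patch count and the colonisation and extinction probabilities below are arbitrary illustrative constants, not estimates from the Glanville fritillary or aphid data.

```python
import numpy as np

# Minimal stochastic patch-occupancy metapopulation simulation.
rng = np.random.default_rng(5)
n_patches, n_years = 50, 200
c, e = 0.5, 0.1                        # colonisation / extinction probabilities
occ = rng.uniform(size=n_patches) < 0.5   # initial occupancy pattern
history = np.empty(n_years)
for t in range(n_years):
    p = occ.mean()                         # current occupancy fraction
    colonised = ~occ & (rng.uniform(size=n_patches) < c * p)
    went_extinct = occ & (rng.uniform(size=n_patches) < e)
    occ = (occ | colonised) & ~went_extinct
    history[t] = occ.mean()

# Deterministic Levins equilibrium is 1 - e/c = 0.8; the stochastic
# trajectory fluctuates around it.
print(round(history[-50:].mean(), 2))
```

Hierarchical multi-scale versions of this process, as in the aphid case study, let `c` and `e` themselves vary across spatial scales and are fitted to observed turnover data.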

Relevance:

10.00%

Publisher:

Abstract:

This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is the set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage of the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes, and previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area; in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns (Tornio, Haparanda and Piteå) located in the centre of the area of perceptibility. The event thus posed the indirect hazard of fire, although its magnitude, around 4.6, was minor on a global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of ground shaking in the soft soils of the coast and river valleys, where most of the population was found. 
The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies for dealing with macroseismic intensity. The data set was evaluated using correspondence analysis, and different approaches, such as gridding, were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration, and a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and the instrumental era was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate the seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
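The "gridding" step mentioned above can be sketched with one common interpolator: inverse-distance weighting of irregularly spaced intensity data points (IDPs) onto a regular grid. The coordinates, attenuation and weighting power are invented for the sketch; the thesis's actual gridding choices may differ, and note that intensity is treated here as numeric even though the text argues it is better regarded as ordinal.

```python
import numpy as np

# Interpolate irregular IDPs onto a regular grid by inverse-distance weighting.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(40, 2))            # IDP locations (km, synthetic)
epicentre = np.array([50.0, 50.0])
dist = np.linalg.norm(xy - epicentre, axis=1)
intensity = np.clip(6.5 - 0.05 * dist, 1, None)   # toy attenuation with distance

def idw(points, values, grid_xy, power=2.0):
    # Weighted average with weights 1/d^power; the epsilon guards d = 0.
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(xy, intensity, grid)
print(round(field.min(), 1), round(field.max(), 1))
```

Because IDW is a convex weighted average, the interpolated field never exceeds the observed intensity range, one reason it is a conservative first choice for macroseismic fields.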

Relevance:

10.00%

Publisher:

Abstract:

To achieve efficient fusion energy production, the plasma-facing wall materials of a fusion reactor should allow long-term operation. In the next-step fusion device, ITER, the first-wall region facing the highest heat and particle loads, the divertor area, will mainly consist of tiles based on tungsten. During reactor operation, the tungsten material is slowly but inevitably saturated with tritium, the relatively short-lived hydrogen isotope used in the fusion reaction. The amount of tritium retained in the wall materials should be minimized, and its recycling back to the plasma must be unrestrained; otherwise it cannot be used for fuelling the plasma. A very expensive, and thus economically unviable, solution would be to replace the first wall frequently. A better solution is to heat the walls to temperatures at which tritium is released. Unfortunately, the exact mechanisms of hydrogen release in tungsten are not known. In this thesis, both experimental and computational methods have been used to study the release and retention of hydrogen in tungsten. The experimental work consists of hydrogen implantations into pure polycrystalline tungsten, the determination of the hydrogen concentrations using ion beam analysis (IBA), and the monitoring of the out-diffused hydrogen gas with thermodesorption spectrometry (TDS) as the tungsten samples are heated to elevated temperatures. Combining IBA methods with TDS yields the retained amount of hydrogen as well as the temperatures needed for its release. With computational methods, the hydrogen-defect interactions and implantation-induced irradiation damage can be examined at the atomic level. Multiscale modelling combines the results obtained from computational methodologies applicable at different length and time scales. 
Density functional theory calculations were used to determine the energetics of the elementary processes of hydrogen in tungsten, such as diffusion and trapping at vacancies and surfaces. Results on the energetics of pure tungsten defects were used in the development of a classical bond-order potential describing tungsten defects for molecular dynamics simulations. The developed potential was utilized to determine defect clustering and annihilation properties. These results were further employed in binary collision and rate theory calculations to determine the evolution of the large defect clusters that trap hydrogen in the course of implantation. The computational results for the defect and trapped hydrogen concentrations compared successfully with the experimental results. With the multiscale analysis described above, the experimental results of this thesis and those found in the literature were explained both quantitatively and qualitatively.
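The quantity TDS measures can be illustrated with a back-of-envelope model of first-order thermal desorption: trapped hydrogen is released at an Arrhenius rate during a linear temperature ramp, producing a desorption peak whose position reflects the detrapping energy. The attempt frequency and the 1.4 eV detrapping energy below are generic textbook-scale assumptions, not the tungsten parameters determined in the thesis.

```python
import numpy as np

# First-order thermal desorption during a linear temperature ramp.
k_B = 8.617e-5        # Boltzmann constant, eV/K
nu = 1e13             # attempt frequency, 1/s (assumed)
E_t = 1.4             # detrapping energy, eV (assumed)
ramp = 1.0            # heating rate, K/s

T = np.arange(300.0, 1300.0, 1.0)   # 1 K temperature steps of the ramp
N = np.empty_like(T)                # trapped inventory, normalised to 1
n = 1.0
for i, Ti in enumerate(T):
    rate = nu * np.exp(-E_t / (k_B * Ti))   # Arrhenius release rate
    n *= np.exp(-rate / ramp)               # fraction surviving this 1 K step
    N[i] = n

release = -np.diff(N)                       # desorption flux per K
T_peak = T[1:][np.argmax(release)]
print(f"desorption peak near {T_peak:.0f} K")
```

With these assumed parameters the Redhead relation puts the peak near 500 K; in the inverse direction, measured peak temperatures constrain the detrapping energies of the defects identified in the multiscale modelling.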

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics applicable to evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models the traditional residuals, often referred to as Pearson residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance tests and are theoretically sound in that they properly take into account the uncertainty caused by parameter estimation. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived from it. 
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4, the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze's martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds are obtained for histogram-type plots as well as for Quantile-Quantile and Probability-Probability plots of quantile residuals. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests, and also show how the tests and related graphical tools based on residuals are applied in practice.
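The defining computation behind quantile residuals is short: a quantile residual is the standard-normal quantile of the model's conditional CDF evaluated at the observation, r = Phi^{-1}(F(y)), and under a correctly specified model the residuals are approximately iid N(0, 1). The sketch below demonstrates this for a two-component Gaussian mixture with known parameters, a deliberately simple stand-in for the mixture time series models discussed above; all parameter values are invented.

```python
import numpy as np
from scipy import stats

# Simulate from a two-component Gaussian mixture with known parameters.
rng = np.random.default_rng(4)
w, mu1, mu2, s1, s2 = 0.7, 0.0, 3.0, 1.0, 0.5
comp = rng.uniform(size=2000) < w
y = np.where(comp, rng.normal(mu1, s1, 2000), rng.normal(mu2, s2, 2000))

# Mixture CDF at each observation, then the probit transform.
F = w * stats.norm.cdf(y, mu1, s1) + (1 - w) * stats.norm.cdf(y, mu2, s2)
r = stats.norm.ppf(F)                 # quantile residuals

# Under the true model the residuals look standard normal, unlike Pearson
# residuals, which are misleading for mixture models.
print(round(r.mean(), 2), round(r.std(), 2))
```

The thesis's diagnostic tests are then tests of normality, serial correlation and conditional heteroscedasticity applied to `r`, with an added correction for the fact that in practice the parameters are estimated rather than known.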

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study is to examine how transformation is defining feminist bioethics and to determine the nature of this transformation. Behind the quest for transformation is core feminism and its political implications, namely, that women and other marginalized groups have been given unequal consideration in society and the sciences, and that this situation is unacceptable and should be remedied. The goal of the dissertation is to determine how feminist bioethicists integrate this transformation into their respective fields and how they apply the potential of feminism to bioethical theories and practice. On the theoretical level, feminist bioethicists wish to reveal how current ways of knowing are based on inequality. Feminists pay special attention to communal and political contexts and to the power relations endorsed by each community. In addition, feminist bioethicists endorse relational ethics, a relational account of the self in which the interconnectedness of persons is important. On the conceptual level, feminist bioethicists work with the beliefs, concepts, and practices that give us our world. As an example, I examine how feminist bioethicists have criticized and redefined the concept of autonomy. Feminist bioethicists emphasize relational autonomy, which is based on the conviction that social relationships shape moral identities and values. On the practical level, I discuss stem cell research as a test case for feminist bioethics and its ability to employ its methodologies. Analyzing these perspectives allowed me, first, to compare non-feminist and feminist accounts of stem cell ethics and, second, to analyze feminist perspectives on this novel biotechnology. Along with offering a critical evaluation of the stem cell debate, the study shows that sustainable stem cell policies should be grounded in empirical knowledge about how donors perceive stem cell research and the donation process. 
The study indicates that feminist bioethics should develop the use of empirical bioethics, which takes the nature of ethics seriously: ethical decisions are provisional and open to further consideration. In addition, the study shows that there is another area for development in feminist bioethics: the understanding of (moral) agency. I argue that agency should be understood to mean that actions create desires.

Relevance:

10.00%

Publisher:

Abstract:

Causation is still poorly understood in strategy research, and confusion prevails around key concepts such as competitive advantage. In this paper, we define epistemological conditions that help to dispel some of this confusion and provide a basis for more developed approaches. In particular, we argue that a counterfactual approach – one that builds on a systematic analysis of ‘what-if’ questions – can advance our understanding of key causal mechanisms in strategy research. We offer two concrete methodologies – counterfactual history and causal modeling – as useful solutions, and we show that these methodologies open up new avenues in research on competitive advantage. Counterfactual history can add to our understanding of the context-specific construction of resource-based competitive advantage and path dependence, while causal modeling can help to reconceptualize the relationships between resources and performance. In particular, resource properties can be regarded as mediating mechanisms in these causal relationships.
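The mediation claim in the final sentence can be made concrete with a toy linear causal model in which resources affect performance only through a resource property. All variable names and coefficients below are invented for illustration; the paper itself works at the level of epistemology, not simulation.

```python
import numpy as np

# Toy linear causal chain: resource -> resource property -> performance,
# with no direct resource -> performance path.
rng = np.random.default_rng(6)
n = 10000
resource = rng.normal(size=n)
prop = 0.8 * resource + rng.normal(scale=0.5, size=n)   # mediating mechanism
perf = 0.6 * prop + rng.normal(scale=0.5, size=n)       # no direct path

# Total effect of resource on performance (simple regression slope)...
total = np.polyfit(resource, perf, 1)[0]
# ...vanishes once the mediator is controlled for (partial regression slope).
X = np.column_stack([np.ones(n), resource, prop])
direct = np.linalg.lstsq(X, perf, rcond=None)[0][1]
print(round(total, 2), round(direct, 2))   # total ~ 0.8 * 0.6 = 0.48, direct ~ 0
```

This is the signature of full mediation: the resource-performance association is real, but it is carried entirely by the resource property, which is the reconceptualization the paper proposes.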
