13 results for Goodness-of-fit test
in Helda - Digital Repository of University of Helsinki
Abstract:
Objectives. The sentence span task is a complex working memory span task used for estimating total working memory capacity for both processing (sentence comprehension) and storage (remembering a set of words). Several traditional models of working memory suggest that performance on these tasks relies on phonological short-term storage. However, long-term memory effects, as well as the effects of expertise and strategies, have challenged this view. This study uses a working memory task that aids the creation of retrieval structures in the form of stories, which have been shown to form integrated structures in long-term memory (LTM). The research question is whether sentence and story contexts boost memory performance in a complex working memory task. The hypothesis is that storage of the words in the task takes place in long-term memory. Evidence of this would be better recall for words as parts of sentences than for separate words and, particularly, a beneficial effect for words that are part of an organized story.

Methods. Twenty stories consisting of five sentences each were constructed, and the stimuli in all experimental conditions were based on these sentences and sentence-final words, reordered and recombined for the other conditions. Participants read aloud sets of five sentences that either formed a story or did not. In one condition they had to report all the last words at the end of the set; in another, they memorised an additional separate word with each sentence. The sentences were presented on the screen one word at a time (500 ms). After the presentation of each sentence, the participant verified a statement about the sentence. After five sentences, the participant repeated back the words in their correct positions. Experiment 1 (n=16) used immediate recall; Experiment 2 (n=21) used both immediate recall and recall after a distraction interval (the operation span task). In Experiment 2 a distracting mental arithmetic task was presented instead of recall in half of the trials, and an individual word was added before each sentence in the two experimental conditions in which the participants were to memorize the sentence-final words. Subjects also performed a listening span task (Experiment 1) or an operation span task (Experiment 2) to allow comparison of the estimated span and performance in the story task. Results were analysed using correlations, repeated measures ANOVA and a chi-square goodness-of-fit test on the distribution of errors.

Results and discussion. Both the relatedness of the sentences (the story condition) and the inclusion of the words in sentences helped memory. An interaction showed that the story condition had a greater effect on last words than on separate words. The beneficial effect of the story was seen in all serial positions. The effects remained in delayed recall. When the sentences formed stories, performance in verifying the statements about sentence context was better. This, as well as the differing distributions of errors across experimental conditions, suggests that different levels of representation are in use in the different conditions. In the story condition, these representations could take the form of an organized memory structure, a situation model. The other working memory tasks had only a few weak correlations with the story task, which could indicate that different processes are in use in the tasks. The results do not support short-term phonological storage but are instead compatible with the words being encoded into LTM during the task.
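The chi-square goodness-of-fit test mentioned above compares observed error counts with the counts expected under a null distribution. A minimal sketch in Python (the counts and the uniform null below are hypothetical placeholders, not the study's data):

    # Chi-square goodness-of-fit test on an error distribution.
    # Observed counts and the uniform null hypothesis are hypothetical
    # placeholders, not data from the study.
    from scipy.stats import chisquare

    observed = [34, 21, 12, 8]                          # error counts per category
    total = sum(observed)
    expected = [total / len(observed)] * len(observed)  # uniform null

    chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")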
Design and testing of stand-specific bucking instructions for use on modern cut-to-length harvesters
Abstract:
This study addresses three important issues in tree bucking optimization in the context of cut-to-length harvesting. (1) Would the fit between the log demand and log output distributions be better if the price and/or demand matrices controlling the bucking decisions on modern cut-to-length harvesters were adjusted to the unique conditions of each individual stand? (2) In what ways can we generate stand- and product-specific price and demand matrices? (3) What alternatives do we have for measuring the fit between the log demand and log output distributions, and what would be an ideal goodness-of-fit measure? Three iterative search systems were developed for seeking stand-specific price and demand matrix sets: (1) a fuzzy logic control system for calibrating the price matrix of one log product for one stand at a time (the stand-level one-product approach); (2) a genetic algorithm system for adjusting the price matrices of one log product in parallel for several stands (the forest-level one-product approach); and (3) a genetic algorithm system for dividing the overall demand matrix of each of several log products into stand-specific sub-demands simultaneously for several stands and products (the forest-level multi-product approach). The stem material used for testing the performance of the stand-specific price and demand matrices against that of the reference matrices comprised 9,155 Norway spruce (Picea abies (L.) Karst.) sawlog stems gathered by harvesters from 15 mature spruce-dominated stands in southern Finland. The reference price and demand matrices were either direct copies or slightly modified versions of those used by two Finnish sawmilling companies. Two types of stand-specific bucking matrices were compiled for each log product: one from the harvester-collected stem profiles and the other from the pre-harvest inventory data. Four goodness-of-fit measures were analyzed for their appropriateness in determining the similarity between the log demand and log output distributions: (1) the apportionment degree (index), (2) the chi-square statistic, (3) the Laspeyres quantity index, and (4) the price-weighted apportionment degree. The study confirmed that any improvement in the fit between the log demand and log output distributions can only be realized at the expense of the log volumes produced. Stand-level pre-control of price matrices was found to be advantageous, provided the control is done with perfect stem data. Forest-level pre-control of price matrices resulted in no improvement in the cumulative apportionment degree. Cutting stands under the control of stand-specific demand matrices yielded a better total fit between the demand and output matrices at the forest level than was obtained by cutting each stand with non-stand-specific reference matrices. The theoretical and experimental analyses suggest that none of the three alternative goodness-of-fit measures clearly outperforms the traditional apportionment degree measure.
Keywords: harvesting, tree bucking optimization, simulation, fuzzy control, genetic algorithms, goodness-of-fit
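For reference, the apportionment degree named above is commonly computed as the sum of the element-wise minima of the demand and output proportion matrices; assuming that formulation, a short Python sketch with illustrative proportions (not the study's data):

    import numpy as np

    # Demand and output proportion matrices (rows: log lengths,
    # columns: top diameter classes); values are illustrative only.
    demand = np.array([[0.10, 0.15, 0.05],
                       [0.20, 0.25, 0.25]])
    output = np.array([[0.12, 0.10, 0.08],
                       [0.18, 0.30, 0.22]])

    # Apportionment degree, assuming the common sum-of-minima formulation
    # (100 = perfect fit between demand and output distributions).
    apportionment_degree = 100 * np.minimum(demand, output).sum()

    # Chi-square statistic between the output and demand proportions.
    chi2 = ((output - demand) ** 2 / demand).sum()

    print(f"apportionment degree = {apportionment_degree:.1f}")
    print(f"chi-square statistic = {chi2:.4f}")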
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2–4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
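One common way to write the autoregressive probit structure discussed above is pi_t = omega + alpha*pi_{t-1} + beta*x_{t-1}, with P(y_t = 1) = Phi(pi_t). The sketch below assumes that specification; the parameter values are illustrative, not the thesis's estimates:

    import numpy as np
    from scipy.stats import norm

    # Autoregressive probit sketch: the linear predictor pi_t carries its
    # own lag. Specification and parameter values are assumptions for
    # illustration, not estimates from the thesis.
    omega, alpha, beta = -0.3, 0.8, -0.5

    def recession_probabilities(x, pi0=0.0):
        """Return P(y_t = 1) for a series of lagged predictor values x."""
        pi, probs = pi0, []
        for x_lag in x:
            pi = omega + alpha * pi + beta * x_lag
            probs.append(norm.cdf(pi))
        return np.array(probs)

    spread = np.array([1.2, 0.8, 0.1, -0.4, -0.2])  # hypothetical term spread
    print(recession_probabilities(spread))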
Abstract:
Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need to investigate in vitro ADME (absorption, distribution, metabolism and excretion) properties and to recognise unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, a widely used in vitro model for small intestinal absorption. A cocktail of 7–10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Second, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which form one of the most important families of phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments. The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited both strengths and weaknesses in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient, and a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
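In substrate-depletion experiments of the kind used for metabolic stability, intrinsic clearance is commonly derived from the depletion half-life as Clint = (ln 2 / t1/2) x (incubation volume / protein amount). A sketch under that common formulation, with hypothetical values rather than the thesis's data:

    import numpy as np

    # Cl_int from substrate depletion, using the common half-life approach.
    # All values are hypothetical, not data from the thesis.
    time_min = np.array([0, 5, 10, 20, 30])
    fraction = np.array([1.00, 0.78, 0.60, 0.37, 0.22])  # substrate remaining

    k = -np.polyfit(time_min, np.log(fraction), 1)[0]    # depletion rate (1/min)
    t_half = np.log(2) / k

    volume_ul = 250.0    # incubation volume (uL), hypothetical
    protein_mg = 0.125   # protein in incubation (mg), hypothetical
    cl_int = k * volume_ul / protein_mg                  # uL/min/mg protein

    print(f"t1/2 = {t_half:.1f} min, Cl_int = {cl_int:.1f} uL/min/mg")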
Abstract:
Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology, Seinäjoki Central Hospital, and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients at the group level. However, the practice effects in cognitive tests were less pronounced among MS patients than among healthy controls. At the individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment, and 69% of patients were correctly classified as cognitively impaired or unimpaired when the comprehensive neuropsychological assessment was used as the "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
Abstract:
Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer a possibility for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes has been virtually missing. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed at people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40–64 at baseline), overweight (body mass index > 25 kg/m2) men (n = 172) and women (n = 350) with impaired glucose tolerance into the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or to the control group. The control group received general dietary and exercise advice at baseline and had an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling from a nutritionist. They were also offered circuit-type resistance training sessions and advised to increase their overall physical activity. The intervention goals were to reduce body weight (a 5% or greater reduction from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and increase dietary fibre intake (15 g/1000 kcal or more) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by a repeated 75 g oral glucose tolerance test. The first analysis of end-points was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for the annual examinations, for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with the achievement of the lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants compared with the former control group participants.
To develop a simple screening tool to identify individuals who are at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35–64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug Register. Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort the optimal cut-off point of the risk score identified 78% of those who developed diabetes during the follow-up (= sensitivity of the test) and 77% of those who remained free of diabetes (= specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with the baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of the FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme for preventing type 2 diabetes that could be implemented in the primary health care system.
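The sensitivity and specificity reported above correspond to a single cut-off point on the risk score. A minimal sketch of how such figures are computed from follow-up data (the scores, outcomes and cut-off are hypothetical, not FINRISK values):

    import numpy as np

    # Sensitivity and specificity of a risk-score cut-off; all values
    # below are hypothetical placeholders, not FINRISK data.
    scores = np.array([3, 7, 11, 14, 5, 8, 9, 16, 2, 10])
    diabetes = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])  # 1 = developed diabetes

    cutoff = 9
    high_risk = scores >= cutoff

    sensitivity = (high_risk & (diabetes == 1)).sum() / (diabetes == 1).sum()
    specificity = (~high_risk & (diabetes == 0)).sum() / (diabetes == 0).sum()

    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")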
Abstract:
Thrombin is a multifunctional protease that has a central role in the development and progression of coronary atherosclerotic lesions and is a possible mediator of myocardial ischemia-reperfusion injury. Its generation and procoagulant activity are greatly upregulated during cardiopulmonary bypass (CPB). On the other hand, activated protein C, a physiologic anticoagulant that is activated by thrombomodulin-bound thrombin, has been beneficial in various models of ischemia-reperfusion. Therefore, our aim in this study was to test whether thrombin generation or protein C activation during coronary artery bypass grafting (CABG) is associated with postoperative myocardial damage or hemodynamic changes. To further investigate the regulation of thrombin during CABG, we tested whether preoperative thrombophilic factors are associated with increased CPB-related generation of thrombin or its procoagulant activity. We also measured the anticoagulant effects of heparin during CPB with a novel coagulation test, the prothrombinase-induced clotting time (PiCT), and compared the performance of this test with the present standard of laboratory-based anticoagulation monitoring. One hundred patients undergoing elective on-pump CABG were studied prospectively. A progressive increase in markers of thrombin generation (F1+2), fibrinolysis (D-dimer), and fibrin formation (soluble fibrin monomer complexes) was observed during CPB; this increase was further distinctly propagated by reperfusion after myocardial ischemia and continued to peak after the neutralization of heparin with protamine. Thrombin generation during reperfusion after CABG was associated with postoperative myocardial damage and increased pulmonary vascular resistance. Activated protein C levels increased only slightly during CPB before the release of the aortic clamp, but reperfusion and, more markedly, heparin neutralization caused a massive increase in activated protein C levels. Protein C activation was clearly delayed in relation to both thrombin generation and fibrin formation. Even though activated protein C was dynamically associated with postoperative hemodynamic performance, it was not associated with postoperative myocardial damage. Preoperative thrombophilic variables were not associated with perioperative thrombin generation or its procoagulant activity; therefore, our results do not favor routine thrombophilia screening before CABG. There was poor agreement between PiCT and other measurements of heparin effects in the setting of CPB. However, lower heparin levels during CPB were associated with inferior thrombin control, and high heparin levels during CPB were associated with fewer perioperative transfusions of blood products. Overall, our results suggest that hypercoagulation after CABG, especially during reperfusion, might be clinically important.
Abstract:
This thesis comprises four complementary parts that introduce new approaches to the brittle reaction layers and mechanical compatibility of metalloceramic joints created when fusing dental ceramics to titanium. Several methods were employed, including atomic layer deposition (ALD), sessile drop contact angle measurements, scanning acoustic microscopy (SAM), three-point bending (TPB; DIN 13 927 / ISO 9693), cross-section microscopy, scanning electron microscopy (SEM), and energy dispersive X-ray spectroscopy (EDS). The first part investigates the effects of TiO2 layer structure and thickness on the joint strength of the titanium-metalloceramic system. Samples with all tested TiO2 thicknesses displayed good ceramic adhesion to Ti and uniform TPB results. The fracture mode was independent of oxide layer thickness and structure. Cracking occurred deeper inside the titanium, in the oxygen-rich Ti[O]x solid solution surface layer. During dental ceramics firing, TiO2 layers dissociate and the joints become brittle, with increased dissolution of oxygen into metallic Ti and a consequent reduction in the plasticity of the metal. To accomplish an ideal metalloceramic joint, this needs to be resolved. The second part introduces the photoinduced superhydrophilicity of TiO2. Test samples with ALD-deposited anatase TiO2 films were produced. The samples were irradiated with UV light to induce superhydrophilicity of the surfaces through a cascade leading to an increased amount of surface hydroxyl groups. Superhydrophilicity (contact angle ~0°) was achieved within 2 minutes of UV irradiation. Partial recovery of the contact angle was observed during the first 10 minutes after UV exposure; total recovery was not observed within 24 h of storage. Photoinduced superhydrophilicity can be used to enhance the wettability of titanium surfaces, an important factor in dental ceramics veneering processes. The third part addresses interlayers designed to restrain oxygen dissolution into Ti during dental ceramics fusing. The main requirements for an ideal interlayer material are proposed. Based on these criteria and a systematic exclusion of possible interlayer materials, silver (Ag) interlayers were chosen. TPB results were significantly better when 5 μm Ag interlayers were used than with Al2O3-blasted samples alone. In the samples with these Ag interlayers, multiple cracks occurred inside the dental ceramics and none inside the Ti structure. Ag interlayers of 5 μm on Al2O3-blasted samples can thus be used to retard the formation of the brittle oxygen-rich Ti[O]x layer and thereby enhance metalloceramic joint integrity. The most brittle component in metalloceramic joints with 5 μm Ag interlayers was the bulk dental ceramics instead of Ti[O]x. The fourth part investigates the importance of mechanical interlocking. According to the results, the significance of mechanical interlocking achieved by conventional surface treatments can be questioned as long as the formation of the brittle layers (mainly oxygen-rich Ti[O]x) cannot be sufficiently controlled. In summary, in contrast to earlier conceptions centred on thick titanium oxide layers, this thesis demonstrates that the diffusion of oxygen from the sintering atmosphere and from SiO2 into Ti structures during dental ceramics firing, and the subsequent formation of a brittle Ti[O]x solid solution, are the most important factors predisposing joints between Ti and SiO2-based dental ceramics to low strength. This, among other predisposing factors such as residual stresses created by the mismatch in coefficients of thermal expansion between dental ceramics and Ti frameworks, can be avoided with Ag interlayers.
Abstract:
A straightforward computation of the list of words (the 'tail words' of the list) that are distributionally most similar to a given word (the 'head word' of the list) leads to the question: how semantically similar to the head word are the tail words; that is, how similar are their meanings to its meaning? And can we do better? The experiment was done on the nearly 18,000 most frequent nouns in a Finnish newsgroup corpus. These nouns are considered distributionally similar to the extent that they occur in the same direct dependency relations with the same nouns, adjectives and verbs. The similarity of their computational representations is quantified with the information radius. The semantic classification of head-tail pairs is intuitive: some tail words seem semantically similar to the head word, some do not. Each such pair is also associated with a number of further distributional variables. Individually, these variables overlap heavily across the semantic classes, but the trained classification-tree models have some success in using combinations of them to predict the semantic class. The training data consist of a random sample of 400 head-tail pairs with the tail word ranked among the 20 distributionally most similar to the head word, excluding names. The models are then tested on a random sample of another 100 such pairs. The best success rates range from 70% to 92% of the test pairs, where a success means that the model predicted my intuitive semantic class of the pair. This seems somewhat promising when distributional similarity is used to capture semantically similar words. The analysis also includes a general discussion of several different similarity formulas, arranged in three groups: those that apply to sets with graded membership, those that apply to the members of a vector space, and those that apply to probability mass functions.
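The information radius used here to quantify distributional similarity can be written, in one common formulation, as the mean Kullback-Leibler divergence of the two distributions to their midpoint (equivalently, the Jensen-Shannon divergence). A sketch with hypothetical context distributions for two nouns:

    import numpy as np

    def kl(p, q):
        """Kullback-Leibler divergence D(p || q), skipping zero-probability terms."""
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    def information_radius(p, q):
        """Mean KL divergence of p and q to their midpoint (one common
        formulation of the information radius)."""
        m = 0.5 * (p + q)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Hypothetical distributions of a head and a tail noun over shared
    # dependency contexts (illustrative values only).
    head = np.array([0.40, 0.30, 0.20, 0.10, 0.00])
    tail = np.array([0.35, 0.25, 0.15, 0.15, 0.10])
    print(f"information radius = {information_radius(head, tail):.4f} bits")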
Abstract:
The aim of the study is to examine Luther's theology of music from the standpoint of pleasure. The theological assessment of musical pleasure is related to two further questions: the role of emotions in Christianity and the apprehension of beauty. The medieval discussion of these themes is portrayed in the background chapter. Its significant traits were: the suspicion felt towards sensuous gratification in music; music as a mathematical discipline; the medieval theory of emotions informed by Stoic apatheia and Platonic-Aristotelian metriopatheia; the notion of beauty as an attribute of God; medieval aesthetics as the aesthetic of proportion and the aesthetic of light; and the emergence of the Aristotelian view of science based on experience rather than speculation. The treatment of Luther's theology of music begins with the notion of gift. Luther says that music is an excellent (or even the best) gift of God. This has sometimes been understood as mere music-lover's enthusiasm. Luther is, however, not likely to use the word gift loosely. His theology can be depicted as a theology of gift: the Triune God is categorically giving. The notion of gift also includes reciprocity. When we receive the gifts of God, this evokes praise in us, and praising God is predominantly a musical phenomenon. The particular benefit of music in Luther's thought is that it can move human emotions. This emphasis is connected to the overall affectivity of Luther's theology. In contrast to the medieval discussion, Luther ascribes to the saints not just emotions but particularly warm and tender affections. The power of music is related to the auditory and vocal character of the Word. Faith comes through hearing the Word, which is at once musical and affective perception. Faith is not a mere opinion but the affective trust of the heart. Music can touch the human heart and persuade with its sweetness, like the good news of the Gospel. Music allows us to perceive Luther's theology as a theology of joy and pleasure. Joy is for Luther a gift of the Holy Spirit that fills the heart and bursts out in voice and gestures. Pleasure appears to be a central aspect of Luther's theology. The problem of the Bondage of the Will is precisely the human inability to feel pleasure in God's will. Being pleased with the visible and tangible creation is not something a Christian should avoid. On the contrary, if one is not pleased with the world that God has created, it is a sign of unbelief and ingratitude. The pleasure of music is aesthetic perception, which in turn necessitates an investigation of Luther's aesthetics. Aesthetic evaluation is not just a part of Luther's thought; eventually his theology as a whole could be portrayed in aesthetic terms. Luther's extremely positive appreciation of music illustrates his theology as an affective acknowledgement of the goodness of the Creation, and faith as aesthetic contentment.
Abstract:
Trafficking in human beings has become one of the most talked-about criminal concerns of the 21st century. But this is not all that it has become. Trafficking has also been declared one of the most pressing human rights issues of our time; in this sense, it has become part of the expansion of the human rights phenomenon. Although it is easy to see that the crime of trafficking violates several of the human rights of its victims, it is still, in its essence, a fairly conventional, although particularly heinous and often transnational, crime, consisting of acts between private actors and therefore lacking the vertical effect traditionally associated with human rights violations. This thesis asks, then, why and how the anti-trafficking campaign has been translated into human rights language. And even more fundamentally: in light of the critical, theoretical studies surrounding the expansion of the human rights phenomenon, especially that of Costas Douzinas, who has declared that we have come to the end of human rights as a consequence of the expansion and bureaucratization of the phenomenon, can human rights actually bring salvation to the victims of trafficking? The thesis demonstrates that the translation of the anti-trafficking campaign into human rights language has been a complicated process involving various actors, including scholars, feminist NGOs, local activists and global human rights NGOs. It has also been driven by a complicated web of interests: the most prevalent one, the sincere will to help the victims, has become entangled with other aims, such as political, economic, and structural goals. As a consequence of its fragmented background, the human rights approach to trafficking still seeks its final form and consists of several different claims. After an assessment of these claims from a legal perspective, this thesis concludes that the approach is most relevant to the mistreatment of victims of trafficking at the hands of state authorities. It seems to be quite common that authorities have trouble identifying the victims of trafficking, which means that the rights granted to them in international and national documents are not realized in practice; instead, victims of trafficking are systematically deported as illegal immigrants. It is argued that in order to understand the measures of the authorities, and to assess the usefulness of human rights, it is necessary to adopt a Foucauldian perspective and to view these measures as biopolitical defence mechanisms. From a biopolitical perspective, the victims of trafficking can be seen as a threat to the population, a threat that must be eliminated either by assimilating them into the main population with the help of disciplinary techniques, or by excluding them completely from society. This biopolitical aim is accomplished through an impenetrable net of seemingly insignificant practices and discourses of which not even the participants are aware. As a result of these practices and discourses, trafficking victims, only very few of whom fit the myth of the perfect victim produced by biopolitical discourses, become invisible and therefore subject to deportation as (risky) illegal immigrants, turning them into bare life in the Agambenian sense, represented by the homo sacer, who cannot be sacrificed yet does not enjoy the protection of society and its laws.
It is argued, following Jacques Rancière and Slavoj Žižek, that human rights can, through their universality and formal equality, provide bare life with the tools to formulate political claims and thus, through the politicization produced by its exclusion, to return to the sphere of power and politics. Even though human rights have inevitably become entangled with biopolitical practices, they are still perhaps the most efficient way to challenge biopower. Human rights have not, therefore, become useless for the victims of trafficking, but they must be conceived as a universal tool for formulating political claims and challenging power. In the case of trafficking this means that human rights must be utilized to constantly renegotiate the borders of the problematic concept of the victim of trafficking created by international instruments, policies and discourses, including those sincerely aimed at helping the victims.
Abstract:
Road transport and infrastructure are of fundamental importance to the developing world. The poor quality and inadequate coverage of roads, the lack of maintenance operations, and outdated road maps continue to hinder economic and social development in developing countries. This thesis focuses on studying the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya. The study is part of the TAITA project of the Department of Geography, University of Helsinki. The road infrastructure of the study area is studied with remote sensing and GIS-based methodology. As the principal dataset, true-colour airborne digital camera data from 2004 were used to generate an aerial image mosaic of the study area. Auxiliary data include SPOT satellite imagery from 2003, field spectrometry data of road surfaces, and relevant literature. Road infrastructure characteristics are interpreted from three test sites using pixel-based supervised classification, object-oriented supervised classification, and visual interpretation. The road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the most appropriate of the assessed data and techniques. The spectral characteristics and reflectance of various road surfaces are considered using the acquired field spectra and relevant literature, and the results are compared with the road mapping methods tested. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate regardless of the high spatial resolution of the image mosaic used in this thesis. Of all the methods tested, visual interpretation is the most straightforward, accurate and valid technique for road mapping. Certain road surfaces have spectral characteristics and reflectance values similar to those of other land cover and land use, which greatly affects digital analysis techniques in particular. Road mapping is made even more complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads in relation to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limits. In particular, unclassified rural roads require updated road mapping schemes to improve road transport possibilities and to assist in the development of the developing world.
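As a minimal illustration of the pixel-based supervised classification step, a sketch using scikit-learn; the band values, class labels and classifier choice are hypothetical placeholders, not the data or the exact method of the thesis:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Pixel-based supervised classification sketch. Training pixels:
    # rows are pixels, columns are spectral bands (e.g. R, G, B).
    # All values and labels are hypothetical.
    X_train = np.array([[110, 105, 100],   # road
                        [120, 115, 108],   # road
                        [ 40,  80,  35],   # vegetation
                        [ 35,  75,  30],   # vegetation
                        [ 90,  70,  50],   # bare soil
                        [ 95,  75,  55]])  # bare soil
    y_train = np.array(["road", "road", "veg", "veg", "soil", "soil"])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Classify an image of shape (rows, cols, bands): flatten to pixels,
    # predict a class per pixel, and reshape back to the image grid.
    image = np.random.randint(0, 256, size=(4, 4, 3))
    labels = clf.predict(image.reshape(-1, 3)).reshape(4, 4)
    print(labels)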
Abstract:
Microbes in natural and artificial environments, as well as in the human body, are a key part of the functional properties of these complex systems. The presence or absence of certain microbial taxa is a correlate of functional status, such as the risk of disease or the course of the metabolic processes of a microbial community. As microbes are highly diverse and mostly not cultivable, molecular markers such as gene sequences are a potential basis for the detection and identification of key types. The goal of this thesis was to study molecular methods for the identification of microbial DNA in order to develop a tool for the analysis of environmental and clinical DNA samples. Particular emphasis was placed on the specificity of detection, which is a major challenge when analyzing complex microbial communities. The approach taken in this study was the application and optimization of enzymatic ligation of DNA probes coupled with microarray read-out for high-throughput microbial profiling. The results show that fungal phylotypes and human papillomavirus genotypes could be accurately identified from pools of PCR amplicons generated from purified sample DNA. Approximately 1 ng/μl of sample DNA was needed for representative PCR amplification, as measured by comparisons between clone sequencing and microarray. A minimum of 0.25 amol/μl of PCR amplicons was detectable amongst 5 ng/μl of background DNA, suggesting that the detection limit of the test, comprising a ligation reaction followed by microarray read-out, was approximately 0.04%. Detection from sample DNA directly was shown to be feasible with probes forming a circular molecule upon ligation, followed by PCR amplification of the probe. In this approach, the minimum detectable relative amount of target genome was found to be 1% of all genomes in the sample, as estimated from 454 deep sequencing results. The signal-to-noise ratio of contact-printed microarrays could be improved by using an internal microarray hybridization control oligonucleotide probe together with a computational algorithm. The algorithm was based on the identification of a bias in the microarray data and the correction of that bias, as shown with simulated and real data. The results further suggest that semiquantitative detection is possible with ligation detection, allowing estimation of target abundance in a sample. In practice, however, comprehensive sequence information on full-length rRNA genes is needed to support probe design with complex samples. This study shows that the DNA microarray has the potential to serve as an accurate microbial diagnostic platform that takes advantage of increasing sequence data and replaces traditional, less efficient methods that still dominate routine testing in laboratories. The data suggest that a ligation reaction based microarray assay can be optimized to a degree that allows a good signal-to-noise ratio and semiquantitative detection.
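The thesis's bias-correction algorithm is not reproduced here; as a simplified, hypothetical illustration of the general idea of using an internal hybridization control probe to remove array-level bias, each array's signals can be rescaled so that its control probe matches the mean control signal across arrays:

    import numpy as np

    # Simplified, hypothetical control-probe normalization; not the
    # algorithm of the thesis. Each array's probe signals are scaled so
    # that its control probe matches the mean control signal.
    signals = np.array([[1200.0, 340.0, 560.0],   # array 1 probe intensities
                        [1800.0, 500.0, 880.0]])  # array 2 probe intensities
    control = np.array([400.0, 620.0])            # control probe per array

    scale = control.mean() / control              # per-array correction factor
    corrected = signals * scale[:, None]
    print(corrected)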