13 results for In Search of Lost Time
in Helda - Digital Repository of University of Helsinki
Abstract:
Segmentation is a data mining technique yielding simplified representations of sequences of ordered points. A sequence is divided into a number of homogeneous segments, and all points within a segment are described by a single value. The focus in this thesis is on piecewise-constant segments, where the most likely description for each segment and the most likely segmentation into a given number of segments can be computed efficiently. Representing sequences as segmentations is useful in, e.g., storage and indexing tasks in sequence databases, and segmentation can be used as a tool in learning about the structure of a given sequence. The discussion in this thesis begins with basic questions related to segmentation analysis, such as choosing the number of segments and evaluating the obtained segmentations. Standard model selection techniques are shown to perform well for the sequence segmentation task. Segmentation evaluation is proposed with respect to a known segmentation structure. Applying segmentation to certain features of a sequence is shown to yield segmentations that are significantly close to the known underlying structure. Two extensions to the basic segmentation framework are introduced: unimodal segmentation and basis segmentation. The former is concerned with segmentations where the segment descriptions first increase and then decrease, and the latter with the interplay between different dimensions and segments in the sequence. These problems are formally defined, and algorithms for solving them are provided and analyzed. Practical applications of segmentation techniques include time series and data stream analysis, text analysis, and biological sequence analysis. In this thesis, segmentation applications are demonstrated in analyzing genomic sequences.
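To make the piecewise-constant setting concrete, here is a minimal sketch (not taken from the thesis; the sum-of-squared-errors cost, the segment means as descriptions, and the example data are assumptions) of the classical dynamic program that finds an optimal segmentation into k segments:

# Optimal piecewise-constant segmentation by dynamic programming (illustrative sketch).
# Each segment is described by its mean; the cost is the squared error around that mean.
import numpy as np

def segment(x, k):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # prefix sums give O(1) segment-cost queries
    s = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def cost(i, j):
        # squared error of x[i:j] around its mean
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - (j - i) * m * m

    # dp[p, j] = best cost of covering x[:j] with p segments
    dp = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1, i] + cost(i, j)
                if c < dp[p, j]:
                    dp[p, j], cut[p, j] = c, i

    # recover the k+1 segment boundaries by backtracking
    bounds, j = [n], n
    for p in range(k, 0, -1):
        j = cut[p, j]
        bounds.append(j)
    return list(reversed(bounds))

print(segment([1, 1, 1, 5, 5, 5, 2, 2], k=3))  # -> [0, 3, 6, 8]

Under these assumptions the dynamic program runs in O(n^2 k) time, which is the usual exact baseline that faster approximate segmentation algorithms are compared against.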
Abstract:
Buffer zones are vegetated strip-edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips are prominent features and play a leading ecological role in many areas. This thesis focuses on the plant species diversity of the buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. This study was conducted in a watershed area of a farmland landscape of southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune, 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 buffer strips were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness. The relationship between richness and evenness is unpredictable in a small-scale, human-shaped ecosystem. Ordination and correlation analyses show that richness and evenness may result from different ecological processes, and thus should be considered separately. Species richness correlated negatively with phosphorus content, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration variables (width, length, and area) were investigated to determine which is most effective for predicting species richness. More species per unit area increment could be obtained from widening the buffer strip than from lengthening it. The width of the strips is an effective determinant of plant species richness. The increase in species diversity with an increase in the width of buffer strips may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers, and has application value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on a local and regional scale. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In regions with low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity, and showed high abundance in the highly connected region. We suggest that these species may play a role in switching the force that shapes community structure from local resources to regional connectivity. On the landscape context level, the different responses of local species richness and evenness to landscape context were investigated.
Seven landscape structural parameters served to indicate landscape context on five scales. On all scales but the smallest, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness. H' showed the highest correlation coefficients with species richness on the second largest scale. The edge density of arable fields was the only predictor that correlated with species evenness on all scales, and it showed the highest predictive power on the second smallest scale. The different predictive power of the factors on different scales showed a scale-dependent relationship between the landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. The local richness of species depends on a regional process on large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the habitats of field edges near the buffer strips. My results suggest some guidelines for species diversity conservation in the agricultural ecosystem. To maintain a high level of species diversity in the strips, a high level of phosphorus in strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variations in plant species diversity. The studies also suggest that the effects of determinants on species diversity have a complex relationship with scale.
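For reference, the Shannon-Wiener index H' used above is the entropy of the land-cover proportions. A minimal sketch, with invented cover classes and fractions rather than data from the study:

# Shannon-Wiener diversity of land-cover classes (illustrative only).
import math

def shannon_wiener(proportions):
    # H' = -sum(p_i * ln(p_i)) over the cover-class proportions p_i
    return -sum(p * math.log(p) for p in proportions if p > 0)

# hypothetical land-cover fractions for one landscape window around a buffer strip
covers = {"arable": 0.55, "forest": 0.25, "grassland": 0.15, "water": 0.05}
print(round(shannon_wiener(covers.values()), 2))  # -> 1.11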
Abstract:
In this paper I will offer a novel understanding of a priori knowledge. My claim is that the sharp distinction that is usually made between a priori and a posteriori knowledge is groundless. It will be argued that a plausible understanding of a priori and a posteriori knowledge has to acknowledge that they are in a constant bootstrapping relationship. It is also crucial that we distinguish between a priori propositions that hold in the actual world and merely possible, non-actual a priori propositions, as we will see when considering cases like Euclidean geometry. Furthermore, contrary to what Kripke seems to suggest, a priori knowledge is intimately connected with metaphysical modality, indeed, grounded in it. The task of a priori reasoning, according to this account, is to delimit the space of metaphysically possible worlds in order for us to be able to determine what is actual.
Abstract:
The starting point of the European individualistic copyright ideology is that an individual author creates a work and controls its use. However, this paper argues that it is (and has always been) impossible to control the use of works after their publication. This has also been acknowledged by the legislator, who has introduced collective licensing agreements because of this impossibility. Since it is impossible to rigorously control the use of works, this paper, "Rough Justice or Zero Tolerance - Reassessing the Nature of Copyright in Light of Collective Licensing", examines what the reality of copyright is actually about. Finding alternative (and hopefully more "true") ways to understand copyright helps us create alternative solutions to the problems we face, e.g. regarding the use of content in the online environment. The paper makes the claim that copyright is actually about defining negotiation points for different stakeholders, and that nothing in the copyright reality prevents us from defining, e.g., a new negotiation point where representatives of consumers would meet representatives of right holders in order to agree on the terms of use for certain content types in the online environment.
Abstract:
This study examines strategies used to translate various thematic and character-delineating allusions in two of Reginald Hill's detective novels, The Wood Beyond and On Beulah Height, and their Swedish translations Det mörka arvet and Dalen som dränktes. In this study, thematic allusions and allusions used in character delineation are regarded as intertextual networks. Intertextual networks comprise all the texts that are in one way or another embedded into a text, all the texts referred to in it, and even the texts somehow rejected from a text's own canon. Studying allusions as intertextual networks makes it warranted to pay minute attention to even the smallest of details. Seen together, these little details form extensive networks of meaning that readers use to interpret the text. An allusion can be defined as a reference, often covert or indirect, to another text in a way that brings into the text some of the associations of that other text. A text is here understood broadly; hence sources of allusions include all cultural texts from literature and history to cinema and television serials. Allusions are culture-bound, and each culture tends to allude to its own cultural products. The set of transcultural allusions is therefore fairly small. Translation strategies are translatorial ways of solving translation problems. Being culture-bound, allusions are potential translation problems. In order to transmit to target text readers the thoughts that the allusions evoke in source text readers, translators may add guidance to the translated text. Often guidance is not added, which may result in changes in the handling of themes or character delineation that are clear in the source text but confusing or incomprehensible in the target text. However, norms in the target culture may not always allow translators the possibility to make the text comprehensible. My analyses of translation strategies show that in the two translated novels studied, minimum change is a very frequently used strategy. This results in themes and character delineation losing some of the effect they have in the source texts. Perhaps surprisingly, the result is very much the same even where it is possible to discern that the two translators have had differing translation principles. Keywords: allusions, intertextuality, literary translation, translation strategies, norms, crime fiction, Hill, Reginald
Abstract:
The present study focuses on the translational strategies of Cocksfoot mottle virus (CfMV, genus Sobemovirus), which infects monocotyledonous plants. CfMV RNA lacks the 5' cap and the 3' poly(A) tail that ensure efficient translation of cellular messenger RNAs (mRNAs). Instead, CfMV RNA is covalently linked to a viral protein, VPg (viral protein, genome-linked). This indicates that the viral untranslated regions (UTRs) must functionally compensate for the lack of the cap and poly(A) tail. We examined the efficacy of translation initiation in CfMV by comparing it to well-studied viral translational enhancers. Although insertion of the CfMV 5' UTR (CfMVe) into plant expression vectors improved gene expression in barley more than the other translational enhancers examined, studies at the RNA level showed that CfMVe alone, or in combination with the CfMV 3' UTR, did not provide the RNAs a translational advantage. Mutation analysis revealed that translation initiation from CfMVe involved scanning. Interestingly, CfMVe also promoted translation initiation from an intercistronic position of dicistronic mRNAs in vitro. Furthermore, internal initiation occurred with similar efficacy in translation lysates that had reduced concentrations of eukaryotic initiation factor (eIF) 4E, suggesting that initiation was independent of eIF4E. In contrast, reduced translation in the eIF4G-depleted lysates indicated that translation from internally positioned CfMVe was eIF4G-dependent. After successful translation initiation, leaky scanning brings the ribosomes to the second open reading frame (ORF). The CfMV polyprotein is produced from this and the following overlapping ORF via programmed -1 ribosomal frameshift (-1 PRF). Two signals in the mRNA at the beginning of the overlap program approximately every fifth ribosome to slip one nucleotide backwards and continue translation in the new -1 frame. This leads to the production of a C-terminally extended polyprotein, which encodes the viral RNA-dependent RNA polymerase (RdRp). The -1 PRF event in CfMV was very efficient, even though it was programmed by a simple stem-loop structure instead of a pseudoknot, which is usually required for high -1 PRF frequencies. Interestingly, regions surrounding the -1 PRF signals improved the -1 PRF frequencies. Viral protein P27 inhibited the -1 PRF event in vivo, putatively by binding to the -1 PRF site. This suggested that P27 could regulate the occurrence of -1 PRF. Initiation of viral replication requires that viral proteins are released from the polyprotein. This is catalyzed by the viral serine protease, which is also encoded within the polyprotein. N-terminal amino acid sequencing of CfMV VPg revealed that the junction of the protease and VPg was cleaved between glutamate (E) and asparagine (N) residues. This suggested that the processing sites used in CfMV differ from the glutamate/serine (E/S) or glutamate/threonine (E/T) sites utilized in other sobemoviruses. However, further analysis revealed that the E/S and E/T sites may be used to cleave out some of the CfMV proteins.
Abstract:
We report on a recent charged Higgs boson search in top quark decays in 2.2 fb^-1 of CDF data. This is the first attempt to search for a charged Higgs boson using the fully reconstructed mass, assuming H+ -> c s-bar decay, in the small tan(beta) region. No evidence of a charged Higgs boson is observed in the CDF data; hence, 95% upper limits are placed on B(t -> H+ b)
Abstract:
Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need for investigating in vitro ADME (absorption, distribution, metabolism and excretion) properties and recognising unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, which is a widely used in vitro model of small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Secondly, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments. The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it is even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited some good and some bad features in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient. Thus, a time-consuming and largely manual approach is still required for metabolite profiling from complex biological matrices.
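As a hedged aside on the intrinsic clearance (Clint) values mentioned above: one standard way to obtain Clint in vitro is the substrate-depletion (half-life) approach. The sketch below only illustrates that textbook calculation; it is not necessarily the method used in this work, and the numbers are invented:

# Intrinsic clearance from substrate depletion (illustrative sketch, invented values).
import math

def clint_ul_per_min_per_mg(t_half_min, volume_ul, protein_mg):
    # CLint = (ln 2 / t_half) * (incubation volume / amount of enzyme protein)
    return (math.log(2) / t_half_min) * (volume_ul / protein_mg)

# hypothetical incubation: 15 min half-life, 500 uL volume, 0.5 mg protein
print(round(clint_ul_per_min_per_mg(15.0, 500.0, 0.5), 1))  # -> 46.2 uL/min/mg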
Abstract:
Although the principle of equal access to medically justified treatment has been promoted by official health policies in many Western health care systems, practices do not completely meet policy targets. Waiting times for elective surgery vary between patient groups and regions, and growing problems in the availability of services threaten equal access to treatment. Waiting times have come to the attention of decision-makers, and several policy initiatives have been introduced to ensure the availability of care within a reasonable time. In Finland, for example, the treatment guarantee came into force in 2005. However, no consensus exists on the optimal waiting time for different patient groups. The purpose of this multi-centre randomized controlled trial was to analyse health-related quality of life, pain and physical function in total hip or knee replacement patients during the waiting time, and to evaluate whether the waiting time is associated with patients' health outcomes at admission. This study also assessed whether the length of the waiting time is associated with social and health services utilization in patients awaiting total hip or knee replacement. In addition, patients' health-related quality of life was compared with that of the general population. Consecutive patients with a need for a primary total hip or knee replacement due to osteoarthritis were placed on the waiting list between August 2002 and November 2003. Patients were randomly assigned to a short waiting time (maximum 3 months) or a non-fixed waiting time (waiting time not fixed in advance; instead, the patient followed the hospital's routine practice). Patients' health-related quality of life was measured upon placement on the waiting list and again at hospital admission using the generic 15D instrument. Pain and physical function were evaluated using the self-report Harris Hip Score for hip patients and a scale modified from the Knee Society Clinical Rating System for knee patients. Utilization measures were the use of home health care, rehabilitation and social services, physician visits and inpatient care. Health and social services use was low in both waiting time groups. The most common services used while waiting were rehabilitation services and informal care, including unpaid care provided by relatives, neighbours and volunteers. Although patients suffered from clear restrictions in usual activities and physical functioning, they seemed primarily to lean on informal care and personal networks instead of professional care. While longer waiting time did not result in poorer health-related quality of life at admission, and the use of services during the waiting time was similar to that at the time of placement on the list, waiting is likely to entail higher costs for people who wait longer, simply because they use services for a longer period. In economic terms, this would represent a negative impact of waiting. Only a few reports have been published on the health-related quality of life of patients awaiting total hip or knee replacement. These findings demonstrate that, in addition to physical dimensions of health, patients suffered from restrictions in psychological well-being such as depression, distress and reduced vitality. This raises the question of how to support patients who suffer from psychological distress during the waiting time, and how to develop strategies to improve patients' initiatives to reduce symptoms and the burden of waiting.
Key words: waiting time, total hip replacement, total knee replacement, health-related quality of life, randomized controlled trial, outcome assessment, social service, utilization of health services
Abstract:
The purpose of this study is to analyse the development and understanding of the idea of consensus in bilateral dialogues among Anglicans, Lutherans and Roman Catholics. The source material consists of representative dialogue documents from the international, regional and national dialogues from the 1960s until 2006. In general, the dialogue documents argue for agreement/consensus based on commonality or compatibility. Each of the three dialogue processes has specific characteristics and formulates its argument in a unique way. The Lutheran-Roman Catholic dialogue has a particular interest in hermeneutical questions. In the early phases, the documents endeavoured to describe the interpretative principles that would allow the churches to proclaim the Gospel together, and to identify the foundation on which the agreement in the church is based. This investigation ended up proposing a notion of "basic consensus", which later developed into a form of consensus that seeks to embrace, not dismiss, differences (so-called "differentiated consensus"). The Lutheran-Roman Catholic agreement is based on a perspectival understanding of doctrine. The Anglican-Roman Catholic dialogue emphasises the correctness of interpretations. The documents consciously look towards a "common future", not the separated past. The dialogue's primary interpretative concept is koinonia. The texts develop a hermeneutics of authoritative teaching that has been described as the "rule of communion". The Anglican-Lutheran dialogue is characterised by an instrumental understanding of doctrine. Doctrinal agreement is facilitated by the ideas of coherence, continuity and substantial emphasis in doctrine. The Anglican-Lutheran dialogue proposes a form of "sufficient consensus" that considers a wide set of doctrinal statements and liturgical practices to determine whether an agreement has been reached that, although not complete, is sufficient for concrete steps towards unity. Chapter V discusses the current challenges of consensus as an ecumenically viable concept. In this part, I argue that the acceptability of consensus as an ecumenical goal is based not only on the understanding of the church but, more importantly, on the understanding of the nature and function of doctrine. The understanding of doctrine has undergone significant changes during the time of the ecumenical dialogues. The major shift has been from a modern paradigm towards a postmodern paradigm. I conclude with proposals towards a way to construct a form of consensus that would survive philosophical criticism, be theologically valid, and be ecumenically acceptable.
Abstract:
Population dynamics are generally viewed as the result of intrinsic (purely density-dependent) and extrinsic (environmental) processes. Both components, and potential interactions between the two, have to be modelled in order to understand and predict the dynamics of natural populations; a topic that is of great importance in population management and conservation. This thesis focuses on modelling environmental effects in population dynamics, and on how the effects of potentially relevant environmental variables can be statistically identified and quantified from time series data. Chapter I presents some useful models of multiplicative environmental effects for unstructured density-dependent populations. The presented models can be written as standard multiple regression models that are easy to fit to data. Chapters II-IV constitute empirical studies that statistically model environmental effects on the population dynamics of several migratory bird species with different life history characteristics and migration strategies. In Chapter II, spruce cone crops are found to have a strong positive effect on the population growth of the great spotted woodpecker (Dendrocopos major), while cone crops of pine, another important food resource for the species, do not effectively explain population growth. The study compares rate- and ratio-dependent effects of cone availability, using state-space models that distinguish between process and observation error in the time series data. Chapter III shows how drought, in combination with settling behaviour during migration, produces asymmetric, spatially synchronous patterns of population dynamics in North American ducks (genus Anas). Chapter IV investigates the dynamics of a Finnish population of skylark (Alauda arvensis), and points out effects of rainfall and habitat quality on population growth. Because the skylark time series and some of the environmental variables included show strong positive autocorrelation, the statistical significances are calculated using a Monte Carlo method in which random autocorrelated time series are generated. Chapter V is a simulation-based study showing that ignoring observation error in analyses of population time series data can bias the estimated effects and measures of uncertainty if the environmental variables are autocorrelated. It is concluded that the use of state-space models is an effective way to reach more accurate results. In summary, there are several biological assumptions and methodological issues that can affect the inferential outcome when estimating environmental effects from time series data, and that therefore need special attention. The functional form of the environmental effects and potential interactions between the environment and population density are important to deal with. Other issues that should be considered are assumptions about density-dependent regulation, modelling potential observation error, and, when needed, accounting for spatial and/or temporal autocorrelation.
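To illustrate the multiple-regression formulation mentioned for Chapter I, here is a minimal sketch under assumed conditions (a Gompertz-type model and simulated data, not the thesis's models or code): a density-dependent growth model with one multiplicative environmental effect becomes, on the log scale, log N_{t+1} = a + b*log N_t + c*E_t + noise, which ordinary least squares can fit directly:

# Density-dependent growth with a multiplicative environmental effect (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
T = 50
E = rng.normal(size=T)                 # environmental variable, e.g. a cone-crop index
a, b, c = 1.5, 0.7, 0.3                # "true" parameters used only to simulate data
logN = np.empty(T + 1)
logN[0] = np.log(100.0)
for t in range(T):                     # Gompertz-type dynamics on the log scale
    logN[t + 1] = a + b * logN[t] + c * E[t] + rng.normal(scale=0.1)

# the same model as an ordinary multiple regression: log N_{t+1} on (1, log N_t, E_t)
X = np.column_stack([np.ones(T), logN[:-1], E])
y = logN[1:]
est, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(est, 2))                # recovered estimates of (a, b, c)

A state-space version of the same model would add a separate observation-error term and be fitted with, e.g., a Kalman filter rather than ordinary least squares.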
Abstract:
As disparities in wealth levels between and within countries become greater, many poor people migrate in search of better earning opportunities. Some of this migration is legal but, in many cases, the difficulties involved in securing the necessary documentation mean that would-be migrants resort to illegal methods. This, in turn, makes them vulnerable to human trafficking, a phenomenon that has received growing attention from NGOs, governments and the media in recent years. Despite the attention being given to human trafficking, however, there remains a certain amount of confusion over what exactly it entails, though it is generally understood to refer to the transportation and subsequent exploitation of vulnerable people through means of force or deception. The increased attention that has been given to the issue of human trafficking over the last decade has resulted in the emergence of new discourses which attempt to explain what human trafficking entails, what the root causes of the phenomenon are, and how best to tackle the problem. While a certain degree of conceptual clarity has been attained since human trafficking rose to prominence in the 1990s, it could be argued that human trafficking remains a poorly defined concept and that there is frequently confusion concerning the difference between it and related concepts such as people smuggling, migration and prostitution. The thesis examines the ways in which human trafficking has been conceptualised or framed in a specific national context, that of Lao PDR. Attention is given to the task of locating the major frames within which the issue has been situated, as well as to considering the diagnoses and prognoses that the various approaches to trafficking suggest. The research considers which particular strands of trafficking discourse have become dominant in Lao PDR and the effect this has had on the kinds of trafficking interventions that have been undertaken in the country. The research is mainly qualitative and consists of an analysis of key texts found in the Lao trafficking discourse.
Abstract:
Maurice Merleau-Ponty (1908-1961) has been known as the philosopher of painting. His interest in the theory of perception intertwined with questions concerning the artist's perception, the experience of an artwork and the possible interpretations of the artwork. For him, aesthetics was not a sub-field of philosophy, and art was not simply a subject matter for aesthetic experience, but a form of thinking. This study proposes an opening for a dialogue between Merleau-Pontian phenomenology and contemporary art. The thesis examines his phenomenology through certain works of contemporary art and presents readings of these artworks through his phenomenology. The thesis both shows the potential of a method and engages in the critical task of finding the possible limitations of his approach. The first part lays out the methodological and conceptual points of departure of Merleau-Ponty's phenomenological approach to perception, as well as the features that determined his discussion of encountering art. Merleau-Ponty referred to the experience of perceiving art using the notion of seeing with (voir selon). He stressed a correlative reciprocity, described in Eye and Mind (1961) as the switching of the roles of the visible and the painter. The choice of artworks is motivated by certain restrictions in the phenomenological readings of visual arts. The examined works include paintings by Tiina Mielonen, a photographic work by Christian Mayer, a film by Douglas Gordon and Philippe Parreno, and an installation by Monika Sosnowska. These works resonate with, and challenge, his phenomenological approach. The chapters with case studies take up different themes that are central to Merleau-Ponty's phenomenology: space, movement, time, and touch. All of the themes are interlinked with the examined artworks. There are also topics that reappear in the thesis, such as the notion of écart and the question of encountering the other. As Merleau-Ponty argued, the sphere of art has a particular capability to address our being in the world. The thesis presents an interpretation that emphasises the notion of écart, which refers to an experience of divergence or dispossession: a sudden dissociation, surprise or rupture is needed in order for a meeting between the spectator and the artwork, or between two persons, to be possible. Further, the thesis suggests that through artworks it is possible to take into consideration the écart, the divergence, that defines our subjectivity.