169 results for Compressed text search
Abstract:
The design of effective intervention tools to improve immigrants’ labor market integration remains an important topic in contemporary Western societies. This study examines whether and how a new intervention tool, the Working Life Certificate (WLC), helps unemployed immigrants find employment and strengthen their belief in their vocational skills. The study is based on quantitative longitudinal survey data from 174 unemployed immigrants of various origins who participated in the pilot phase of WLC examinations in 2009. Surveys were administered in three waves: before the test, right after it, and three months later. Although it is often argued that unemployment among immigrants is due either to their lack of skills and cultural differences or to discrimination in recruitment, scholars within the social psychology of behavior change argue that the best way of helping people achieve their goals (e.g. finding employment) is to build up their sense of self-efficacy, to alter their outcome expectancies in a more positive direction, or to help them construct more detailed action and coping plans. This study aims to shed light on the role of these concepts in immigrants’ labor market integration. The results moderately support the theories of behavior change. Holding positive expectancies regarding the outcomes of various job search behaviors was found to predict future employment. Together with action and coping planning, it also predicted an increase in job search behavior. The intervention, the WLC, was able to affect participants’ self-efficacy, but contrary to expectations, self-efficacy was found not to be related to either job search behavior or future labor market status. Perceived discrimination did not explain problems in finding employment either, but hints of subtle or structural discrimination were found. Adoption of Finnish work culture together with a strong family culture was found to predict future employment. Hence, in this thesis I argue that awarding people diplomas should be preferred in immigrant integration training, as it strengthens people’s sense of self-efficacy. Instead of teaching new information, more attention should be directed at changing people’s outcome expectancies in a more positive direction and helping them construct detailed plans on how to achieve their goals.
Abstract:
As disparities in wealth levels between and within countries grow, many poor people migrate in search of better earning opportunities. Some of this migration is legal but, in many cases, the difficulties involved in securing the necessary documentation mean that would-be migrants resort to illegal methods. This, in turn, makes them vulnerable to human trafficking, a phenomenon that has received growing attention from NGOs, governments and the media in recent years. Despite this attention, however, there remains a certain amount of confusion over what exactly human trafficking entails, though it is generally understood to refer to the transportation and subsequent exploitation of vulnerable people through force or deception. The increased attention given to the issue over the last decade has resulted in new discourses emerging which attempt to explain what human trafficking entails, what the root causes of the phenomenon are, and how best to tackle the problem. While a certain degree of conceptual clarity has been attained since human trafficking rose to prominence in the 1990s, it could be argued that human trafficking remains a poorly defined concept and that there is frequently confusion concerning the difference between it and related concepts such as people smuggling, migration and prostitution. The thesis examines the ways in which human trafficking has been conceptualised or framed in a specific national context: that of Lao PDR. Attention is given to the task of locating the major frames within which the issue has been situated, as well as to considering the diagnoses and prognoses that the various approaches to trafficking suggest. The research considers which particular strands of trafficking discourse have become dominant in Lao PDR and the effect this has had on the kinds of trafficking interventions that have been undertaken in the country. The research is mainly qualitative and consists of an analysis of key texts found in the Lao trafficking discourse.
Abstract:
Objectives: GPS technology enables the visualisation of a map reader's location on a mobile map. Earlier research on the cognitive aspects of map reading identified that searching for map-environment points is an essential element of the process of determining one's location on a mobile map. Map-environment points refer to objects that are visualised on the map and are recognisable in the environment. However, because the GPS usually adds only one point to the map that has a relation to the environment, it does not provide a sufficient amount of information for self-location. The aim of the present thesis was to assess the effect of GPS on the cognitive processes involved in determining one's location on a map. Methods: The effect of GPS on self-location was studied in a field experiment. The subjects were shown a target on a mobile map and were asked to point in the direction of the target. In order for the map reader to be able to deduce the direction of the target, he/she has to locate himself/herself on the map. During the pointing tasks, the subjects were asked to think aloud. The data from the experiment were used to analyse the effect of the GPS on the time needed to perform the task. The subjects' verbal data were used to assess the effect of the GPS on the number of landmark concepts mentioned during a task (landmark concepts are words referring to objects that can be recognised both on the map and in the environment). Results and conclusions: The results from the experiment indicate that the GPS reduces the time needed to locate oneself on a map. The analysis of the verbal data revealed that the GPS reduces the number of landmark concepts in the protocols. The findings suggest that the GPS guides the subject's search for map-environment points and narrows the area on the map that must be searched for self-location.
Abstract:
Gene mapping is the systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data are useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary to be able to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema using graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is in genetics, the concepts and algorithms can be applied to other domains as well. We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
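To make the reliable subgraph idea concrete, the following Python sketch estimates the probability that two vertices stay connected in a random graph (each edge surviving independently with its own probability) and greedily grows a small subgraph that preserves that connection. This is an illustration only, not the thesis's algorithms; the edge attribute name 'prob', the sample counts, and the naive greedy strategy are assumptions made for the example.

    import random
    import networkx as nx  # assumed available; any graph library would do

    def connection_reliability(graph, source, target, samples=500):
        # Monte Carlo estimate of the probability that source and target
        # remain connected when each edge survives independently with
        # the probability stored in its 'prob' attribute.
        connected = 0
        for _ in range(samples):
            g = nx.Graph()
            g.add_nodes_from(graph.nodes())
            g.add_edges_from((u, v) for u, v, d in graph.edges(data=True)
                             if random.random() < d["prob"])
            if nx.has_path(g, source, target):
                connected += 1
        return connected / samples

    def greedy_reliable_subgraph(graph, source, target, max_edges):
        # Greedily add the edge that most improves the reliability
        # estimate until the edge budget is spent (illustrative only;
        # far more efficient extraction algorithms exist).
        sub = nx.Graph()
        sub.add_nodes_from([source, target])
        candidates = list(graph.edges())
        while sub.number_of_edges() < max_edges and candidates:
            best, best_score = None, -1.0
            for u, v in candidates:
                trial = sub.copy()
                trial.add_edge(u, v, prob=graph[u][v]["prob"])
                score = connection_reliability(trial, source, target, samples=200)
                if score > best_score:
                    best, best_score = (u, v), score
            sub.add_edge(*best, prob=graph[best[0]][best[1]]["prob"])
            candidates.remove(best)
        return sub

The extracted subgraph can then be visualised in place of the full graph, which is the use case the thesis highlights.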
Abstract:
Human sport doping control analysis is a complex and challenging task for anti-doping laboratories. The List of Prohibited Substances and Methods, updated annually by the World Anti-Doping Agency (WADA), consists of hundreds of chemically and pharmacologically different low and high molecular weight compounds. This poses a considerable challenge for laboratories, which must analyze them all in a limited amount of time from a limited sample aliquot. The continuous expansion of the Prohibited List obliges laboratories to keep their analytical methods updated and to research newly available methodologies. In this thesis, an accurate mass-based analysis employing liquid chromatography-time-of-flight mass spectrometry (LC-TOFMS) was developed and validated to improve the power of doping control analysis. New analytical methods were developed utilizing the high mass accuracy and high information content obtained by TOFMS to generate comprehensive and generic screening procedures. The suitability of LC-TOFMS for comprehensive screening was demonstrated for the first time in the field, with mass accuracies better than 1 mDa. Further attention was given to generic sample preparation, an essential part of screening analysis, to rationalize the whole workflow and minimize the need for several separate sample preparation methods. Utilizing both positive and negative ionization allowed the detection of almost 200 prohibited substances. Automatic data processing produced a Microsoft Excel based report highlighting the entries fulfilling the criteria of the reverse database search: retention time (RT), mass accuracy, and isotope match. The quantitative performance of LC-TOFMS was demonstrated with morphine, codeine and their intact glucuronide conjugates. After a straightforward sample preparation the compounds were analyzed directly, without the need for hydrolysis, solvent transfer, evaporation or reconstitution. Hydrophilic interaction liquid chromatography (HILIC) provided good chromatographic separation, which was critical for the morphine glucuronide isomers. A wide linear range (50-5000 ng/ml) with good precision (RSD < 10%) and accuracy (±10%) was obtained, showing performance comparable to or better than other methods in use. In-source collision-induced dissociation (ISCID) allowed confirmation analysis with three diagnostic ions, with a median mass accuracy of 1.08 mDa and repeatable ion ratios fulfilling WADA's identification criteria. The suitability of LC-TOFMS for screening high molecular weight doping agents was demonstrated with plasma volume expanders (PVE), namely dextran and hydroxyethyl starch (HES). The specificity of the assay was improved by removing interfering matrix compounds with size exclusion chromatography (SEC). ISCID produced three characteristic ions with an excellent mean mass accuracy of 0.82 mDa at physiological concentration levels. In summary, by combining TOFMS with proper sample preparation and chromatographic separation, the technique can be utilized extensively in doping control laboratories for comprehensive screening of chemically different low and high molecular weight compounds, for quantification of threshold substances and even for confirmation. LC-TOFMS rationalized the workflow in doping control laboratories by simplifying the screening scheme, expediting reporting and minimizing analysis costs. Therefore LC-TOFMS can be exploited widely in doping control, and the need for several separate analysis techniques is reduced.
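As a rough illustration of the reverse database search described above, the Python sketch below flags a detected entry only if it matches a reference compound on all three criteria: retention time, mass accuracy and isotope pattern. The thresholds, field names and data layout are illustrative assumptions, not the values used in the thesis.

    from dataclasses import dataclass

    @dataclass
    class ReferenceCompound:
        name: str
        rt_min: float      # expected retention time (minutes)
        exact_mass: float  # monoisotopic mass (Da)

    def matches(entry, ref, rt_tol=0.2, mass_tol_mda=1.0, isotope_min=0.8):
        # All three criteria must hold: RT window, mass accuracy in mDa,
        # and a minimum isotope pattern match score (thresholds assumed).
        rt_ok = abs(entry["rt_min"] - ref.rt_min) <= rt_tol
        mass_ok = abs(entry["mass"] - ref.exact_mass) * 1000 <= mass_tol_mda
        isotope_ok = entry["isotope_score"] >= isotope_min
        return rt_ok and mass_ok and isotope_ok

    def screen(entries, references):
        # Report every (entry, compound name) pair passing the filter,
        # analogous to the highlighted rows of the screening report.
        return [(e, r.name) for e in entries for r in references if matches(e, r)]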
Abstract:
New chemical entities with unfavorable water solubility properties are continuously emerging in drug discovery. Without pharmaceutical manipulation, insufficient concentrations of these drugs in the systemic circulation are probable. Typically, in order to be absorbed from the gastrointestinal tract, the drug has to be dissolved. Several methods have been developed to improve the dissolution of poorly soluble drugs. In this study, the applicability of different types of mesoporous (pore diameters between 2 and 50 nm) silicon- and silica-based materials as pharmaceutical carriers for poorly water-soluble drugs was evaluated. Thermally oxidized and carbonized mesoporous silicon materials, the ordered mesoporous silicas MCM-41 and SBA-15, and non-treated mesoporous silicon and silica gel were assessed in the experiments. The characteristic properties of these materials are their narrow pore diameters and their large surface areas of up to over 900 m²/g. Loading poorly water-soluble drugs into these pores restricts their crystallization and thus improves drug dissolution from the materials as compared to the bulk drug molecules. In addition, the large surface area provides possibilities for interactions between the loaded substance and the carrier particle, allowing the stabilization of the system. Ibuprofen, indomethacin and furosemide were selected as poorly soluble model drugs in this study. Their solubilities are strongly pH-dependent and poorest (< 100 µg/ml) at low pH values. The pharmaceutical performance of the studied materials was evaluated by several methods. In this work, drug loading was performed successfully using rotavapor and fluid bed equipment, at a larger scale and in a more efficient manner than with the commonly used immersion methods. It was shown that several carrier particle properties, in particular the pore diameter, affect the loading efficiency (typically ~25-40 w-%) and the release rate of the drug from the mesoporous carriers. A wide pore diameter provided easier loading and faster release of the drug. The ordering and length of the pores also affected the efficiency of drug diffusion. However, these properties can also compensate for each other's effects. The surface treatment of porous silicon was important in stabilizing the system, as non-treated mesoporous silicon was easily oxidized at room temperature. Different surface chemical treatments changed the hydrophilicity of the porous silicon materials and also the potential interactions between the loaded drug and the particle, which further affected the drug release properties. In all of the studies, it was demonstrated that loading into mesoporous silicon and silica materials improved the dissolution of the poorly soluble drugs as compared to the corresponding bulk compounds (e.g. after 30 min ~2-7 times more drug was dissolved, depending on the material). The release profile of the loaded substances remained similar after 3 months of storage at 30°C/56% RH. The thermally carbonized mesoporous silicon did not compromise Caco-2 monolayer integrity in the permeation studies, and improved drug permeability was observed. The loaded mesoporous silica materials were also successfully compressed into tablets without compromising their characteristic structural and drug-releasing properties. The results of this research indicate that mesoporous silicon- and silica-based materials are promising for improving the dissolution of poorly water-soluble drugs. Their feasibility in pharmaceutical laboratory-scale processes was also confirmed in this thesis.
Abstract:
XVIII IUFRO World Congress, Ljubljana 1986.
Abstract:
In this paper I will offer a novel understanding of a priori knowledge. My claim is that the sharp distinction that is usually made between a priori and a posteriori knowledge is groundless. It will be argued that a plausible understanding of a priori and a posteriori knowledge has to acknowledge that they are in a constant bootstrapping relationship. It is also crucial that we distinguish between a priori propositions that hold in the actual world and merely possible, non-actual a priori propositions, as we will see when considering cases like Euclidean geometry. Furthermore, contrary to what Kripke seems to suggest, a priori knowledge is intimately connected with metaphysical modality, indeed, grounded in it. The task of a priori reasoning, according to this account, is to delimit the space of metaphysically possible worlds in order for us to be able to determine what is actual.
Abstract:
Usher syndrome (USH) is an inherited blindness and deafness disorder with variable vestibular dysfunction. The syndrome is divided into three subtypes according to the progression and severity of clinical symptoms. The gene mutated in Usher syndrome type 3 (USH3), clarin 1 (CLRN1), was identified in Finland in 2001, and two mutations were identified in Finnish patients at that time. Prior to this thesis study, those two CLRN1 gene mutations were the only USH mutations identified in Finnish USH patients. To further clarify the Finnish USH mutation spectrum, all nine USH genes were studied. Seven mutations were identified: one was a previously known mutation in CLRN1, four were novel mutations in myosin VIIa (MYO7A), and two were a novel and a previously known mutation in usherin (USH2A). Another aim of this thesis research was to further study the structure and function of the CLRN1 gene, and to clarify the effects of mutations on protein function. The search for new splice variants resulted in the identification of eight novel splice variants in addition to the three splice variants already known prior to this study. Studies of the possible promoter regions for these splice variants showed that the most active region comprised the 1000 bases upstream of the translation start site in the first exon of the main three-exon splice variant. The 232 aa CLRN1 protein encoded by the main (three-exon) splice variant was transported to the plasma membrane when expressed in cultured cells. Western blot studies suggested that CLRN1 forms dimers and multimers. The CLRN1 mutant proteins studied were retained in the endoplasmic reticulum (ER), and some of the USH3 mutations caused CLRN1 to be unstable. During this study, two novel CLRN1 sequence alterations were identified, and their pathogenicity was studied with cell culture protein expression. Previous studies with mice had shown that Clrn1 is expressed in mouse cochlear hair cells and spiral ganglion cells, but the expression profile in the mouse retina remained unknown. Clrn1 knockout mice display cochlear cell disruption/death but do not have a retinal phenotype. In the zebrafish (Danio rerio), clrn1 was found to be expressed in hair cells associated with hearing and balance. Clrn1 expression was also found in the inner nuclear layer (INL), photoreceptor layer and retinal pigment epithelium (RPE) of the zebrafish retina. When Clrn1 production was knocked down with injected morpholino oligonucleotides (MOs) targeting Clrn1 translation or correct splicing, the zebrafish larvae showed symptoms similar to those of USH3 patients. These larvae had balance/hearing problems and a reduced response to visual stimuli. The knowledge this thesis research has provided about the mutations in USH genes and the Finnish USH mutation spectrum is important for USH patient diagnostics. The extended information about the structure and function of CLRN1 is a step further in exploring USH3 pathogenesis caused by mutated CLRN1, as well as a step towards finding a cure for the disease.
Abstract:
This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and on their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and also became internationally recognized. However, these works have not received due attention later, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules that permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, Gerhard Gentzen's (the 'father' of proof theory) only student. Correspondence and a hitherto unavailable autobiographical manuscript, in addition to an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempt at a proof of the consistency of arithmetic is reconstructed based on Bernays' lecture notes. Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic that made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) further illuminates his academic achievements and his experiences during the challenging circumstances of the 1940s.
Abstract:
We report on a search for the production of the Higgs boson decaying to two bottom quarks accompanied by two additional quarks. The data sample used corresponds to an integrated luminosity of approximately 4 fb⁻¹ of pp̅ collisions at √s = 1.96 TeV recorded by the CDF II experiment. This search includes twice the integrated luminosity of the previously published result, uses analysis techniques to distinguish jets originating from light flavor quarks from those originating from gluon radiation, and adds sensitivity to a Higgs boson produced by vector boson fusion. We find no evidence of the Higgs boson and place limits on the Higgs boson production cross section for Higgs boson masses between 100 GeV/c² and 150 GeV/c² at the 95% confidence level. For a Higgs boson mass of 120 GeV/c², the observed (expected) limit is 10.5 (20.0) times the predicted standard model cross section.
Abstract:
Nanoclusters are objects made up of several to thousands of atoms and form a transitional state of matter between single atoms and bulk materials. Due to their large surface-to-volume ratio, nanoclusters exhibit exciting and as yet poorly studied size-dependent properties. When deposited directly on bare metal surfaces, the interaction of the cluster with the substrate leads to alteration of the cluster properties, making it less functional or even non-functional. Surfaces modified with self-assembled monolayers (SAMs) have been shown to form an interesting alternative platform, because of the possibility to control wettability by decreasing the surface reactivity and to add functionalities to pre-formed nanoclusters. In this thesis, the underlying size effects and the influence of the nanocluster environment are investigated. The emphasis is on the structural and magnetic properties of nanoclusters and their interaction with thiol SAMs. We report, for the first time, a ferromagnetic-like spin-glass behaviour of uncapped nanosized Au islands tens of nanometres in size. The flattening kinetics of nanocluster deposition on thiol SAMs are shown to be mediated mainly by the thiol terminal group, as well as by the deposition energy and the particle size distribution. On the other hand, a new mechanism for the penetration of deposited nanoclusters through the monolayers is presented, which is fundamentally different from those reported for atom deposition on alkanethiols. The impinging cluster is shown to compress the thiol layer against the Au surface and subsequently intercalate at the thiol-Au interface. The compressed thiols then try to straighten and push the cluster away from the surface. Depending on the cluster size, this restoring force may or may not enable covalent cluster-surface bond formation, giving rise to various cluster-surface binding patterns. Compression and straightening of the thiol molecules highlight the elastic nature of the SAMs, which has been investigated in this thesis using nanoindentation. The nanoindentation method has been applied to SAMs with varied tail groups, giving insight into the mechanical properties of thiol-modified metal surfaces.
Abstract:
The open development model of software production has been characterized as the future model of knowledge production and distributed work. The open development model refers to publicly available source code ensured by an open source license, and to the extensive and varied distributed participation of volunteers enabled by the Internet. Contemporary spokesmen of open source communities and academics view open source development as a new form of volunteer work activity characterized by "hacker ethic" and "bazaar governance". The development of the Linux operating system is perhaps the best-known example of such an open source project. It started as an effort by a user-developer and grew quickly into a large project with hundreds of user-developers as contributors. However, in "hybrids", in which firms participate in open source projects oriented towards end-users, it seems that most users do not write code. The OpenOffice.org project, initiated by Sun Microsystems, represents such a project in this study. In addition, Finnish public sector ICT decision-making concerning open source use is studied. The purpose is to explore the assumptions, theories and myths related to the open development model by analysing the discursive construction of the OpenOffice.org community: its developers, users and management. The qualitative study aims at shedding light on the dynamics and challenges of community construction and maintenance, and the related power relations in hybrid open source, by asking two main research questions: How are the structure and membership constellation of the community, specifically the relation between developers and users, linguistically constructed in hybrid open development? What characterizes Internet-mediated virtual communities and how can they be defined? How do they differ from hierarchical forms of knowledge production on the one hand and from traditional volunteer communities on the other? The study utilizes sociological, psychological and anthropological concepts of community for understanding the connection between the real and the imaginary in so-called virtual open source communities. Intermediary methodological and analytical concepts are borrowed from discourse and rhetorical theories. A discursive-rhetorical approach is offered as a methodological toolkit for studying texts and writing in Internet communities. The empirical chapters approach the problem of community and its membership from four complementary points of view. The data comprise mailing list discussions, personal interviews, web page writings, email exchanges, field notes and other historical documents. The four viewpoints are: 1) the community as conceived by volunteers; 2) the individual contributor's attachment to the project; 3) public sector organizations as users of open source; 4) the community as articulated by the community manager. I arrive at four conclusions concerning my empirical studies (1-4) and two general conclusions (5-6). 1) Sun Microsystems and the OpenOffice.org Groupware volunteers failed to develop the necessary and sufficient open code and open dialogue to ensure collaboration, thus splitting the Groupware community into the volunteers ("we") and the firm ("them"). 2) Instead of separating intrinsic and extrinsic motivations, I find that volunteers' unique patterns of motivation are tied to changing objects and personal histories prior to and during participation in the OpenOffice.org Lingucomponent project. Rather than seeing volunteers as a unified community, they can be better understood as "independent entrepreneurs in search of a collaborative community". The boundaries between work and hobby are blurred and shifting, thus questioning the usefulness of the concept of "volunteer". 3) The public sector ICT discourse portrays a dilemma and tension between the freedom to choose, use and develop one's desktop in the spirit of open source on the one hand, and the striving for better desktop control and maintenance by IT staff and user advocates on the other. The link between the global OpenOffice.org community and local end-user practices is weak and mediated by the problematic IT staff-(end)user relationship. 4) "Authoring community" can be seen as a new hybrid open source community-type of managerial practice. The ambiguous concept of community is a powerful strategic tool for orienting towards multiple real and imaginary audiences, as evidenced in the global membership rhetoric. 5) The changing and contradictory discourses of this study show a change in the conceptual system and developer-user relationship of the open development model. This change is characterized as a movement from hacker ethic and bazaar governance to a more professionally and strategically regulated community. 6) The community is simultaneously real and imagined, and can be characterized as a "runaway community". Discursive action can be seen as a specific type of online open source engagement. Hierarchies and structures are created through discursive acts. Key words: Open Source Software, open development model, community, motivation, discourse, rhetoric, developer, user, end-user
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
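To make the problem statement concrete, here is a minimal Python sketch (not the paper's algorithms) of the simple weighted graph compression setting: a given grouping of nodes into supernodes induces superedge weights, decompression approximates each original edge weight by its superedge weight, and the error is measured here, for simplicity, as the squared error over the original edges only. The grouping, function names and error measure are illustrative assumptions.

    def compress(weights, grouping):
        # weights: dict mapping a node pair (u, v) to its edge weight.
        # grouping: dict mapping each node to its supernode id.
        # A superedge weight is the mean of the original weights it covers.
        sums, counts = {}, {}
        for (u, v), w in weights.items():
            key = tuple(sorted((grouping[u], grouping[v])))
            sums[key] = sums.get(key, 0.0) + w
            counts[key] = counts.get(key, 0) + 1
        return {k: sums[k] / counts[k] for k in sums}

    def approximation_error(weights, grouping, superweights):
        # Squared difference between each original edge weight and the
        # superedge weight that approximates it after decompression.
        return sum((w - superweights[tuple(sorted((grouping[u], grouping[v])))]) ** 2
                   for (u, v), w in weights.items())

    # Usage: a 4-node graph compressed to 2 supernodes.
    w = {("a", "b"): 1.0, ("a", "c"): 3.0, ("b", "d"): 3.2, ("c", "d"): 1.1}
    g = {"a": 0, "b": 0, "c": 1, "d": 1}
    sw = compress(w, g)
    print(sw)                              # {(0, 0): 1.0, (0, 1): 3.1, (1, 1): 1.1}
    print(approximation_error(w, g, sw))   # ~0.02

Choosing the grouping itself, so that this error stays small while the graph shrinks as much as possible, is exactly the optimization the paper addresses.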