Abstract:
Transposable elements (transposons) are discrete DNA segments that are able to move or copy themselves from one locus to another within or between their host genome(s) without a requirement for DNA homology. They are abundant residents of virtually all the genomes studied: for instance, TEs make up approximately 3% of the genome in Saccharomyces cerevisiae, 45% in humans, and apparently more than 70% in some plant genomes, such as those of maize and barley. Transposons play essential roles in genome evolution, in the lateral transfer of antibiotic resistance genes among bacteria, and in the life cycles of certain viruses such as HIV-1 and bacteriophage Mu. Despite their diversity, transposable elements all use a fundamentally similar mechanism, transpositional DNA recombination (transposition), for movement within and between the genomes of their host organisms. The DNA breakage and joining reactions that underlie their transposition are chemically similar in virtually all known transposition systems. This similarity is also reflected in the structure and function of the catalyzing enzymes, transposases and integrases. The transposition reactions take place within the context of a transposition machinery, which can be particularly complex, as in the case of the VLP (virus-like particle) machinery of retroelements, which in vivo contains RNA or cDNA and a number of element-encoded structural and catalytic proteins. Yet the minimal core machinery required for transposition comprises only a multimer of transposase or integrase proteins and their binding sites at the element DNA ends. Although the chemistry of DNA transposition is fairly well characterized, the components and function of the transposition machinery have been investigated in detail for only a small group of elements. This work focuses on the identification, characterization, and functional study of the molecular components of the transposition machineries of BARE-1, Hin-Mu and Mu. For BARE-1 and Hin-Mu, transpositional activity had not been shown previously, whereas bacteriophage Mu is a general model of transposition. For BARE-1, a retroelement of barley (Hordeum vulgare), the protein and DNA components of the functional VLP machinery were identified from cell extracts. For Hin-Mu, a Mu-like prophage in the Haemophilus influenzae Rd genome, the components of the core machinery (the transposase and its binding sites) were characterized and their functionality was studied using an in vitro methodology developed for Mu. The function of the Mu core machinery was studied with respect to its ability to use various DNA substrates: Hin-Mu end-specific DNA substrates and Mu end-specific hairpin substrates. The hairpin processing reaction by MuA was characterized in detail. New information was gained on all three machineries. The components, or at least their activities, required for a functional BARE-1 VLP machinery and retrotransposon life cycle were present in vivo, and VLP-like structures could be detected. The Hin-Mu core machinery components were identified and shown to be functional. The components of the Mu and Hin-Mu core machineries were partially interchangeable, reflecting both evolutionary conservation and flexibility within the core machineries. The Mu core machinery displayed surprising flexibility in substrate usage, as it was able to utilize Hin-Mu end-specific DNA substrates and to process Mu-end DNA hairpin substrates. This flexibility may be evolutionarily and mechanistically important.
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design for soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil C sink was 0.7 Tg C a-1. Soil is slowly accumulating C because the growing stock has increased and soil C stocks are unsaturated relative to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently of opposite sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, because the non-linear decomposition process is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications.
The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
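As a rough illustration of the model-based bookkeeping described above, the sketch below combines a single-pool, first-order litter decomposition model with a Monte Carlo uncertainty analysis of the annual soil C sink. All numbers, the one-pool model, and the error distributions are hypothetical stand-ins; the thesis's actual soil C model uses several litter pools and climate-dependent decomposition rates.

```python
# Minimal sketch: dynamic soil C model + Monte Carlo uncertainty analysis.
import numpy as np

rng = np.random.default_rng(42)

def soil_c_trajectory(litter_input, k, c0):
    """Soil C stock over time under dC/dt = litter - k*C, stepped annually."""
    c, stocks = c0, []
    for litter in litter_input:
        c = c + litter - k * c
        stocks.append(c)
    return np.array(stocks)

# Hypothetical input: rising annual litter production (Tg C a-1) over 80 years.
years = 80
litter = np.linspace(8.0, 12.0, years)

# Sample the uncertain decay rate and litter input errors, then summarize the
# mean annual soil C stock change (the sink) with a 95% confidence interval.
mean_sinks = []
for _ in range(2000):
    k = rng.normal(0.05, 0.01)                           # decay rate (a-1)
    noisy_litter = litter * rng.normal(1.0, 0.1, years)  # ~10% input error
    stocks = soil_c_trajectory(noisy_litter, k, c0=160.0)
    mean_sinks.append(np.diff(stocks).mean())            # annual stock change

lo, hi = np.percentile(mean_sinks, [2.5, 97.5])
print(f"mean soil sink, 95% CI: [{lo:.2f}, {hi:.2f}] Tg C a-1")
```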
Abstract:
The geomagnetic field is one of the most fundamental geophysical properties of the Earth and has significantly contributed to our understanding of the internal structure of the Earth and its evolution. Paleomagnetic and paleointensity data have been crucial in shaping concepts such as continental drift and magnetic reversals, as well as in estimating when the Earth's core and the associated geodynamo processes began. The work of this dissertation is based on reliable Proterozoic and Holocene geomagnetic field intensity data obtained from rocks and archeological artifacts. New archeomagnetic field intensity results are presented for Finland, Estonia, Bulgaria, Italy and Switzerland. The data were obtained using sophisticated laboratory setups as well as various reliability checks and corrections. Inter-laboratory comparisons between three laboratories (Helsinki, Sofia and Liverpool) were performed in order to check the reliability of different paleointensity methods. The new intensity results fill considerable gaps in the master curves for each region investigated. In order to interpret the paleointensity data of the Holocene period, a novel and user-friendly database (GEOMAGIA50) was constructed. This provided a new tool to independently test the reliability of the various techniques and materials used in paleointensity determinations. The results show that archeological artifacts, if well fired, are the most suitable materials. Lavas also yield reliable paleointensity results, although these appear more scattered. This study also shows that reliable estimates are obtained using the Thellier methodology (and its modifications) with reliability checks. Global paleointensity curves for the Paleozoic and Proterozoic have several time gaps with few or no intensity data. To define the global intensity behavior of the Earth's magnetic field during these times, new rock types (meteorite impact rocks) were investigated. Two case histories are presented. The Ilyinets (Ukraine) impact melt rocks yielded a reliable paleointensity value at 440 Ma (Silurian), whereas the results from the Jänisjärvi impact melts (Russian Karelia, ca. 700 Ma) might be biased towards high intensity values because of non-ideal magnetic mineralogy. The features of the geomagnetic field at 1.1 Ga are not well defined due to problems related to reversal asymmetries observed in the Keweenawan data of the Lake Superior region. In this work, new paleomagnetic, paleosecular variation and paleointensity results are reported from coeval diabases from central Arizona; they help in understanding the asymmetry. The results confirm the earlier preliminary observations that the asymmetry is larger in Arizona than in the Lake Superior area. Two of the mechanisms proposed to explain the asymmetry remain plausible: plate motion and non-dipole influence.
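The Thellier-type estimate mentioned above rests on a simple linear relation: in stepwise double-heating experiments, the ancient field strength equals the laboratory field multiplied by the absolute slope of the NRM-remaining versus pTRM-gained (Arai) diagram. Below is a minimal sketch with hypothetical measurement values, omitting the pTRM reliability checks used in practice.

```python
# Minimal sketch of a Thellier-type paleointensity estimate from an Arai plot.
import numpy as np

b_lab = 50.0  # laboratory field, microtesla

# Hypothetical double-heating results at increasing temperature steps:
nrm_remaining = np.array([1.00, 0.85, 0.66, 0.49, 0.30, 0.12])  # normalized
ptrm_gained   = np.array([0.00, 0.18, 0.41, 0.62, 0.85, 1.08])  # normalized

# Linear fit of NRM remaining against pTRM gained; the ancient field is the
# laboratory field scaled by the absolute value of the slope.
slope, intercept = np.polyfit(ptrm_gained, nrm_remaining, 1)
b_ancient = abs(slope) * b_lab
print(f"paleointensity estimate: {b_ancient:.1f} uT")
```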
Abstract:
A model of the information and material activities that comprise the overall construction process is presented, using the SADT activity modelling methodology. The basic model is further refined into a number of generic information handling activities, such as the creation of new information, information search and retrieval, information distribution, and person-to-person communication. The viewpoint could be described as information logistics. This model is then combined with a more traditional building process model consisting of phases such as design and construction. The resulting two-dimensional matrix can be used for positioning different types of generic IT tools or construction-specific applications. The model can thus provide a starting point for discussing the application of information and communication technology in construction and for measuring the impacts of IT on the overall process and its related costs.
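A minimal sketch of how such a two-dimensional positioning matrix might be represented follows; the activity names are taken from the text, while the phase names beyond design and construction and the IT tool examples are purely hypothetical placeholders.

```python
# Sketch: a phases-by-activities matrix with IT tools positioned in the cells.
phases = ["briefing", "design", "construction", "operation"]  # partly hypothetical
activities = [
    "information creation",
    "information search and retrieval",
    "information distribution",
    "person-to-person communication",
]

# matrix[(activity, phase)] -> IT tools positioned in that cell (hypothetical)
matrix: dict[tuple[str, str], list[str]] = {
    ("information creation", "design"): ["CAD system"],
    ("information search and retrieval", "design"): ["product library"],
    ("information distribution", "construction"): ["project web server"],
    ("person-to-person communication", "construction"): ["email"],
}

# Walk the full grid and report the occupied cells.
for activity in activities:
    for phase in phases:
        tools = matrix.get((activity, phase), [])
        if tools:
            print(f"{activity} / {phase}: {', '.join(tools)}")
```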
Abstract:
Despite thirty years of research on interorganizational networks and project business within the industrial networks approach and relationship marketing, the collective capability of networks of business and other interorganizational actors has not been explicitly conceptualized and studied within these approaches. This is despite the fact that both approaches maintain that networking is one of the core strategies for the long-term survival of market actors. Recently, many scholars within these approaches have emphasized that the survival of market actors is based on the strength of their networks and that inter-firm competition is being replaced by inter-network competition. Furthermore, project business is characterized by the building of goal-oriented, temporary networks whose aims, structures, and procedures are clarified and that are governed by processes of interaction as well as recurrent contracts. This study develops frameworks for studying and analysing collective network capability, i.e. collective capability created for the network of firms. The concept is first justified and positioned within the industrial networks, project business, and relationship marketing schools. Conceptual input is drawn eclectically from four major approaches to interorganizational business relationships. The study uses qualitative research and analysis, and the case report analyses the empirical phenomenon using a large number of qualitative techniques: tables, diagrams, network models, matrices, etc. The study shows the high level of uniqueness and complexity of international project business. While the perceived psychic distance between the parties may be small due to previous project experiences and the benefit of existing relationships, a number of critical events develop due to the economic and local context of the recipient country as well as the coordination demands of the large number of actors involved. The study shows that the successful creation of collective network capability led to the success of the network for the studied project. The processes and structures for creating collective network capability are encapsulated in a model of governance factors for interorganizational networks. The theoretical and management implications are summarized in seven propositions. The core implication is that project business success in unique and complex environments is achieved by accessing the capabilities of a network of actors, and project management in such environments should be built on both contractual and cooperative procedures with local recipient country parties.
Abstract:
This study contributes to our knowledge of how information contained in financial statements is interpreted and priced by the stock market in two respects. First, the empirical findings indicate that investors interpret some of the information contained in new financial statements in the context of the information in prior financial statements. Second, two central hypotheses offered in the earlier literature to explain the significant connection between publicly available financial statement information and future abnormal returns, namely that the signals proxy for risk and that the information is priced with a delay, are evaluated using a new methodology. It is found that for some financial statement signals this connection can be explained by the signals proxying for risk, and for other signals by the information contained in them being priced with a delay.
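The abstract does not detail the methodology, but the logic of separating the two hypotheses can be illustrated with a generic regression sketch on synthetic data (this is an illustration, not the thesis's method): if a signal's association with future returns survives a risk control, delayed pricing is the more plausible reading; if it disappears, the signal proxies for risk.

```python
# Illustrative sketch on synthetic data: signal vs. risk-proxy explanation.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta = rng.normal(1.0, 0.3, n)              # risk proxy (e.g., market beta)
signal = 0.8 * beta + rng.normal(0, 1, n)   # signal correlated with risk
ret = 0.05 * beta + rng.normal(0, 1, n)     # returns reward risk only here

def ols_coefs(y, regressors):
    """OLS slope coefficients (intercept dropped from the output)."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# In this synthetic setup the signal's coefficient shrinks toward zero once
# the risk proxy is controlled for, i.e. the risk-proxy explanation holds.
print("signal alone:       ", ols_coefs(ret, [signal]))
print("signal + risk proxy:", ols_coefs(ret, [signal, beta]))
```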
Abstract:
In this paper, we examine the predictability of observed volatility smiles in three major European index options markets, utilising the historical return distributions of the respective underlying assets. The analysis applies the Black (1976) pricing model adjusted in accordance with the methodology proposed by Jarrow and Rudd (1982). We thereby adjust the expected future returns for the third and fourth central moments, as these represent deviations from normality in the distributions of observed returns and are therefore considered one possible explanation for the existence of the smile. The results indicate that including the higher moments in the pricing model reduces the volatility smile to some extent compared with the unadjusted Black-76 model. However, as the smile is partly a function of supply, demand, and liquidity, and as such is intricate to model, this modification does not appear sufficient to fully capture the characteristics of the smile.
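A sketch of the adjustment described above, under stated assumptions rather than as a reproduction of the paper's estimates: the Jarrow-Rudd expansion replaces the lognormal terminal-price density a(s) implicit in Black-76 with an Edgeworth-corrected density, which after integrating the call payoff by parts adds terms proportional to the first and second derivatives of a(s) at the strike, weighted by the third- and fourth-cumulant differences between the empirical and lognormal distributions. The cumulant differences dk3 and dk4, which would in practice be estimated from historical returns, are taken here as given inputs with hypothetical magnitudes.

```python
# Sketch: Black-76 call with Jarrow-Rudd (1982) skewness/kurtosis adjustment.
import numpy as np
from scipy.stats import norm, lognorm

def black76_call(F, K, T, r, sigma):
    """Black (1976) price of a European call on a forward price F."""
    sd = sigma * np.sqrt(T)
    d1 = (np.log(F / K) + 0.5 * sd ** 2) / sd
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d1 - sd))

def jarrow_rudd_call(F, K, T, r, sigma, dk3, dk4):
    """Black-76 call plus third/fourth cumulant-difference correction terms.

    dk3, dk4: cumulant differences (empirical minus lognormal) of the
    terminal price distribution, assumed estimated from historical returns.
    """
    sd = sigma * np.sqrt(T)

    def a(s):  # reference lognormal density with mean F and log-sd sd
        return lognorm.pdf(s, s=sd, scale=F * np.exp(-0.5 * sd ** 2))

    h = 1e-3 * K  # finite-difference step for a'(K) and a''(K)
    da = (a(K + h) - a(K - h)) / (2 * h)
    d2a = (a(K + h) - 2 * a(K) + a(K - h)) / h ** 2
    disc = np.exp(-r * T)
    return (black76_call(F, K, T, r, sigma)
            - disc * dk3 / 6.0 * da
            + disc * dk4 / 24.0 * d2a)

# Hypothetical cumulant differences; negative skewness lifts low strikes.
print(jarrow_rudd_call(F=100, K=90, T=0.5, r=0.03, sigma=0.2,
                       dk3=-300.0, dk4=1000.0))
```

One can then invert Black-76 implied volatilities from such adjusted prices across a range of strikes and compare the resulting curve with the observed smile, which is the kind of comparison the paper performs.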
Abstract:
The open development model of software production has been characterized as the future model of knowledge production and distributed work. The open development model refers to publicly available source code, ensured by an open source license, and to the extensive and varied distributed participation of volunteers enabled by the Internet. Contemporary spokesmen of open source communities and academics view open source development as a new form of volunteer work activity characterized by a 'hacker ethic' and 'bazaar governance'. The development of the Linux operating system is perhaps the best-known example of such an open source project. It started as an effort by a user-developer and grew quickly into a large project with hundreds of user-developers as contributors. However, in 'hybrids', in which firms participate in open source projects oriented towards end-users, it seems that most users do not write code. In this study, the OpenOffice.org project, initiated by Sun Microsystems, represents such a project. In addition, Finnish public sector ICT decision-making concerning open source use is studied. The purpose is to explore the assumptions, theories and myths related to the open development model by analysing the discursive construction of the OpenOffice.org community: its developers, users and management. The qualitative study aims at shedding light on the dynamics and challenges of community construction and maintenance, and the related power relations in hybrid open source, by asking two main research questions: How are the structure and membership constellation of the community, specifically the relation between developers and users, linguistically constructed in hybrid open development? What characterizes Internet-mediated virtual communities, how can they be defined, and how do they differ from hierarchical forms of knowledge production on the one hand and from traditional volunteer communities on the other? The study utilizes sociological, psychological and anthropological concepts of community for understanding the connection between the real and the imaginary in so-called virtual open source communities. Intermediary methodological and analytical concepts are borrowed from discourse and rhetorical theories. A discursive-rhetorical approach is offered as a methodological toolkit for studying texts and writing in Internet communities. The empirical chapters approach the problem of community and its membership from four complementary points of view. The data comprise mailing list discussions, personal interviews, web page writings, email exchanges, field notes and other historical documents. The four viewpoints are: 1) the community as conceived by volunteers; 2) the individual contributor's attachment to the project; 3) public sector organizations as users of open source; and 4) the community as articulated by the community manager. I arrive at four conclusions concerning my empirical studies (1-4) and two general conclusions (5-6). 1) Sun Microsystems and the OpenOffice.org Groupware volunteers failed to develop the necessary and sufficient open code and open dialogue to ensure collaboration, thus splitting the Groupware community into the volunteers ('we') and the firm ('them'). 2) Instead of separating intrinsic and extrinsic motivations, I find that volunteers' unique patterns of motivation are tied to changing objects and personal histories prior to and during participation in the OpenOffice.org Lingucomponent project.
Rather than forming a unified community, volunteers can be better understood as 'independent entrepreneurs in search of a collaborative community'. The boundaries between work and hobby are blurred and shifting, thus questioning the usefulness of the concept of 'volunteer'. 3) The public sector ICT discourse portrays a dilemma and tension between the freedom to choose, use and develop one's desktop in the spirit of open source, on the one hand, and the striving for better desktop control and maintenance by IT staff and user advocates, on the other. The link between the global OpenOffice.org community and local end-user practices is weak and mediated by the problematic relationship between IT staff and (end-)users. 4) 'Authoring' the community can be seen as a new type of managerial practice in hybrid open source communities. The ambiguous concept of community is a powerful strategic tool for orienting towards multiple real and imaginary audiences, as evidenced in the global membership rhetoric. 5) The changing and contradictory discourses of this study show a change in the conceptual system and developer-user relationship of the open development model. This change is characterized as a movement from hacker ethic and bazaar governance towards a more professionally and strategically regulated community. 6) The community is simultaneously real and imagined, and can be characterized as a 'runaway community'. Discursive action can be seen as a specific type of online open source engagement. Hierarchies and structures are created through discursive acts. Keywords: Open Source Software, open development model, community, motivation, discourse, rhetoric, developer, user, end-user
Abstract:
Road transport and infrastructure are of fundamental importance to the developing world. Poor quality and inadequate coverage of roads, lack of maintenance operations and outdated road maps continue to hinder economic and social development in developing countries. This thesis focuses on studying the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya. The study forms part of the TAITA project of the Department of Geography, University of Helsinki. The road infrastructure of the study area is studied with a remote sensing and GIS based methodology. As the principal dataset, true-colour airborne digital camera data from 2004 were used to generate an aerial image mosaic of the study area. Auxiliary data include SPOT satellite imagery from 2003, field spectrometry data of road surfaces, and relevant literature. Road infrastructure characteristics are interpreted for three test sites using pixel-based supervised classification, object-oriented supervised classification and visual interpretation; the road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the most appropriate of the assessed data and techniques. The spectral characteristics and reflectance of various road surfaces are examined using the acquired field spectra and relevant literature, and the findings are compared with the results of the tested road mapping methods. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate despite the high spatial resolution of the image mosaic used in this thesis. Of all the methods tested in this thesis, visual interpretation is the most straightforward, accurate and valid technique for road mapping. Certain road surfaces have spectral characteristics and reflectance values similar to other land cover and land use, which particularly affects digital analysis techniques. Road mapping is further complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads in relation to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limitations. In particular, unclassified rural roads require updated road mapping schemes to improve road transport possibilities and to assist in the development of the developing world.
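Of the digital techniques tested, pixel-based supervised classification is the simplest to sketch. The following minimal illustration uses hypothetical arrays standing in for the mosaic and the training areas; as the thesis notes, spectral confusion between road surfaces and other land cover limits the accuracy of this approach in practice.

```python
# Sketch: pixel-based supervised classification of a true-colour mosaic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins: an RGB mosaic (rows x cols x 3) and a label raster
# (0 = unlabelled, 1 = road, 2 = vegetation) digitized over training areas.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
labels = np.zeros((256, 256), dtype=np.uint8)
labels[100:105, 50:200] = 1   # hypothetical road training strokes
labels[150:200, 20:80] = 2    # hypothetical vegetation training area

# Each pixel becomes one sample with its three band values as features.
X = image.reshape(-1, 3).astype(np.float32)
y = labels.ravel()
train = y > 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[train], y[train])

# Classify every pixel and reshape back into a thematic map; road pixels
# could then be vectorized into centrelines, as done for the object-oriented
# classification results in the thesis.
thematic = clf.predict(X).reshape(256, 256)
print("pixels classified as road:", int((thematic == 1).sum()))
```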