28 results for Dynamic efficiency

in Helda - Digital Repository of the University of Helsinki


Relevance: 30.00%

Publisher:

Abstract:

Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. Both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. It is therefore of interest to examine the effect of production technology and of changes in input and output prices on feeding and slaughter decisions. This study examines the problem using a dynamic programming model that links the genetic characteristics of a pig to feeding decisions and the timing of slaughter, and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth mechanism of a pig under optional feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of a pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes carcass quality into account while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that the producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefit from obtaining pigs of improved genotype can be more than €20 per capacity unit, and the annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype reach optimal slaughter maturity more quickly and produce leaner meat than animals of poor genotype. In order to utilise the benefits of animal breeding fully, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results suggest that the producer can benefit from flexible feeding technology, which segregates pigs into groups according to their weight, carcass leanness, genotype and sex, and thereafter optimises feeding and slaughter decisions separately for each group. Typically, such a technology provides incentives to feed piglets with protein-rich feed so that the genetic potential to produce leaner meat is fully utilised. As the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. In particular, an increase in the price of pig meat provides an incentive to raise growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in the slaughter premium can also have large income effects.
Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
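The abstract does not reproduce the model's equations; purely as an illustration, a backward-recursion dynamic program over discretised weight states might look like the following sketch, in which the weight grid, growth function, prices and feed costs are all invented placeholders rather than the study's actual specification.

```python
# Minimal backward-recursion sketch of a feeding/slaughter dynamic program.
# All states, prices and the growth function are hypothetical placeholders.

WEIGHTS = list(range(30, 131, 5))          # live-weight states, kg
HORIZON = 20                               # weekly decision stages
DIETS = {"energy": 1.00, "protein": 1.15}  # assumed feed-cost multipliers
FEED_COST = 0.25                           # assumed EUR per kg of gain

def growth(weight, diet):
    """Assumed weekly gain; protein-rich feed grows leaner but slower."""
    base = 7.0 if weight < 90 else 5.0
    return base * (0.9 if diet == "protein" else 1.0)

def carcass_value(weight):
    """Assumed quality-adjusted value; best price inside a target window."""
    price_per_kg = 1.6 if 76 <= weight <= 92 else 1.3
    return 0.74 * weight * price_per_kg    # 0.74 = assumed carcass yield

def solve():
    # V[t][w] = value of a pig of weight w at stage t under optimal decisions.
    V = {HORIZON: {w: carcass_value(w) for w in WEIGHTS}}
    policy = {}
    for t in range(HORIZON - 1, -1, -1):
        V[t], policy[t] = {}, {}
        for w in WEIGHTS:
            best = ("slaughter", carcass_value(w))    # option 1: slaughter now
            for diet, mult in DIETS.items():          # option 2: feed one week
                gain = growth(w, diet)
                w_next = min(WEIGHTS, key=lambda x: abs(x - (w + gain)))
                value = V[t + 1][w_next] - FEED_COST * mult * gain
                if value > best[1]:
                    best = (f"feed:{diet}", value)
            policy[t][w], V[t][w] = best
    return V, policy

V, policy = solve()
print(policy[0][30], round(V[0][30], 2))   # first-stage action, 30 kg piglet
```

The recursion compares the value of slaughtering now with the value of feeding one more period under each diet, which is the same trade-off the abstract describes between feeding costs and quality-adjusted carcass value.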

Relevance: 20.00%

Publisher:

Abstract:

This academic work begins with a compact presentation of the general background to the study, including an autobiographical account of the author's interest in this research. The presentation provides readers who know little of the topic with an overview of the structure of the educational system and of the value given to education in Nigeria. It then concentrates on the dynamic interplay between academic and professional qualification and teachers' job effectiveness in secondary schools in Nigeria in particular, and in Africa in general. The aim of this study is to produce a systematic analysis and a rich theoretical and empirical description of teachers' teaching competencies. The theoretical part comprises a comprehensive literature review that focuses on research conducted in the areas of academic and professional qualification and teachers' job effectiveness, teaching competencies, and the role of teacher education, with particular emphasis on school effectiveness and improvement. This research benefits greatly from the functionalist conception of education, which is built upon two emphases: the application of the scientific method to the objective social world, and the use of an analogy between the individual 'organism' and 'society'. To this end, it offers an opportunity to define terms systematically and to view problems as always being interrelated with other components of society. The empirical part describes and interprets what educational objectives can be achieved with the help of teachers' teaching competencies, in close connection to educational planning and teacher training and development, and how to achieve them without waste. The data used in this study were collected between 2002 and 2003 from teachers, principals, and supervisors of education from the Ministry of Education and the Post Primary Schools Board in the Rivers State of Nigeria (N=300). The data were collected through interviews, documents, observation, and questionnaires and were analyzed using both qualitative and quantitative methods to strengthen the validity of the findings. The data were analyzed to answer the specific research questions and hypotheses posited in this study, using multiple statistical procedures: Percentages and Mean Point Value, T-test of Significance, One-Way Analysis of Variance (ANOVA), and Cross Tabulation. The results show that teachers require professional knowledge and professional teaching skills, as well as a broad base of general knowledge (e.g., morality, service, cultural capital, institutional survey). Above all, in order to carry out instructional processes effectively, teachers should be both academically and professionally trained. This study revealed that teachers are not, however, expected to have an extraordinary memory, but rather are looked upon as persons capable of thinking in the right direction. This study may provide a solution to the problem of teacher education and school effectiveness in Nigeria. For this reason, I offer this treatise to anyone seriously committed to improving schools in developing countries in general, and in Nigeria in particular, in order to improve the lives of all its citizens.
In particular, I write this to encourage educational planners, education policy makers, curriculum developers, principals, teachers, and students of education interested in empirical information and methods to conceptualize the issue this study has raised, and to provide them with useful suggestions to help them improve secondary schooling in Nigeria. Multiple audiences, though, exist for any text, and I trust that the academic community will find this piece of work a useful addition to the existing literature on school effectiveness and school improvement. By integrating concepts from a number of disciplines, I aim to describe as holistic a representation as space allows of the components of school effectiveness and quality improvement. I also offer a new perspective on teachers' professional competencies, one that not only takes into consideration the unique characteristics of the variables used in this study but also acknowledges their environmental and cultural derivation. In addition, researchers should focus their attention on the ways in which both professional and non-professional teachers construct and apply their methodological competencies, such as their grouping procedures and behaviors, to the schooling of students.
Keywords: Professional Training, Academic Training, Professionally Qualified, Academically Qualified, Professional Qualification, Academic Qualification, Job Effectiveness, Job Efficiency, Educational Planning, Teacher Training and Development, Nigeria.
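The abstract lists the statistical procedures but not the software used; purely as an illustration of two of them, an independent-samples t-test and a one-way ANOVA on invented teacher-effectiveness scores could be run as follows (group names and values are hypothetical).

```python
# Illustrative t-test and one-way ANOVA on invented effectiveness scores;
# the study's actual data are not reproduced here.
from scipy import stats

professionally_and_academically = [72, 68, 75, 80, 77, 74]
academically_only = [65, 70, 62, 68, 66, 71]
unqualified = [55, 60, 58, 62, 57, 59]

# T-test of significance between two qualification groups.
t, p = stats.ttest_ind(professionally_and_academically, academically_only)
print(f"t = {t:.2f}, p = {p:.3f}")

# One-way analysis of variance (ANOVA) across all three groups.
f, p = stats.f_oneway(professionally_and_academically, academically_only,
                      unqualified)
print(f"F = {f:.2f}, p = {p:.3f}")
```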

Relevance: 20.00%

Publisher:

Abstract:

Androgens control a variety of developmental processes that create the male phenotype and are important for maintaining male fertility and the normal functions of tissues and organs that are not directly involved in procreation. The androgen receptor (AR), which mediates the biological actions of androgens, is a member of the nuclear receptor superfamily of ligand-inducible transcription factors. Although AR was cloned over 15 years ago, the mechanisms by which it regulates gene expression are not well understood. A growing body of in vitro experimental evidence suggests that a complex network of proteins is involved in androgen-dependent transcriptional regulation. However, the process of AR-dependent transcriptional regulation under physiological conditions remains largely elusive. In the present study, a series of experiments, including quantitative chromatin immunoprecipitation (ChIP) assays, was performed to investigate the AR-mediated transcription process in living prostate cancer cells. Our results show that the loading of AR and the recruitment of coactivators and RNA polymerase II (Pol II) to both the promoter and the enhancer of AR target genes are transient and cyclic events that, in addition to hyperacetylation, involve dynamic changes in the methylation and phosphorylation of core histone H3 in androgen-treated LNCaP cells. The dynamics of testosterone (T)-induced loading of AR onto the proximal promoters of the genes clearly differed from that onto the distal enhancers. Significantly more holo-AR was loaded onto the enhancers than onto the promoters, but the principal Pol II transcription complex was assembled on the promoters. By contrast, the pure antiandrogen bicalutamide (CDX) complexed to AR elicited occupancy of the PSA promoter, but was unable to load onto the PSA enhancer and was incapable of recruiting Pol II and coactivators or of triggering the ensuing covalent histone modifications. The partial antagonists cyproterone acetate (CPA) and mifepristone (RU486) were capable of promoting AR loading onto both the PSA promoter and enhancer with an efficiency comparable to androgen in LNCaP cells expressing mutant AR. However, CPA- and RU486-bound AR not only recruited Pol II and the coactivators p300 and GRIP1 onto the promoter and enhancer, but also recruited the corepressor NCoR onto the promoter as efficiently as CDX. In addition, we demonstrate that both the proteasome and protein kinases are implicated in AR-mediated transcription. Even though the proteasome inhibitor MG132 and the protein kinase inhibitor DRB (5,6-dichlorobenzimidazole riboside) block ligand-dependent accumulation of PSA mRNA with the same efficiency, their use results in different molecular profiles in terms of the formation of the AR-mediated transcriptional complex. Collectively, these results indicate that transcriptional activation by AR is a complicated process, which includes transient loading of holo-AR and recruitment of Pol II and coregulators accompanied by a cascade of distinct covalent histone modifications. This process involves both the promoter and enhancer elements, as well as general components of the cell machinery such as the proteasome and protein kinases. The pure antiandrogen CDX and the partial antagonists CPA and RU486 exhibit clearly different profiles in terms of their ability to induce the formation of AR-dependent transcriptional complexes and the histone modifications associated with the target genes in human prostate cancer cells.
Finally, by using quantitative RT-PCR to compare the expression of sixteen AR co-regulators in prostate cancer cell lines, xenografts, and clinical prostate cancer specimens, we suggest that the AR co-regulators protein inhibitor of activated STAT1 (PIAS1) and steroid receptor coactivator 1 (SRC1) could be involved in the progression of prostate cancer.
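The abstract does not state how the quantitative ChIP signals were derived; a common convention, assumed here for illustration rather than taken from the thesis, is to express immunoprecipitated DNA as a percentage of input chromatin computed from qPCR cycle thresholds (Ct values).

```python
# Percent-input calculation for quantitative ChIP-qPCR (a standard convention
# assumed here; not necessarily the thesis' exact procedure).
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percentage of input chromatin recovered by the IP, from Ct values."""
    # A 1% input aliquot under-represents total chromatin by log2(100) cycles.
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Ct values for AR occupancy of the PSA enhancer, +/- testosterone.
print(f"vehicle:      {percent_input(ct_ip=30.5, ct_input=25.0):.3f} % of input")
print(f"testosterone: {percent_input(ct_ip=27.2, ct_input=25.1):.3f} % of input")
```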

Relevance: 20.00%

Publisher:

Abstract:

Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict blood collection volumes so that cellular blood components can be provided in a timely manner, as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost-driver and should be managed efficiently. International comparisons of blood banking could reveal inefficiencies and allow the reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' departments of blood component preparation. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002 and were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The comparisons focused on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 among the centres (I). Significant variation was also observed in the annual volume of red blood cells (RBCs) and PLTs produced. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and showed similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) out of all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour as the only input.
In cost efficiency alone, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of technical efficiency in component preparation departments revealed remarkable variation. A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency was not directly related to production volume. Evaluating the reasons for discarding components may offer a novel approach to studying efficiency. DEA proved applicable in analyses that include various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and to promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
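The thesis applied several DEA models with different input/output combinations; the exact formulations are not given in the abstract, so the following is a generic input-oriented, constant-returns-to-scale (CCR) envelopment sketch with invented department data.

```python
# Input-oriented, constant-returns-to-scale (CCR) DEA sketch with invented
# data; the thesis used several DEA model variants, not necessarily this one.
import numpy as np
from scipy.optimize import linprog

# Hypothetical departments: input = working hours; outputs = RBC and PLT units.
hours = np.array([50_000, 120_000, 30_000, 80_000])
rbc = np.array([60_000, 150_000, 25_000, 110_000])
plt = np.array([10_000, 40_000, 3_000, 20_000])
X = np.vstack([hours])        # (n_inputs, n_units)
Y = np.vstack([rbc, plt])     # (n_outputs, n_units)
n = X.shape[1]

def ccr_efficiency(j0):
    """min theta s.t. a peer mix uses <= theta * unit j0's inputs and
    produces >= its outputs; decision variables are [theta, lambda_1..n]."""
    c = np.r_[1.0, np.zeros(n)]                    # objective: minimise theta
    A_in = np.c_[-X[:, [j0]], X]                   # X@lam - theta*x0 <= 0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]   # -Y@lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

for j in range(n):
    print(f"department {j}: efficiency = {ccr_efficiency(j):.2f}")
```

An efficiency of 1.0 marks a department on the frontier; a value such as 0.87 means the same output mix could, in principle, be produced with 87% of the observed input, which is the "savings potential" discussed above.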

Relevance: 20.00%

Publisher:

Abstract:

Nitrogen (N) is one of the main inputs in cereal cultivation, and as more than half of the arable land in Finland is used for cereal production, N has contributed substantially to agricultural pollution through fertilizer leaching and runoff. In response to this global phenomenon, the European Community has launched several directives to reduce agricultural emissions to the environment. Through such measures, and by using economic incentives, it is expected that northern European agricultural practices will, in the future, include reduced N fertilizer application rates. Reduced use of N fertilizer is likely to decrease both production costs and pollution, but could also result in reduced yields and quality if crops experience temporary N deficiency. Therefore, more efficient N use in cereal production, to minimize pollution risks and maximize farmer income, represents a current challenge for agronomic research in the northern growing areas. The main objective of this study was to determine the differences in nitrogen use efficiency (NUE) among spring cereals grown in Finland. Additional aims were to characterize the multiple roles of NUE by analysing the extent of variation in NUE and its component traits among different cultivars, and to understand how other physiological traits, especially radiation use efficiency (RUE) and light interception, affect and interact with the main components of NUE and contribute to differences among cultivars. This study included cultivars of barley (Hordeum vulgare L.), oat (Avena sativa L.) and wheat (Triticum aestivum L.). Field experiments were conducted between 2001 and 2004 at Jokioinen, in Finland. To determine differences in NUE among cultivars and gauge the achievements of plant breeding in NUE, 17-18 cultivars of each of the three cereal species, released between 1909 and 2002, were studied. The responses to nitrogen of landraces, old cultivars and modern cultivars of each cereal species were evaluated under two N regimes (0 and 90 kg N ha-1). The results revealed that modern wheat, oat and barley cultivars had similar NUE values under Finnish growing conditions, and only results from a wider range of cultivars indicated that wheat cultivars could have lower NUE than the other species. There was a clear relationship between nitrogen uptake efficiency (UPE) and NUE in all species, whereas nitrogen utilization efficiency (UTE) had a strong positive relationship with NUE only for oat. UTE was clearly lower in wheat than in the other species. Other traits related to N translocation indicated that wheat also had a lower harvest index, nitrogen harvest index and nitrogen remobilisation efficiency, and its N translocation efficiency was therefore confirmed to be very low. On the basis of these results there appears to be potential, and also a need, for improvement in NUE. These results may help in understanding the underlying physiological differences in NUE and could help to identify alternative production options, such as the different roles that species can play in crop rotations designed to meet the demands of modern agricultural practices.
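The component traits named here follow the standard partition of NUE into uptake and utilization efficiency, NUE = UPE × UTE; a small worked example with invented plot values shows the arithmetic.

```python
# Standard partition of nitrogen use efficiency into uptake and utilization
# components (plot values invented for illustration).

grain_yield = 4200.0   # kg grain ha-1
n_supply = 90.0        # kg N ha-1 available to the crop
n_uptake = 70.0        # kg N ha-1 actually taken up

NUE = grain_yield / n_supply   # kg grain per kg N supplied
UPE = n_uptake / n_supply      # nitrogen uptake efficiency
UTE = grain_yield / n_uptake   # nitrogen utilization efficiency

assert abs(NUE - UPE * UTE) < 1e-9   # NUE = UPE * UTE by construction
print(f"NUE = {NUE:.1f}, UPE = {UPE:.2f}, UTE = {UTE:.1f}")
```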

Relevance: 20.00%

Publisher:

Abstract:

The purpose of this study was to evaluate intensity, productivity and efficiency in agriculture in Finland and to show the implications for N and P fertiliser management. Environmental concerns relating to agricultural production have been, and still are, the focus of arguments about policies that affect agriculture. These policies constrain production while demand for agricultural products such as food, fibre and energy continuously increases. The importance of increasing productivity is therefore a great challenge for agriculture. Over the last decades producers have experienced several large changes in the production environment, such as the policy reform when Finland joined the EU in 1995. Further market changes occurred with the EU enlargement to neighbouring countries in 2005 and with the decoupling of supports over the 2006-2007 period. Decreasing prices, a decreasing number of farmers and decreasing profitability in agricultural production have resulted from these changes and constraints and from technological development. It was known that the accession to the EU in 1995 would herald changes in agriculture. Of special interest was how the sudden changes in commodity prices, especially those of cereals, which decreased by 60%, would influence agricultural production. Knowledge of the properties of the production function increased in importance as a consequence of the price changes. Research on economic instruments to regulate production was carried out and combined with earlier studies in paper V. In paper I the objective was to compare two different technologies, conventional farming and organic farming, and to determine differences in productivity and technical efficiency. In addition, input-specific or environmental efficiencies were analysed. The heterogeneity of agricultural soils and its implications were analysed in article II. In study III the determinants of technical inefficiency were analysed. The aspects and possible effects of instability in policies due to a partial decoupling of production factors and products were studied in paper IV; consequently, the connection between technical efficiency based on turnover and on sales returns was analysed in that study. Simple economic instruments such as fertiliser taxes have a direct effect on fertiliser consumption and indirectly increase the value of organic fertilisers. However, fertiliser taxes do not address the N and P management problems adequately and are therefore not suitable for nutrient management improvements in general. The productivity of organic farms is lower on average than that of conventional farms, and the difference increases when looking at selling returns only. The organic sector needs more research and development on productivity. Livestock density in organic farming increases productivity; however, there is an upper limit to livestock densities on organic farms, and nutrients on organic farms are therefore also limited. Soil factors affect phosphorus and nitrogen efficiency. Soils such as sand and silt have a lower input-specific overall efficiency for the nutrients N and P, and special attention is needed in the management of these soils. Clay soils and soils with a moderate clay content have higher efficiency. Soil heterogeneity is a cause of unavoidable inefficiency in agriculture.
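The production functions used in the papers are not given in the abstract, but the direct effect of a fertiliser tax mentioned here can be illustrated with a stylised profit-maximisation sketch; the quadratic yield response and all prices below are assumptions, not study data.

```python
# Stylised illustration of how a fertiliser tax lowers the optimal N rate.
# The quadratic yield response and all prices are assumptions, not study data.

def optimal_n(price_grain, price_n, tax, b=25.0, c=0.07):
    """N rate maximising price_grain*(b*N - c*N^2) - (price_n + tax)*N.

    Setting the derivative to zero gives
    N* = (b - (price_n + tax) / price_grain) / (2c), truncated at zero."""
    return max(0.0, (b - (price_n + tax) / price_grain) / (2.0 * c))

for tax in (0.0, 0.3, 0.6):   # EUR per kg N
    n_star = optimal_n(price_grain=0.12, price_n=1.0, tax=tax)
    print(f"tax {tax:.1f} EUR/kg N -> optimal rate {n_star:.0f} kg N/ha")
```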

Relevance: 20.00%

Publisher:

Abstract:

This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research problem is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore deviates from a random walk. In order to find out the extent of predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfill the requirements of a random walk. A market review supports the conclusion, but it is clear that the maturing of the market is still in progress.
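As a minimal sketch of the two diagnostics named above, the lag-1 autocorrelation coefficient and a variance ratio can be computed as follows; the series here is simulated rather than the actual EUA prices, and the published test statistics (e.g. with bias and heteroskedasticity corrections) are more elaborate.

```python
# Sketch of the two random-walk diagnostics, applied to a simulated series
# (not the actual EUA data).
import numpy as np

rng = np.random.default_rng(0)
log_p = np.cumsum(rng.normal(0.0, 0.02, 1000))   # simulated log prices
r = np.diff(log_p)                               # log returns

# Lag-1 autocorrelation coefficient: ~0 under a random walk, with an
# approximate 95% band of +/- 1.96/sqrt(n).
rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]
print(f"rho(1) = {rho1:+.3f} (band +/-{1.96 / np.sqrt(len(r)):.3f})")

def variance_ratio(returns, q):
    """Var of overlapping q-period returns over q * Var of 1-period returns;
    ~1 under a random walk, <1 under mean reversion, >1 under persistence."""
    rq = np.convolve(returns, np.ones(q), mode="valid")
    return rq.var(ddof=1) / (q * returns.var(ddof=1))

for q in (2, 5, 10):
    print(f"VR({q}) = {variance_ratio(r, q):.3f}")
```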

Relevance: 20.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by using a motor to rotate the nozzle spraying the carbon dioxide. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well for the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique.
The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
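The data analysis programs are described only at the feature level; the core transform behind any GC×GC 2D plot, folding the raw one-dimensional detector trace at the modulation period, can be sketched as follows (the acquisition rate, modulation period and signal are invented).

```python
# Core data transform behind a GCxGC 2D plot: fold the raw 1D detector trace
# at the modulation period (synthetic signal; timing values invented).
import numpy as np

acq_rate = 100.0     # detector points per second (assumed)
mod_period = 4.0     # modulation period, seconds (assumed)
rng = np.random.default_rng(1)
signal = rng.random(int(600 * acq_rate)) * 0.01   # 10 min of baseline noise
signal[24_650] = 1.0                              # one synthetic analyte spike

pts_per_mod = int(mod_period * acq_rate)          # 400 points per modulation
n_mods = len(signal) // pts_per_mod
# Columns: first-dimension retention (modulation number);
# rows: second-dimension retention within one modulation.
plane = signal[: n_mods * pts_per_mod].reshape(n_mods, pts_per_mod).T

# The apex of a 2D peak yields both retention times at once.
i2, i1 = np.unravel_index(plane.argmax(), plane.shape)
print(f"apex at 1D = {i1 * mod_period:.0f} s, 2D = {i2 / acq_rate:.2f} s")
```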

Relevance: 20.00%

Publisher:

Abstract:

In Finland one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication, but also from other environmental impacts. Most of the factors deteriorating the water quality in Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the conduction of sewage waters from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study focused on small, human-impacted lakes that are located close to settlement areas and have significant value for the local population. The aim of this thesis was to obtain information from lake sediment records for on-going lake restoration activities and to prove that a well-planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target consideration and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine the main focus was on sediment mapping, as well as on the long-term changes in sedimentation, which were compared to Lake Pääjärvi. In Lake Hormajärvi the roles of different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have influenced the lake drastically, and the changes caused by the metal load were investigated. The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but also suffer problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water level changes, which were investigated. The methods were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed descriptions of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies prove that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thoroughgoing mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom dynamic zones, considering the inlets and outlets as well as the effects of possible point loaders of the lake.
In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem. The results also show that the information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities and improves the cost-efficiency of the project.
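The abstract reports that diatom-inferred total phosphorus was reconstructed but does not name the transfer function; weighted averaging is a common choice, and a toy version, with invented taxon optima and counts, looks like this.

```python
# Toy weighted-averaging reconstruction of diatom-inferred total phosphorus.
# Taxon TP optima and counts are invented; the thesis' transfer function
# may differ.

# Assumed TP optima (ug P / L) from a hypothetical training set of lakes.
tp_optimum = {"Aulacoseira": 35.0, "Cyclotella": 12.0, "Fragilaria": 22.0}

def diatom_inferred_tp(counts):
    """Abundance-weighted mean of taxon TP optima for one sediment sample."""
    total = sum(counts.values())
    return sum(tp_optimum[taxon] * n for taxon, n in counts.items()) / total

surface_sample = {"Aulacoseira": 180, "Cyclotella": 40, "Fragilaria": 80}
pre_impact_sample = {"Aulacoseira": 30, "Cyclotella": 190, "Fragilaria": 80}
print(f"surface:    {diatom_inferred_tp(surface_sample):.1f} ug/L")
print(f"pre-impact: {diatom_inferred_tp(pre_impact_sample):.1f} ug/L")
```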

Relevance: 20.00%

Publisher:

Abstract:

Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. The computationalization and informationalization of everyday activities increase not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. Drawing on several long-term field studies, the usage of the system is analyzed in light of how users make inferences about others based on real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but more generally to study phenomena previously unstudied, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to allow them to use the data themselves, rather than just being passive subjects of data gathering.
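The thesis' locationing algorithm is characterised here only by its privacy property, namely that no location data leaves the terminal; one way to satisfy that property, assumed purely for illustration and not the thesis' actual method, is a weighted-centroid estimate computed on the device against a locally stored cell-tower database.

```python
# Purely illustrative on-device positioning: a weighted centroid over the
# cells the terminal currently hears, using a local tower database, so no
# observation ever leaves the device. (NOT the thesis' actual algorithm.)

# Local database: cell id -> (lat, lon) of the tower (coordinates invented).
local_cell_db = {
    "244-91-1001": (60.170, 24.940),
    "244-91-1002": (60.172, 24.952),
    "244-91-1003": (60.165, 24.948),
}

def estimate_position(observations):
    """observations: cell id -> signal strength in dBm (stronger = closer).

    Weights are linear in signal strength above an assumed -110 dBm floor."""
    num_lat = num_lon = denom = 0.0
    for cell, dbm in observations.items():
        lat, lon = local_cell_db[cell]
        w = max(1.0, dbm + 110.0)
        num_lat, num_lon, denom = num_lat + w * lat, num_lon + w * lon, denom + w
    return num_lat / denom, num_lon / denom

print(estimate_position({"244-91-1001": -75, "244-91-1002": -95,
                         "244-91-1003": -90}))
```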

Relevance: 20.00%

Publisher:

Abstract:

The removal of non-coding sequences, introns, is an essential part of messenger RNA processing. In most metazoan organisms, the U12-type spliceosome processes a subset of introns containing highly conserved recognition sequences. U12-type introns constitute less than 0.5% of all introns and reside preferentially in genes related to information processing functions, as opposed to genes encoding metabolic enzymes. It has previously been shown that the excision of U12-type introns is inefficient compared to that of U2-type introns, supporting the model that these introns could provide rate-limiting control of gene expression. The low efficiency of U12-type splicing is believed to have important consequences for gene expression by limiting the production of mature mRNAs from genes containing U12-type introns. The inefficiency of U12-type splicing has been attributed to the low abundance of the components of the U12-type spliceosome in cells, but this hypothesis has not been proven. The aim of the first part of this work was to study the effect of the abundance of the spliceosomal snRNA components on splicing. Cells with a low abundance of the U12-type spliceosome were found to process U12-type introns encoded by a transfected construct inefficiently, but the expression levels of endogenous genes were not found to be affected by the abundance of the U12-type spliceosome. However, significant levels of endogenous unspliced U12-type intron-containing pre-mRNAs were detected in cells. Together these results support the idea that U12-type splicing may limit gene expression in some situations. The inefficiency of U12-type splicing has also promoted the idea that the U12-type spliceosome may control gene expression by limiting the mRNA levels of some U12-type intron-containing genes. While the identities of the primary target genes that contain U12-type introns are relatively well known, little has previously been known about the downstream genes and pathways potentially affected by the efficiency of U12-type intron processing. Here, the effects of U12-type splicing efficiency on a whole organism were studied in a Drosophila line carrying a mutation in an essential U12-type spliceosome component. Genes containing U12-type introns showed variable gene-specific responses to the splicing defect, which points to variation in the susceptibility of different genes to changes in splicing efficiency. Surprisingly, microarray screening revealed that metabolic genes were enriched among the downstream effects, and that the phenotype could largely be attributed to one U12-type intron-containing mitochondrial gene. Gene expression control by the U12-type spliceosome could thus have widespread effects on metabolic functions in the organism. The subcellular localization of the U12-type spliceosome components was studied in response to a recent dispute over the localization of the U12-type spliceosome. All components studied were found to be nuclear, indicating that the processing of U12-type introns occurs within the nucleus, thus clarifying a question central to the field. The results suggest that the U12-type spliceosome can limit the expression of genes that contain U12-type introns in a gene-specific manner. Through its limiting role in pre-mRNA processing, U12-type splicing activity can affect specific genetic pathways, which in the case of Drosophila are involved in metabolic functions.

Relevance: 20.00%

Publisher:

Abstract:

The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The famous Kyoto protocol is an offspring of this process. The Kyoto protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology by using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design for soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil C sink was 0.7 Tg C a-1. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated relative to the current detritus input to soil, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently of opposite sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy owing to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these factors have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data because of the non-linear decomposition process, which is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications.
The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
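The dynamic soil C model is not specified in the abstract beyond its role; a one-pool exponential-decay caricature (real litter-decomposition models use several pools and climate-dependent rates) nevertheless shows why an unsaturated stock accumulates C when detritus input rises, as described above.

```python
# One-pool caricature of a dynamic soil C model: annual litter input enters
# the soil stock, which decomposes exponentially. Rates and inputs are
# invented; the actual model has multiple litter/humus pools.

def simulate_soil_c(litter_input, k=0.05, c0=60.0):
    """Yearly update C[t+1] = C[t]*(1 - k) + litter[t]; returns the stock path.

    litter_input: iterable of annual litter C (national scale, Tg C a-1 here).
    k: assumed annual decomposition fraction; c0: assumed initial stock.
    The stock relaxes toward the steady state litter/k."""
    stocks = [c0]
    for litter in litter_input:
        stocks.append(stocks[-1] * (1.0 - k) + litter)
    return stocks

# A slowly rising detritus input, as the abstract describes, pushes the
# unsaturated stock upward, so the soil acts as a modest sink.
path = simulate_soil_c([2.5 + 0.01 * t for t in range(80)])
print(f"start {path[0]:.1f}, end {path[-1]:.1f}, "
      f"final-year change {path[-1] - path[-2]:+.2f} per year")
```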