30 results for First Nucleotide Change technology

in Helda - Digital Repository of University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

Landscape is shaped by the natural environment and, increasingly, by human activity. In landscape ecology, the concept of landscape can be defined as a kilometre-scale mosaic formed by different land-use types. In the Helsinki Metropolitan Region, the landscape change caused by urbanization has accelerated since the 1950s. Prior to that, the landscape of the region was shaped mainly by agriculture. In addition to describing the landscape change, the goal of this study was to discuss the factors driving the change and to evaluate its landscape ecological impacts. Three study areas at different distances from the Helsinki city centre were chosen in order to examine the landscape change. The study areas were the Malmi, Espoo and Mäntsälä regions, representing different parts of the urban-to-rural gradient in 1955, 1975, 1990 and 2009. Land use was then digitized from the maps into five classes: agricultural lands, semi-natural grasslands, built areas, waters and others, using GIS methods. First, landscape change was studied using landscape ecological indices. The indices used were PLAND, i.e. the proportions of the different land-use types in the landscape; MPS, SHEI and SHDI, which describe fragmentation and heterogeneity of the landscape; and MSI and ED, which are measures of patch shape. Second, landscape change was studied statistically in relation to the topography, soil and urban structure of the study areas. The indicators used concerning urban structure were the number of residents, car ownership and travel-related zones of urban form, which indicate the degree of urban sprawl within the study areas. For the statistical analyses, each of the 9.25 x 9.25 km study areas was further divided into grids with a resolution of 0.25 x 0.25 km. Third, the changes in the green structure of the study areas were evaluated. The landscape change reflected by the proportions of the land-use types was most notable in the Malmi area, where a large amount of agricultural land was developed from 1955 to 2009. The proportion of semi-natural grasslands also showed an interesting pattern in relation to urbanization. When urbanization started, a great number of agricultural lands were abandoned and turned into semi-natural grasslands, but as urbanization accelerated, the number of semi-natural grasslands started to decline because of urban densification. Landscape fragmentation and heterogeneity were most pronounced in the Espoo study area, not only because of the great differences in relative heights within the region but also because of its location in the rural-urban fringe. According to the results, urbanization induced agricultural lands to become more regular in shape both spatially and temporally, whereas for built areas and semi-natural grasslands the impact of urbanization was the reverse. Changes in the landscape were smallest in the most rural study area, Mäntsälä. In Mäntsälä, built area per resident showed the greatest values, indicating widespread urban sprawl. The values were smallest in the highly urbanized Malmi study area. Unlike in the other study areas, in Mäntsälä the proportion of developing land in the ecologically disadvantageous car-dependent zone was on the increase. On the other hand, the green structure of the Mäntsälä study area was the most advantageous, whereas the Malmi study area showed the most ecologically disadvantageous structure.
Considering all the landscape ecological criteria used, the landscape structure of the Espoo study area proved to be the best, not least because of the great heterogeneity of its landscape. Thus the study confirmed previous results according to which landscape heterogeneity is greatest in areas exposed to a moderate human impact.
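Of the indices named in this abstract, PLAND and the Shannon-based measures have standard definitions: PLAND is the percentage of the landscape occupied by each class, SHDI = -Σ pᵢ ln pᵢ, and SHEI = SHDI / ln m for m classes. As a minimal illustration (a generic sketch, not code from the thesis; the class codes and toy grid are invented), they can be computed from a categorical land-use raster as follows:

```python
# Generic sketch of PLAND, SHDI and SHEI for a categorical land-use raster.
import numpy as np

def landscape_indices(raster: np.ndarray) -> dict:
    """Compute PLAND per class and landscape-level SHDI/SHEI."""
    classes, counts = np.unique(raster, return_counts=True)
    p = counts / counts.sum()                       # proportion per class
    pland = {int(c): 100.0 * pi for c, pi in zip(classes, p)}  # PLAND in %
    shdi = -np.sum(p * np.log(p))                   # Shannon diversity index
    shei = shdi / np.log(len(classes)) if len(classes) > 1 else 0.0  # evenness
    return {"PLAND": pland, "SHDI": float(shdi), "SHEI": float(shei)}

# Toy 4x4 grid with hypothetical codes 1=agricultural, 2=grassland, 3=built
grid = np.array([[1, 1, 2, 3],
                 [1, 1, 2, 3],
                 [2, 2, 3, 3],
                 [1, 3, 3, 3]])
print(landscape_indices(grid))
```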

Relevance:

30.00%

Publisher:

Abstract:

The surface properties of solid-state pharmaceuticals are of critical importance. Processing modifies the surfaces and affects surface roughness, which influences the performance of the final dosage form on many different levels. Surface roughness has an effect on, e.g., the properties of powders, tablet compression and tablet coating. The overall goal of this research was to understand the surface structures of pharmaceutical surfaces. In this context, the specific purpose was to compare four different analysis techniques (optical microscopy, scanning electron microscopy (SEM), laser profilometry and atomic force microscopy (AFM)) in various pharmaceutical applications where the surfaces have quite different roughness scales. This was done by comparing the image and roughness analysis techniques using powder compacts, coated tablets and crystal surfaces as model surfaces. It was found that optical microscopy was still a very efficient technique, as it yielded information that SEM and AFM imaging were not able to provide. Roughness measurements complemented the image data and gave quantitative information about height differences. AFM roughness data represent the roughness of only a small part of the surface, and therefore other methods, such as laser profilometry, are needed to provide a larger-scale description of the surface. The newly developed roughness analysis method visualised surface roughness by giving detailed roughness maps, which showed local variations in surface roughness values. The method was able to provide a picture of the surface heterogeneity and the scale of the roughness. In the coating study, the laser profilometer results showed that the increase in surface roughness was largest during the first 30 minutes of coating, when the surface was not yet fully covered with coating. The SEM images and the dispersive X-ray analysis results showed that the surface was fully covered with coating within 15 to 30 minutes. The combination of the different measurement techniques made it possible to follow the change in surface roughness and the development of the polymer coating. The optical imaging techniques gave a good overview of processes affecting the whole crystal surface, but they lacked the resolution to see small nanometre-scale processes. AFM was used to visualize the nanoscale effects of cleaving and to reveal the full surface heterogeneity underlying the optical imaging. Ethanol washing changed the small (nanoscale) structure to some extent, but the effect of ethanol washing on the larger scale was small. Water washing caused a total reformation of the surface structure at all levels.
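The general idea behind such a roughness map can be sketched under simple assumptions (this is an illustration, not the analysis method developed in the thesis): an RMS roughness value is computed in small local windows of a height image, so that spatial variations in roughness become visible.

```python
# Sketch of a local roughness map: RMS roughness (Sq) per local window.
import numpy as np

def roughness_map(height: np.ndarray, win: int = 8) -> np.ndarray:
    """Local RMS roughness over non-overlapping win x win windows."""
    h, w = height.shape
    out = np.zeros((h // win, w // win))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = height[i*win:(i+1)*win, j*win:(j+1)*win]
            out[i, j] = np.sqrt(np.mean((patch - patch.mean()) ** 2))  # Sq
    return out

# Synthetic example: a surface that gets rougher from left to right
rng = np.random.default_rng(0)
surface = rng.normal(scale=np.linspace(0.1, 1.0, 128), size=(128, 128))
print(roughness_map(surface)[0].round(2))  # roughness increases along the row
```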

Relevance:

30.00%

Publisher:

Abstract:

Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory. Nevertheless, in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, like temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. In another study it was demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized, because access is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.

Relevance:

30.00%

Publisher:

Abstract:

This study addresses four issues concerning technological product innovations. First, the nature of the very early phases or "embryonic stages" of technological innovation is addressed. Second, this study analyzes why and by what means people initiate innovation processes outside the technological community and the field of expertise of the established industry. In other words, this study addresses the initiation of innovation that occurs without the expertise of established organizations, such as technology firms, professional societies and research institutes operating in the technological field under consideration. Third, the significance of interorganizational learning processes for technological innovation is dealt with. Fourth, this analysis is supplemented by considering how network collaboration and learning change when formalized product development work and the commercialization of innovation advance. These issues are addressed through empirical analysis of the following three product innovations: Benecol margarine, the Nordic Mobile Telephone system (NMT) and the ProWellness Diabetes Management System (PDMS). This study utilizes the theoretical insights of cultural-historical activity theory on the development of human activities and learning. Activity-theoretical conceptualizations are used in the critical assessment and advancement of the concept of networks of learning. This concept was originally proposed by the research group of the organizational scientist Walter Powell. A network of learning refers to interorganizational collaboration that pools resources, ideas and know-how without market-based or hierarchical relations. The concept of an activity system is used in defining the nodes of the networks of learning. Network collaboration and learning are analyzed with regard to the shared object of development work. According to this study, enduring dilemmas and tensions in activity explain the participants' motives for carrying out actions that lead to novel product concepts in the early phases of technological innovation. These actions comprise the initiation of development work outside the relevant fields of expertise, and collaboration and learning across fields of expertise in the absence of market-based or hierarchical relations. These networks of learning are fragile and impermanent. This study suggests that the significance of networks of learning across fields of expertise is becoming increasingly crucial for innovation activities.

Relevance:

30.00%

Publisher:

Abstract:

Studying the continuity and underlying mechanisms of temperament change from early childhood through adulthood is clinically and theoretically relevant. Knowledge of the continuity and change of temperament from infancy onwards, especially as perceived by both parents, is, however, still scanty. Only in recent years have researchers become aware that personality, long considered stable in adulthood, may also change. Further, studies that focus on the transactional change of child temperament and parental personality also seem to be lacking, as are studies focusing on transactions between child temperament and more transient parental characteristics, like parental stress. Therefore, this longitudinal study examined the degree of continuity of temperament over five years, from the infant's age of six months to the child's age of five and a half years, as perceived by both biological parents, and also investigated the bidirectional effects between child temperament and parents' personality traits and overall stress experienced during that time. First, moderate to high levels of continuity of temperament from infancy to middle childhood were shown, depicting the developmental links between affectively positive and well-adjusted temperament characteristics, and between characteristics of early and later negative affectivity. The continuity of temperament was quantitatively and qualitatively similar in both parents' ratings. The findings also demonstrate that infant and childhood temperament characteristics cluster to form stable temperament types that resemble personality types shown in child and adult personality studies. Second, the parental personality traits of extraversion and neuroticism were shown to be highly stable over five years, but evidence of change in relation to parents' views of their child's temperament was also shown: an infant's higher positive affectivity predicted an increase in parental extraversion, while the infant's higher activity level predicted a decrease in parental neuroticism over five years. Furthermore, initially higher parental extraversion predicted higher ratings of the child's effortful control, while initially higher parental neuroticism predicted the child's higher negative affectivity. In terms of changes in parental stress, the infant's higher activity level predicted a decrease in maternal overall stress, while initially higher maternal stress predicted a higher level of child negative affectivity in middle childhood. Together, the results demonstrate that the mother- and father-rated temperament of the child shows continuity during the early years of life, but also support the view that the development of these characteristics is sensitive to important contextual factors such as parental personality and overall stress. While parental personality and experienced stress were shown to have an effect on the child's developing temperament, the reverse was also true: the parents' own personality traits and perceived stress seemed to be highly stable, but also susceptible to their experiences of their child's temperament.

Relevance:

30.00%

Publisher:

Abstract:

In the present work, the effects of stimulus repetition and change in a continuous stimulus stream on the processing of somatosensory information in the human brain were studied. Human scalp-recorded somatosensory event-related potentials (ERPs) and magnetoencephalographic (MEG) responses rapidly diminished with stimulus repetition when mechanical or electric stimuli were applied to fingers. On the contrary, when the ERPs and multi-unit activity (MUA) were recorded directly from the primary (SI) and secondary (SII) somatosensory cortices in a monkey, there was no marked decrement in the somatosensory responses as a function of stimulus repetition. These results suggest that this rate effect is not due to response diminution in the SI and SII cortices. Evidently, the responses to the first stimulus after a long "silent" period are enhanced due to unspecific initial orientation, originating in more broadly distributed and/or deeper neural structures, perhaps in the prefrontal cortices. With fast repetition rates, not only the late unspecific but also some early specific somatosensory ERPs were diminished in amplitude. The fast decrease of the ERPs as a function of stimulus repetition is mainly due to the disappearance of the orientation effect and, with faster repetition rates, additively due to stimulus-specific refractoriness. A sudden infrequent change in the continuous stimulus stream also enhanced somatosensory MEG responses to electric stimuli applied to different fingers. These responses were quite similar to those elicited by the deviant stimuli alone when the frequent standard stimuli were omitted. This enhancement was evidently due to release from refractoriness, because the neural structures generating the responses to the infrequent deviants had more time to recover from refractoriness than the respective structures for the standards. Infrequent deviant mechanical stimuli among frequent standard stimuli also enhanced somatosensory ERPs and, in addition, elicited a new negative wave which did not occur in the deviants-alone condition. This extra negativity could be recorded in response to deviations in the stimulation site and in the frequency of the vibratory stimuli. This response is probably a somatosensory analogue of the auditory mismatch negativity (MMN), which has been suggested to reflect a neural mismatch process between the sensory input and the sensory memory trace.

Relevance:

30.00%

Publisher:

Abstract:

Strategies of scientific, question-driven inquiry are considered important cultural practices that should be taught in schools and universities. The present study focuses on investigating multiple efforts to implement a model of Progressive Inquiry and related Web-based tools in primary, secondary and university level education, in order to develop guidelines for educators in promoting students' collaborative inquiry practices with technology. The research consists of four studies. In Study I, the aims were to investigate how a human tutor contributed to the university students' collaborative inquiry process through virtual forums, and how the influence of the tutoring activities is demonstrated in the students' inquiry discourse. Study II examined an effort to implement technology-enhanced progressive inquiry as a distance working project in a middle school context. Study III examined multiple teachers' methods of organizing progressive inquiry projects in primary and secondary classrooms through a generic analysis framework. In Study IV, a design-based research effort consisting of four consecutive university courses, applying progressive inquiry pedagogy, was retrospectively re-analyzed in order to develop the generic design framework. The results indicate that appropriate teacher support for students' collaborative inquiry efforts appears to include an interplay between spontaneity and structure. Careful consideration should be given to content mastery, critical working strategies or essential knowledge practices that the inquiry approach is intended to promote. In particular, those elements in students' activities should be structured and directed which are central to the aim of Progressive Inquiry, but which the students do not recognize or demonstrate spontaneously, and which are usually not taken into account in existing pedagogical methods or educational conventions. Such elements are, e.g., productive co-construction activities; sustained engagement in improving produced ideas and explanations; critical reflection on the adopted inquiry practices; and sophisticated use of modern technology for knowledge work. Concerning the scaling-up of inquiry pedagogy, it was concluded that one individual teacher can also apply the principles of Progressive Inquiry in his or her own teaching in many innovative ways, even under various institutional constraints. The developed Pedagogical Infrastructure Framework enabled recognizing and examining some central features and their interplay in the designs of the examined inquiry units. The framework may help to recognize and critically evaluate the invisible learning-cultural conventions in various educational settings and can mediate discussions about how to overcome or change them.

Relevance:

30.00%

Publisher:

Abstract:

This study examines supervisors' emerging new role in the technical customer service and home customers division of a large Finnish telecommunications corporation. The data of the study come from a second-generation knowledge management project, an intervention research effort, which was conducted for supervisors of the division. The study exemplifies how supervision work is transforming in a high-technology organization characterized by a high speed of change in technologies, products, and grass-roots work practices. The intervention research was conducted in the division during spring 2000. The primary analyzed data consist of six two-hour video-recorded intervention sessions. The unit of analysis was collective learning actions. The researcher first wrote conversation transcripts from the video-recorded meetings and then analyzed this qualitative data using an analytical schema based on collective learning actions. The supervisors' role is conceptualized as an actor of a collective and dynamic activity system, based on ideas from cultural-historical activity theory. On knowledge management, the researcher has taken a second-generation knowledge management viewpoint, following ideas from cultural-historical activity theory and developmental work research. Second-generation knowledge management considers knowledge to be embedded and constructed in collective practices, such as innovation networks or communities of practice (the supervisors' work community), which have the capacity to create new knowledge. The analysis and illustration of supervisors' emerging new role is conceptualized in this framework using methodological ideas derived from activity theory and developmental work research. The major findings of the study show that supervisors' emerging new role in a high-technology telecommunication organization characterized by a high speed of discontinuous change in technologies, products, and grass-roots practices cannot be defined or characterized using a normative management role/model. Their role is expanding in two dimensions: (1) socially, and (2) in new knowledge and work practices. The expansion in the organization and the inter-organizational network (social expansion) creates pressure to manage a network of co-operation partners and subordinates. On the other hand, the ever faster change in technological solutions, new products, and novel customer wants (expansion in knowledge) creates pressure for supervisors to quickly innovate new work practices to manage this change. Keywords: activity theory, knowledge management, developmental work research, supervisors, high technology organizations, telecommunication organizations, second-generation knowledge management, competence laboratory, intervention research, learning actions.

Relevance:

30.00%

Publisher:

Abstract:

The leading cause of death in the Western world continues to be coronary heart disease (CHD). At the root of the disease process is dyslipidemia: an aberration in the relative amounts of circulating blood lipids. Cholesterol builds up in the arterial wall, and following rupture of these plaques, myocardial infarction or stroke can occur. Heart disease runs in families, and a number of hereditary forms are known. Presently, however, the leading cause of adult dyslipidemia is overweight and obesity. This thesis work presents an investigation of the molecular genetics of common, hereditary dyslipidemia and the tightly related condition of obesity. Familial combined hyperlipidemia (FCHL) is the most common hereditary dyslipidemia in man, with an estimated population prevalence of 1-6%. This complex disease is characterized by elevated levels of serum total cholesterol, triglycerides or both, and is observed in about 20% of individuals with premature CHD. Our group identified the disease to be associated with genetic variation in the USF1 transcription factor gene. USF1 has a key role in regulating other genes that control lipid and glucose metabolism as well as the inflammatory response, all central processes in the progression of atherosclerosis and CHD. The first two works of this thesis aimed at understanding how these USF1 variants result in increased disease risk. Among the many non-coding single-nucleotide polymorphisms (SNPs) associated with the disease, one was found to have a functional effect. The risk-enhancing allele of this SNP seems to eradicate the ability of the important hormone insulin to induce the expression of USF1 in peripheral tissues. The resultant changes in the expression of numerous USF1 target genes over time probably enhance and accelerate the atherogenic processes. Dyslipidemias often represent an outcome of obesity, and in the final work of this thesis we wanted to address the metabolic pathways related to acquired obesity. It is recognized that active processes in adipose tissue play an important role in the development of dyslipidemia, insulin resistance and other pathological conditions associated with obesity. To minimize the confounding effects of genetic differences present in most human studies, we investigated a rare collection of identical twins that differed significantly in the amount of body fat. In the obese, but otherwise healthy, young adults, several notable changes were observed. In addition to chronic inflammation, the adipose tissue of the obese co-twins was characterized by a marked (47%) decrease in the amount of mitochondrial DNA (mtDNA), a change associated with mitochondrial dysfunction. The catabolism of branched-chain amino acids (BCAAs) was identified as the most down-regulated process in the obese co-twins. A concordant increase in the serum level of these insulin secretagogues was identified. This hyperaminoacidemia may provide the feedback signal from insulin-resistant adipose tissue to the pancreas to ensure an appropriately augmented secretory response. The down-regulation of BCAA catabolism correlated closely with liver fat accumulation and insulin. The single most up-regulated gene (5.9-fold) in the obese co-twins was osteopontin (SPP1), a cytokine involved in macrophage recruitment to adipose tissue. SPP1 is here implicated as an important player in the development of insulin resistance.
These studies of exceptional study samples provide a better understanding of the underlying pathology in common dyslipidemias and other obesity-associated diseases, which is important for the future improvement of intervention strategies and treatments to combat atherosclerosis and coronary heart disease.

Relevance:

30.00%

Publisher:

Abstract:

The United States is the world's single biggest market area, where the demand for graphic papers has increased by 80% during the last three decades. However, during the last two decades there have been very big, unpredictable changes in the graphic paper markets. For example, the consumption of newsprint started to decline from the late 1980s, which was surprising compared to the historical consumption and projections. Consumption has declined ever since. The aim of this study was to see how magazine paper consumption will develop in the United States until 2030. The long-term consumption projection was made using mainly two methods. The first method was to use trend analysis to see if and how consumption has changed since 1980. The second method was qualitative estimation. These estimates are then compared to the so-called classical model projections, which are usually mentioned and used in forestry literature. The purpose of the qualitative analysis is to study magazine paper end-use purposes and to analyze how, and with what intensity, changes in society will affect magazine paper consumption in the long term. The framework of this study covers theories such as technology adaptation, electronic substitution, electronic publishing and Porter's threat of substitution. Because this study deals with markets which have shown signs of structural change, a very substantial part of this study covers recent development and the newest available studies and statistics. The following were among the key findings of this study. Different end-uses have very different kinds of future. Electronic substitution is very likely in some end-use purposes, but not in all. Young people, i.e. future consumers, have very different manners, habits and technological opportunities than their parents did. These will have substantial effects on magazine paper consumption in the long term. This study concludes that the change in magazine paper consumption is more likely to be gradual (evolutionary) than a sudden collapse (revolutionary). It is also probable that the years of fast-growing consumption of magazine papers are behind us. Besides the decelerated growth, the consumption of magazine papers will decline slowly in the long term. The further into the future the projection is extended, the faster the decline will be.
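The first method, trend extrapolation, can be sketched in a few lines (the consumption figures below are invented for illustration and are not the study's data). Fitting over different periods also shows how a long historical trend can mask a recent turn, which is exactly why the study complements it with qualitative analysis:

```python
# Illustrative trend analysis: fit a linear time trend and extrapolate to 2030.
import numpy as np

years = np.array([1980, 1985, 1990, 1995, 2000, 2005, 2010])
cons  = np.array([4.0, 5.2, 6.5, 7.4, 8.0, 7.8, 7.0])   # Mt, hypothetical

for label, sl in (("1980-2010", slice(None)), ("2000-2010", slice(4, None))):
    slope, intercept = np.polyfit(years[sl], cons[sl], 1)  # linear trend
    print(label, f"trend {slope:+.3f} Mt/yr ->",
          f"2030 projection {slope * 2030 + intercept:.1f} Mt")
```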

Relevance:

30.00%

Publisher:

Abstract:

This thesis discusses prehistoric human disturbance during the Holocene by means of case studies using detailed high-resolution pollen analysis of lake sediment. The four lakes studied are situated between latitudes 61°40' and 61°50' in the Finnish Karelian inland area and vary between 2.4 and 28.8 ha in size. The existence of an Early Metal Age population was one important question. Another study question concerned the development of grazing, and the relationship between slash-and-burn cultivation and permanent field cultivation. The results were presented as pollen percentages and pollen concentrations (grains cm⁻³). Accumulation values (grains cm⁻² yr⁻¹) were calculated for the Lake Nautajärvi and Lake Orijärvi sediment, where the sediment accumulation rate was precisely determined. Sediment properties were determined using loss-on-ignition (LOI) and magnetic susceptibility (k). Dating methods used include both conventional and AMS ¹⁴C determinations, paleomagnetic dating and varve chronology. The isolation of Lake Kirjavalampi on the northern shore of Lake Ladoga took place ca. 1460–1300 BC. The long sediment cores, from Lake Kirkkolampi and Lake Orijärvi in southeastern Finland and Lake Nautajärvi in south-central Finland, all extended back to the Early Holocene; the lakes were isolated from the Baltic basin ca. 9600 BC, 8600 BC and 7675 BC, respectively. In the long sediment cores, the expansion of Alnus was visible between 7200 and 6840 BC. The spread of Tilia was dated in Lake Kirkkolampi to 6600 BC, in Lake Orijärvi to 5000 BC and in Lake Nautajärvi to 4600 BC. Picea is present locally in Lake Kirkkolampi from 4340 BC, in Lake Orijärvi from 6520 BC and in Lake Nautajärvi from 3500 BC onwards. The first modifications in the pollen data, apparently connected to anthropogenic impacts, were dated to the beginning of the Early Metal Period, 1880–1600 BC. Anthropogenic activity became clear in all the study sites by the end of the Early Metal Period, between 500 BC and AD 300. According to Secale pollen, slash-and-burn cultivation was practised around the eastern study lakes from AD 300–600 onwards, and at the study site in central Finland from AD 880 onwards. The overall human impact, however, remained low at the studied sites until the Late Iron Age. Increasing human activity, including an increase in fire frequency, was detected from AD 800–900 onwards at the study sites in eastern Finland. At Lake Kirkkolampi, this included cultivation on permanent fields, but at Lake Orijärvi, permanent field cultivation became visible as late as AD 1220, even though the macrofossil data demonstrated the onset of cultivation on permanent fields as early as the 7th century AD. On the northern shore of Lake Ladoga, local activity became visible from ca. AD 1260 onwards, and in the Lake Nautajärvi sediment the local occupation was traceable from AD 1420 onwards. The highest values of Secale pollen were recorded both in Lake Orijärvi and Lake Kirjavalampi between ca. AD 1700–1900, and could be associated with the most intensive period of slash-and-burn cultivation, from AD 1750 to 1850, in eastern Finland.
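The accumulation values mentioned above follow a simple unit relation: pollen influx (grains cm⁻² yr⁻¹) equals pollen concentration (grains cm⁻³) multiplied by the sediment accumulation rate (cm yr⁻¹). A one-line sketch with invented numbers, not data from the thesis:

```python
# Worked illustration of pollen influx (accumulation rate) arithmetic.
def pollen_influx(concentration_grains_per_cm3: float,
                  sed_rate_cm_per_yr: float) -> float:
    """Pollen accumulation rate in grains cm^-2 yr^-1."""
    return concentration_grains_per_cm3 * sed_rate_cm_per_yr

# Example: 50,000 grains cm^-3 in sediment accumulating at 0.1 cm/yr
print(pollen_influx(50_000, 0.1))  # -> 5000.0 grains cm^-2 yr^-1
```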

Relevance:

30.00%

Publisher:

Abstract:

The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment, and hence wages, and leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and the replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
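As background to these essays, the textbook search-matching steady state (a generic sketch, not the dissertation's vintage or New Keynesian models; parameter values are invented) can be written in a few lines: with a Cobb-Douglas matching function m(u, v) = A·u^α·v^(1-α) and tightness θ = v/u, the job-finding rate is f(θ) = A·θ^(1-α), and flow equilibrium s(1-u) = f(θ)·u pins down unemployment.

```python
# Generic steady-state unemployment in a search-matching model.
def steady_state_unemployment(s: float, theta: float,
                              A: float = 0.6, alpha: float = 0.5) -> float:
    """u = s / (s + f(theta)) with job-finding rate f = A * theta^(1-alpha)."""
    f = A * theta ** (1.0 - alpha)   # job-finding rate per period
    return s / (s + f)

# Example: faster job destruction (higher separation rate s) raises unemployment
for s in (0.02, 0.04):
    print(s, round(steady_state_unemployment(s, theta=0.7), 3))
```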

Relevance:

30.00%

Publisher:

Abstract:

Transposons are mobile elements of genetic material that are able to move in the genomes of their host organisms using a special form of recombination called transposition. Bacteriophage Mu was the first transposon for which a cell-free in vitro transposition reaction was developed. Subsequently, the reaction has been refined, and the minimal Mu in vitro reaction is useful in the generation of comprehensive libraries of mutant DNA molecules that can be used in a variety of applications. To date, the functional genetics applications of Mu in vitro technology have been applied either to plasmids or to genomic regions and entire virus genomes cloned into specific vectors. This study expands the use of Mu in vitro transposition in functional genetics and genomics by describing novel methods applicable to the targeted transgenesis of the mouse and the whole-genome analysis of bacteriophages. The methods described here are rapid, efficient, and easily applicable to a wide variety of organisms, demonstrating the potential of Mu transposition technology in the functional analysis of genes and genomes. First, an easy-to-use, rapid strategy to generate constructs for the targeted mutagenesis of mouse genes was developed. To test the strategy, a gene encoding a neuronal K+/Cl- cotransporter was mutagenised. After a highly efficient transpositional mutagenesis, the mutagenised gene fragments were cloned into a vector backbone and transferred into bacterial cells. These constructs were screened with PCR using an effective 3D matrix system. In addition to traditional knock-out constructs, the method developed yields hypomorphic alleles that lead to reduced expression of the target gene in transgenic mice and have since been used in a follow-up study. Moreover, a scheme is devised to rapidly produce conditional alleles from the constructs produced. Next, an efficient strategy for the whole-genome analysis of bacteriophages was developed, based on the transpositional mutagenesis of uncloned, infective virus genomes and their subsequent transfer into susceptible host cells. Mutant viruses able to produce viable progeny were collected and their transposon integration sites determined, to map genomic regions nonessential to the viral life cycle. This method, applied here to three very different bacteriophages, PRD1, ΦYeO3-12, and PM2, does not require the target genome to be cloned and is directly applicable to all DNA and RNA viruses that have infective genomes. The method developed yielded valuable novel information on the three bacteriophages studied, and the whole-genome data can be complemented with concomitant studies on individual genes. Moreover, the end-modified transposons constructed for this study can be used to manipulate genomes devoid of suitable restriction sites.
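The abstract mentions screening constructs by PCR with a 3D matrix system. The general idea of such matrix pooling can be sketched generically (the layout and counts below are hypothetical, not the thesis protocol): clones arrayed in a plate × row × column grid are pooled along each axis, so a single positive clone is located with P + R + C reactions instead of P × R × C clone-by-clone PCRs.

```python
# Generic sketch of 3D matrix pooling for a PCR screen.
from itertools import product

P, R, C = 4, 8, 12                     # hypothetical plate/row/column counts
positive = (2, 5, 7)                   # the (unknown) positive clone

def pcr(pool):                         # stand-in for a real PCR readout
    return positive in pool

plate_pools = [{(p, r, c) for r, c in product(range(R), range(C))} for p in range(P)]
row_pools   = [{(p, r, c) for p, c in product(range(P), range(C))} for r in range(R)]
col_pools   = [{(p, r, c) for p, r in product(range(P), range(R))} for c in range(C)]

hit = (next(p for p in range(P) if pcr(plate_pools[p])),
       next(r for r in range(R) if pcr(row_pools[r])),
       next(c for c in range(C) if pcr(col_pools[c])))
print(hit)                             # -> (2, 5, 7) with P+R+C = 24 reactions
```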

Relevance:

30.00%

Publisher:

Abstract:

Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy; it aims at closing the loop of materials and substances and at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect emissions change. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as the economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (IPAT identity) was applied as a method in papers I-III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions was analysed in the Finnish energy sector over a long time period, 1950-2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of the growth in affluence and population. Carbon dioxide (CO2) emissions related to energy use during 1993-2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993-2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980-2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge of industrial ecology, the environmental impact of the growing population size and affluence should be compensated by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential.
In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, the limits of the biological and physical properties of cultured fish, among others, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts. Besides improving technology and dematerialization, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed, replacing imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid the outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included. In seeking to examine consumption and also international trade in more detail, imports were included in paper III. This example demonstrates well how the outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
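The ImPACT identity named above factors emissions into population, affluence, intensity of use and efficiency, so that a change in emissions decomposes into the product of factor-wise ratios between two years. A minimal sketch of this identity with invented numbers (the exact factor definitions in papers I-III may differ in detail):

```python
# Minimal ImPACT/IPAT sketch: emissions = P * A * C * T.
def impact(P, gdp, consumption, emissions):
    """Return the four ImPACT factors; their product equals emissions."""
    A = gdp / P                    # affluence: GDP per capita
    C = consumption / gdp          # economic intensity of use
    T = emissions / consumption    # emissions per unit of service used
    return P, A, C, T

# Decompose emission change between two years as a product of ratios:
# Im1/Im0 = (P1/P0) * (A1/A0) * (C1/C0) * (T1/T0)
f0 = impact(P=5.0e6, gdp=100e9, consumption=400e9, emissions=60e6)
f1 = impact(P=5.2e6, gdp=140e9, consumption=430e9, emissions=55e6)
ratios = [b / a for a, b in zip(f0, f1)]
print([round(r, 3) for r in ratios])   # product equals 55/60 ~ 0.917
```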

Relevance:

30.00%

Publisher:

Abstract:

Objective: Glucocorticoid therapy is used worldwide to treat various inflammatory and immune conditions, including inflammatory bowel disease (IBD). In IBD, 80% of patients obtain a positive response to the therapy; however, the development of glucocorticoid-related side-effects is common. Our aim was therefore to study the possibility of optimizing glucocorticoid therapy in children and adolescents with IBD by measuring circulating glucocorticoid bioactivity (GBA) and serum glucocorticoid-responsive biomarkers in patients receiving steroid treatment for active disease. Methods: A total of sixty-nine paediatric IBD patients from the Paediatric Outpatient Clinics of the University Hospitals of Helsinki and Tampere participated in the studies. Control patients included 101 non-IBD patients and 41 disease controls in remission. In patients with active disease, blood samples were drawn before the glucocorticoid therapy was started, at 2-4 weeks after the initiation of the steroid, and at 1-month intervals thereafter. The clinical response to glucocorticoid treatment and the development of steroid adverse events were carefully registered. GBA was analyzed with a COS-1 cell bioassay. The measured glucocorticoid therapy-responsive biomarkers included the adipocyte-derived adiponectin and leptin; the bone turnover-related collagen markers amino-terminal type I procollagen propeptide (PINP) and carboxyterminal telopeptide of type I collagen (ICTP); insulin-like growth factor 1 (IGF-1) and sex hormone-binding globulin (SHBG); and the inflammatory marker high-sensitivity C-reactive protein (hs-CRP). Results: The most promising marker of glucocorticoid sensitivity was serum adiponectin, which was associated with steroid therapy-related adverse events. Serum leptin indicated a similar trend. In contrast, circulating GBA rose in all subjects receiving glucocorticoid treatment but was not associated with the clinical response to steroids or with glucocorticoid therapy-related side-effects. Of note, young patients (<10 years) showed GBA levels similar to those of older patients, despite receiving higher weight-adjusted doses of glucocorticoid. Markers of bone formation were lower in children with active IBD than in the control patients, probably reflecting the suppressive effect of the active inflammation. The onset of glucocorticoid therapy further suppressed bone turnover. The inflammatory marker hs-CRP decreased readily after the initiation of the steroid; however, the decrease was not associated with the clinical response to glucocorticoids. Conclusions: This is the first study to show that adipocyte-derived adiponectin is associated with steroid therapy-induced side-effects. Further studies are needed, but it is possible that adiponectin measurement could aid the recognition of glucocorticoid-sensitive patients in the future. GBA and the other markers reflecting glucocorticoid activity in different tissues changed during the treatment; however, their change did not correlate with the therapeutic response to steroids or with the development of glucocorticoid-related side effects, and therefore they cannot guide the therapy in these patients. Studies such as the present one, combining clinical data with newly developed biomolecular technology, are needed to build up, step by step, a general picture of glucocorticoid actions in different tissues.