18 results for simplified CDD

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with the simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since its application to BNCT dose planning problems, testing and development of the MultiTrans code have continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been demonstrated by verifying and validating its performance for different types of neutral and charged particle transport problems, as reported in separate publications.
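As orientation for readers unfamiliar with the method, the lowest-order member of the simplified spherical harmonics (SP_N) family, SP1, reduces the Boltzmann transport equation to a diffusion equation; this is a standard textbook relation, not a result of the thesis:

$$
-\nabla \cdot D(\mathbf{r})\,\nabla\phi(\mathbf{r}) + \Sigma_a(\mathbf{r})\,\phi(\mathbf{r}) = S(\mathbf{r}), \qquad D = \frac{1}{3\,\Sigma_{tr}},
$$

where $\phi$ is the scalar flux, $\Sigma_a$ the absorption cross section, $\Sigma_{tr}$ the transport cross section, and $S$ the source term. Higher-order SP_N approximations yield coupled equations of the same elliptic form, which is what makes multigrid techniques a natural solver choice.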

Relevance:

20.00%

Publisher:

Abstract:

Acute childhood osteomyelitis (OM), septic arthritis (SA), and their combination, osteomyelitis with adjacent septic arthritis (OM+SA), are conventionally treated with long courses of antimicrobials and immediate surgery. We conducted a prospective multi-center randomized trial among Finnish children aged 3 months to 15 years in 1983-2005. According to the two-by-two factorial study design, children with OM or OM+SA received 20 or 30 days of antimicrobials, whereas those with SA were treated for 10 or 30 days. In addition, the whole series was randomized to treatment with clindamycin or a first-generation cephalosporin. Cases were included only if the causative agent was isolated. The treatment was instituted intravenously, but only for the first 2-4 days. Percutaneous aspiration was done to obtain a representative sample for bacteriology, but all other surgical intervention was kept to a minimum. A total of 265 patients fulfilled our strict inclusion criteria and were analyzed; 106 children had OM, 134 SA, and 25 OM+SA. In the OM group, one child in the long-term and one child in the short-term treatment group developed sequelae. One child with SA twice developed a late re-infection of the same joint, but the causative agents differed. Regarding surgery, diagnostic arthrocentesis or corticotomy was the only surgical procedure performed in most cases. Routine arthrotomy was not required even in hip arthritis. Serum C-reactive protein (CRP) proved to be a reliable laboratory index in the diagnosis and monitoring of osteoarticular infections. The recovery rate was similar regardless of whether clindamycin or a first-generation cephalosporin was used. We conclude that a course of 20 days of these well-absorbed antimicrobials is sufficient for OM or OM+SA, and 10 days for SA, in most cases beyond the neonatal age. A short intravenous phase of only 2-5 days often suffices. CRP gives valuable information in monitoring the course of illness. Besides diagnostic aspiration, surgery should be reserved for selected cases.

Relevance:

20.00%

Publisher:

Abstract:

Background: Acute bacterial meningitis (BM) continues to be an important cause of childhood mortality and morbidity, especially in developing countries. Prognostic scales and the identification of risk factors for adverse outcome both aid in assessing disease severity. New antimicrobial agents and adjunctive treatments, except for oral glycerol, have essentially failed to improve BM prognosis. A retrospective observational analysis found paracetamol beneficial in adult bacteraemic patients, and some experts recommend slow β-lactam infusion. We examined these treatments in a prospective, double-blind, placebo-controlled clinical trial. Patients and methods: A retrospective analysis included 555 children treated for BM in 2004 in the infectious disease ward of the Paediatric Hospital of Luanda, Angola. Our prospective study randomised 723 children into four groups, to receive a combination of cefotaxime as infusion or as boluses every 6 hours for the first 24 hours, and oral paracetamol or placebo for 48 hours. The primary endpoints were 1) death or severe neurological sequelae (SeNeSe), and 2) deafness. Results: In the retrospective study, the mortality of children with blood transfusion was 23% (30 of 128) vs. 39% (109 of 282) without blood transfusion (p=0.004). In the prospective study, 272 (38%) of the children died. Of the 451 survivors, 68 (15%) showed SeNeSe, and 12% (45 of 374) were deaf. Whereas no difference between treatment groups was observable in the primary endpoints, early mortality in the infusion-paracetamol group was lower, with the difference (Fisher's exact test) from the other groups at 24, 48, and 72 hours being significant (p=0.041, 0.0005, and 0.005, respectively). Prognostic factors for adverse outcomes were impaired consciousness, dyspnoea, seizures, delayed presentation, and absence of electricity at home (Simple Luanda Scale, SLS); the Bayesian Luanda Scale (BLS) also included abnormally low or high blood glucose. Conclusions: New studies concerning the possible beneficial effect of blood transfusion and of longer treatment with cefotaxime infusion and oral paracetamol, and a study to validate our simple prognostic scales, are warranted.
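A minimal sketch of the kind of 2x2 comparison reported for the retrospective transfusion data, using Fisher's exact test via SciPy (whether the original analysis used this particular test is an assumption here; the counts are taken from the abstract):

```python
from scipy.stats import fisher_exact

# 2x2 table from the retrospective study:
# rows = transfusion / no transfusion, columns = died / survived.
table = [
    [30, 128 - 30],    # transfusion: 30 of 128 died
    [109, 282 - 109],  # no transfusion: 109 of 282 died
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```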

Relevance:

10.00%

Publisher:

Abstract:

Buffer zones are vegetated strips at the edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips play a prominent ecological role in many areas. This thesis focuses on the plant species diversity of buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. The study was conducted in a watershed area of a farmland landscape in southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune, 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness; the relationship between them is unpredictable in a small-scale, human-shaped ecosystem. Ordination and correlation analyses showed that richness and evenness may result from different ecological processes and thus should be considered separately. Species richness correlated negatively with soil phosphorus content, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration variables (width, length, and area) were investigated to determine which best predicts species richness. More species per unit area increment could be obtained by widening a buffer strip than by lengthening it; strip width is thus an effective determinant of plant species richness. The increase in species diversity with increasing buffer strip width may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers and has practical value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on local and regional scales. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In regions with low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity and showed high abundance in the highly connected region. We suggest that these species may mark the shift from local resources to regional connectivity as the dominant force shaping community structure. At the landscape-context level, the different responses of local species richness and evenness to landscape context were investigated.
Seven landscape structural parameters served to indicate landscape context on five scales. On all but the smallest scale, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness; H' showed the highest correlation with species richness on the second-largest scale. The edge density of arable fields was the only predictor that correlated with species evenness on all scales, with the highest predictive power on the second-smallest scale. The differing predictive power of the factors on different scales showed a scale-dependent relationship between landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. Local species richness depends on a regional process at large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the field-edge habitats near the buffer strips. My results suggest some guidelines for conserving species diversity in the agricultural ecosystem. To maintain a high level of species diversity in the strips, a high level of phosphorus in strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variation in plant species diversity. The studies also suggest that the effects of these determinants on species diversity have a complex relationship with scale.
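For reference, the Shannon-Wiener diversity index used above is the standard

$$
H' = -\sum_{i=1}^{S} p_i \ln p_i,
$$

where $S$ is the number of land-cover types and $p_i$ is the proportion of the landscape occupied by type $i$. (Evenness is often reported as Pielou's $J' = H'/\ln S$; the exact evenness measure used in the thesis is not stated here, so this is only an assumption for orientation.)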

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis was to develop measurement techniques and systems for measuring air quality and to provide information about air quality conditions and the amount of gaseous emissions from semi-insulated and uninsulated dairy buildings in Finland and Estonia. Specialization and intensification in livestock farming, such as in dairy production, is usually accompanied by an increase in concentrated environmental emissions. In addition to high moisture, the presence of dust and corrosive gases, and widely varying gas concentrations in dairy buildings, Finland and Estonia experience winter temperatures below -40 °C and summer temperatures above +30 °C. The adoption of new technologies for long-term air quality monitoring and measurement remains relatively uncommon in dairy buildings, because the construction and maintenance of accurate monitoring systems for long-term use are too expensive for the average dairy farmer. Though accurate air quality measurement systems intended mainly for research purposes have been documented in the past, standardised methods and documentation of affordable systems and simple methods for measuring air quality and emissions in dairy buildings are unavailable. In this study, we built three measurement systems: 1) a Stationary system with integrated affordable sensors for on-site measurements, 2) a Wireless system with affordable sensors for off-site measurements, and 3) a Mobile system consisting of expensive and accurate sensors for measuring air quality. In addition to assessing existing methods, we developed simplified methods for measuring ventilation and emission rates in dairy buildings. The three measurement systems were successfully used to measure air quality in uninsulated, semi-insulated, and fully insulated dairy buildings between 2005 and 2007. When carefully calibrated, the affordable sensors in the systems gave reasonably accurate readings. The spatial air quality survey showed high variation in microclimate conditions in the dairy buildings measured. The average indoor air concentration was 950 ppm for carbon dioxide, 5 ppm for ammonia, and 48 ppm for methane; average relative humidity was 70% and average inside air velocity 0.2 m/s. The average winter and summer indoor temperatures during the measurement period were -7 °C and +24 °C for the uninsulated, +3 °C and +20 °C for the semi-insulated, and +10 °C and +25 °C for the fully insulated dairy buildings. The measurement results showed that the uninsulated dairy buildings had lower indoor gas concentrations and emissions than fully insulated buildings. Although occasionally exceeded, the ventilation rates and average indoor air quality in the dairy buildings were largely within recommended limits. We assessed the traditional heat balance, moisture balance, carbon dioxide balance, and direct airflow methods for estimating ventilation rates. Direct velocity measurement for estimating the ventilation rate proved impractical for naturally ventilated buildings. Two methods were developed for estimating ventilation rates: the first is applicable in buildings in which the ventilation can be stopped or completely closed; the second is useful in naturally ventilated buildings with large openings and high ventilation rates, where spatial gas concentrations are heterogeneously distributed.
Two traditional methods (carbon dioxide and methane balances) and two newly developed methods (theoretical modelling using Fick's law and boundary layer theory, and the recirculation flux-chamber technique) were used to estimate ammonia emissions from the dairy buildings. Using the traditional carbon dioxide balance method, ammonia emissions per cow from the dairy buildings ranged from 7 g day⁻¹ to 35 g day⁻¹, and methane emissions per cow ranged from 96 g day⁻¹ to 348 g day⁻¹. The developed methods proved to be as accurate as the traditional methods: variation between the mean emissions estimated with the traditional and the developed methods was less than 20%. The developed modelling procedure provided a sound framework for examining the impact of production systems on ammonia emissions in dairy buildings.
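A minimal sketch of the carbon dioxide balance idea referred to above, with illustrative numbers and simplified constants (the thesis's actual CO2 production model and correction factors are not reproduced here):

```python
# Carbon dioxide balance method (sketch): the ventilation rate is the
# airflow that dilutes the CO2 produced by the animals down to the
# measured indoor-outdoor concentration difference. Ammonia emission
# is then that airflow times the indoor-outdoor NH3 difference.

def ventilation_rate_co2(co2_production_m3_h, co2_in_ppm, co2_out_ppm):
    """Ventilation rate (m^3/h) from a steady-state CO2 balance."""
    delta = (co2_in_ppm - co2_out_ppm) * 1e-6  # ppm -> volume fraction
    return co2_production_m3_h / delta

def ammonia_emission_g_h(vent_m3_h, nh3_in_mg_m3, nh3_out_mg_m3):
    """NH3 emission (g/h) carried out by the ventilation air."""
    return vent_m3_h * (nh3_in_mg_m3 - nh3_out_mg_m3) / 1000.0

# Illustrative values only: one cow producing roughly 0.2 m^3 CO2 per
# hour, indoor CO2 950 ppm (the measured average) vs. ~400 ppm outdoors.
q = ventilation_rate_co2(0.2, 950, 400)
print(f"ventilation ~ {q:.0f} m^3/h per cow")
print(f"NH3 emission ~ {ammonia_emission_g_h(q, 1.5, 0.05):.2f} g/h per cow")
```

With these assumed inputs the sketch gives roughly 0.5 g/h, i.e. about 13 g day⁻¹ per cow, which falls within the 7-35 g day⁻¹ range reported above.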

Relevance:

10.00%

Publisher:

Abstract:

Yersinia enterocolitica and Yersinia pseudotuberculosis are among the major enteropathogenic bacteria causing infections in humans in many industrialized countries. In Finland, Y. pseudotuberculosis caused 10 outbreaks among humans during 1997-2008. Some of these outbreaks have been very extensive, involving over 400 cases, mainly children attending school and day-care. Y. enterocolitica, on the contrary, has caused mainly a large number of sporadic human infections in Finland. Y. pseudotuberculosis is widespread in nature, causing infections in a variety of domestic and wild animals. Foodborne transmission of human infections has long been suspected; however, before this study, which epidemiologically linked Y. pseudotuberculosis to a specific food item, attempts to trace the pathogen had been unsuccessful. Furthermore, due to modern food distribution systems, foodborne outbreaks usually involve many geographically separate infection clusters that are difficult to identify as parts of the same outbreak. Among pathogenic Y. enterocolitica, the global predominance of one genetically homogeneous type (bioserotype 4/O:3) is a challenge to the development of genetic typing methods discriminatory enough for epidemiological purposes, for example, for tracing infections back to their sources. Furthermore, the diagnostics of Y. enterocolitica infections is hampered because clinical laboratories easily misidentify some other members of the Yersinia species (Y. enterocolitica-like species) as Y. enterocolitica. This results in misleading information on the prevalence and clinical significance of various Yersinia isolates. The aim of this study was to develop and optimize molecular typing methods to be used in epidemiological investigations of Y. enterocolitica and Y. pseudotuberculosis, particularly in active surveillance and outbreak investigations of Y. pseudotuberculosis isolates. A further aim was to develop a simplified set of phenotypic tests that could be used in routine diagnostic laboratories for the correct identification of Y. enterocolitica and Y. enterocolitica-like species. A PFGE method designed here for typing Y. pseudotuberculosis was efficient in linking geographically dispersed and apparently unrelated Y. pseudotuberculosis infections as parts of the same outbreak, and proved useful in active laboratory-based surveillance of Y. pseudotuberculosis outbreaks. Throughout the study period, information was obtained about the diversity of genotypes among outbreak- and non-outbreak-related strains of human origin. To our knowledge, this was also the first study to epidemiologically link a Y. pseudotuberculosis outbreak of human illness to a specific food item, iceberg lettuce. A novel epidemiological typing method based on the use of a repeated genomic region (YeO:3RS) as a probe was developed for the detection of, and differentiation between, strains of Y. enterocolitica subspecies palearctica. This method increased the discrimination in a set of 106 previously PFGE-typed Finnish Y. enterocolitica bioserotype 4/O:3 strains, among which two main PFGE genotypes had prevailed. The developed simplified method was a more reliable tool than the commercially available biochemical test kits for differentiating between Y. enterocolitica and Y. enterocolitica-like species. In Finland, the methods developed for Y. enterocolitica and Y. pseudotuberculosis have been used to improve identification protocols and in subsequent outbreak investigations.

Relevance:

10.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs, and methodologies for GC×GC, and their application in studies on qualitative and quantitative aspects of GC×GC analysis; environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent-magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison, and identification operations. Basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights, and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and quantitative performance suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique.
The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.

Relevance:

10.00%

Publisher:

Abstract:

Segmentation is a data mining technique yielding simplified representations of sequences of ordered points. A sequence is divided into some number of homogeneous blocks, and all points within a segment are described by a single value. The focus in this thesis is on piecewise-constant segments, where the most likely description for each segment and the most likely segmentation into some number of blocks can be computed efficiently. Representing sequences as segmentations is useful in, e.g., storage and indexing tasks in sequence databases, and segmentation can be used as a tool for learning about the structure of a given sequence. The discussion in this thesis begins with basic questions related to segmentation analysis, such as choosing the number of segments and evaluating the obtained segmentations. Standard model selection techniques are shown to perform well for the sequence segmentation task. Segmentation evaluation is proposed with respect to a known segmentation structure: applying segmentation to certain features of a sequence is shown to yield segmentations that lie significantly close to the known underlying structure. Two extensions to the basic segmentation framework are introduced: unimodal segmentation and basis segmentation. The former is concerned with segmentations where the segment descriptions first increase and then decrease, and the latter with the interplay between different dimensions and segments in the sequence. These problems are formally defined, and algorithms for solving them are provided and analyzed. Practical applications of segmentation techniques include time series and data stream analysis, text analysis, and biological sequence analysis. In this thesis, segmentation applications are demonstrated in the analysis of genomic sequences.
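For piecewise-constant segmentation under squared error, the optimal k-segmentation can be computed with the classic dynamic-programming algorithm sketched below (a minimal illustration in Python; the function names and the squared-error cost are assumptions for the sketch, not details taken from the thesis):

```python
import numpy as np

def segment(seq, k):
    """Optimal piecewise-constant k-segmentation minimizing total
    squared error, via the classic O(n^2 k) dynamic program."""
    x = np.asarray(seq, dtype=float)
    n = len(x)
    s = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))  # prefix sums of squares

    def cost(i, j):
        """Squared error of describing x[i:j] by its mean."""
        m = j - i
        mean = (s[j] - s[i]) / m
        return (s2[j] - s2[i]) - m * mean * mean

    INF = float("inf")
    err = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    err[0][0] = 0.0
    for p in range(1, k + 1):          # number of segments used
        for j in range(p, n + 1):      # prefix x[:j] being segmented
            for i in range(p - 1, j):  # last segment is x[i:j]
                c = err[p - 1][i] + cost(i, j)
                if c < err[p][j]:
                    err[p][j], cut[p][j] = c, i

    # Trace back the segment end boundaries.
    bounds, j = [n], n
    for p in range(k, 1, -1):
        j = cut[p][j]
        bounds.append(j)
    return err[k][n], sorted(bounds)

# Example: a noiseless three-level sequence is recovered exactly.
error, boundaries = segment([1, 1, 1, 5, 5, 9, 9, 9], k=3)
print(error, boundaries)  # -> 0.0 [3, 5, 8]
```

The triple loop gives the textbook O(n²k) running time; under Gaussian noise the segment mean is the most likely single-value description, matching the abstract's phrasing.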

Relevance:

10.00%

Publisher:

Abstract:

The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. The so-called query-by-humming systems are a famous instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI, or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review on traditional (e.g., string-matching-based) MIR algorithms and novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and database images to perform template matching / pattern recognition on the images. The algorithms are essentially extensions of the classic image correlation and hit-or-miss transformation techniques widely used in template matching applications. They aim to be a future extension to the retrieval engine of C-BRAHMS, a research project of the Department of Computer Science at the University of Helsinki.
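A minimal sketch of the binary-image matching idea, assuming a hypothetical piano-roll encoding (rows = pitches, columns = time steps) and using SciPy's morphology routines; this illustrates the general technique, not the thesis's four algorithms:

```python
import numpy as np
from scipy import ndimage

# Hypothetical piano-roll encoding: a note (pitch, time) sets one pixel.
def piano_roll(notes, shape):
    img = np.zeros(shape, dtype=bool)
    for pitch, t in notes:
        img[pitch, t] = True
    return img

# Toy "database" containing the query pattern starting at time step 5,
# plus two decoy notes that only partially match.
db = piano_roll([(3, 0), (7, 2), (3, 5), (5, 6), (7, 7)], (12, 10))
# Query pattern in its bounding box: an ascending three-note figure.
query = piano_roll([(0, 0), (2, 1), (4, 2)], (5, 3))

# Binary erosion by the query acts as a template matcher: an output
# pixel stays set only where every note of the query is present in the
# database image (cf. the hit-or-miss transform for exact matching).
hits = ndimage.binary_erosion(db, structure=query)
print(np.argwhere(hits))  # -> [[5 6]], the centre of the one occurrence
```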

Relevance:

10.00%

Publisher:

Abstract:

The main obstacle to the application of high-quality diamond-like carbon (DLC) coatings has been the lack of adhesion to the substrate as the coating thickness is increased. The aim of this study was to improve the filtered pulsed arc discharge (FPAD) method, with which it is possible to achieve the high DLC coating thicknesses necessary for practical applications. The energy of the carbon ions was measured with an optoelectronic time-of-flight method. An in situ cathode polishing system used for stabilizing the process yield and the carbon ion energies is presented; simultaneously, the quality of the coatings can be controlled. To optimise the quality of the deposition process, a simple, fast, and inexpensive method using silicon wafers as test substrates was developed. This method was used for evaluating the suitability of a simplified arc-discharge set-up for the deposition of the adhesion layer of DLC coatings. A whole new group of materials discovered by our research group, the diamond-like carbon polymer hybrid (DLC-p-h) coatings, is also presented. The parent polymers used in these novel coatings were polydimethylsiloxane (PDMS) and polytetrafluoroethylene (PTFE). The energy of the plasma ions was found to increase when the anode-cathode distance and the arc voltage were increased. A constant deposition rate for continuous coating runs was obtained with the in situ cathode polishing system. The novel DLC-p-h coatings were found to be water- and oil-repellent and harder than any polymer. The lowest sliding angle ever measured on a solid surface, 0.15 ± 0.03°, was measured on a DLC-PDMS-h coating. In the FPAD system, carbon ions can be accelerated to the high energies (≈ 1 keV) necessary for optimal adhesion of ultra-thick (up to 200 µm) DLC coatings by increasing the anode-cathode distance and using high voltages (up to 4 kV); the adhesion is so strong that in the adhesion and quality test the substrate itself breaks. Excellent adhesion can also be obtained with the simplified arc-discharge device. To maintain a high process yield (5 µm/h over a surface area of 150 cm²) and to stabilize the carbon ion energies and the high quality (sp³ fraction up to 85%) of the resulting coating, an in situ cathode polishing system must be used. The DLC-PDMS-h coating is the superior candidate coating material for anti-soiling applications where hardness is also required.
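For orientation, the time-of-flight measurement mentioned above infers the kinetic energy of a carbon ion from its transit time $t$ over a known flight path of length $L$ (a generic relation; the actual geometry and calibration of the FPAD set-up are not reproduced here):

$$
E = \frac{1}{2} m v^2 = \frac{m L^2}{2 t^2},
$$

where $m$ is the mass of the carbon ion. For a singly charged ion accelerated through an effective potential difference $U$, $E = eU$, which is consistent with keV-scale ion energies at kV-scale arc voltages.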

Relevance:

10.00%

Publisher:

Abstract:

When heated to high temperatures, the behavior of matter changes dramatically. The standard model fields go through phase transitions, where the strongly interacting quarks and gluons are liberated from their confinement to hadrons, and the Higgs field condensate melts, restoring the electroweak symmetry. The theoretical framework for describing matter at these extreme conditions is thermal field theory, combining relativistic field theory and quantum statistical mechanics. For static observables the physics is simplified at very high temperatures, and an effective three-dimensional theory can be used instead of the full four-dimensional one via a method called dimensional reduction. In this thesis dimensional reduction is applied to two distinct problems, the pressure of electroweak theory and the screening masses of mesonic operators in quantum chromodynamics (QCD). The introductory part contains a brief review of finite-temperature field theory, dimensional reduction and the central results, while the details of the computations are contained in the original research papers. The electroweak pressure is shown to converge well to a value slightly below the ideal gas result, whereas the pressure of the full standard model is dominated by the QCD pressure with worse convergence properties. For the mesonic screening masses a small positive perturbative correction is found, and the interpretation of dimensional reduction on the fermionic sector is discussed.
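As a schematic of what dimensional reduction means here, the standard leading-order matching relations (quoted for orientation rather than from the thesis) read

$$
g_3^2 = g^2(\bar{\mu})\,T + \mathcal{O}(g^4), \qquad m_E^2 = \left(\frac{N_c}{3} + \frac{N_f}{6}\right) g^2 T^2 + \mathcal{O}(g^4),
$$

so static correlators of the four-dimensional theory at temperature $T$ are reproduced by a three-dimensional effective theory whose parameters, such as the gauge coupling $g_3$ and the Debye mass $m_E$, are fixed by matching perturbative expansions in the two theories.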

Relevance:

10.00%

Publisher:

Abstract:

Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education programme. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics, and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the formation of knowledge about physical concepts and laws in concise and simplified form while still retaining authenticity with respect to the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities, and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and their knowledge became more organized and coherent. It is shown that the flow charts, and the didactical reconstructions behind them, had an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.

Relevance:

10.00%

Publisher:

Abstract:

This article discusses the scope of research on the application of information technology in construction (ITC). A model of the information and material activities that together constitute the construction process is presented, using the IDEF0 activity modelling methodology. Information technology is defined to include all kinds of technology used for the storage, transfer, and manipulation of information, thus also including devices such as copying machines, faxes, and mobile phones. Using the model, the domain of ITC research is defined as the use of information technology to facilitate and re-engineer the information-process component of construction. Developments in IT use in construction during the last decades are discussed against the background of a simplified model of generic information processing tasks. The scope of ITC is compared with the scopes of research in related areas such as design methodology, construction management, and facilities management. Health care is proposed as an interesting alternative to the often-used car manufacturing industry as an IT application domain for comparison. Some key areas of ITC research in recent years (expert systems, company IT strategies, and product modelling) are briefly discussed. The article finishes with a short discussion of the problems of applying standard scientific methodology in ITC research, particularly in product model research.

Relevance:

10.00%

Publisher:

Abstract:

The article outlines a conceptual history of narrative, in particular the changes brought about by the movement called the “narrative turn” in the social sciences. According to Quentin Skinner, conceptual changes may take place on three separate levels: by changing the criteria of the concept, by changing its range of reference, and by changing its appraisal. Recent theorizing on narrative epitomizes all of these levels, but unevenly. In spite of the rhetoric of interdisciplinarity, two almost totally separate traditions of narrative theory persist: the narratological and the narrative-turn theories. Paradoxically, the narrative-turn literature has radicalized the range of reference of narrative by attaching the concept to life and identity, but has left the criteria of the concept practically intact. This has extended the reign of a simplified Aristotelian concept of narrative as a chain of beginnings, middles, and ends. The narratological tradition of theorizing, by contrast, has debated the correct criteria of the concept extensively, but has enlarged the range of reference rather in the direction of cognition.

Relevance:

10.00%

Publisher:

Abstract:

The simplified model of human tear fluid (TF) is a three-layered structure composed of a homogeneous gel-like layer of hydrated mucins, an aqueous phase, and a lipid-rich outermost layer found at the tear-air interface. It is assumed that amphiphilic phospholipids are found adjacent to the aqueous-mucin layer and that, external to this, a layer composed of non-polar lipids faces the tear-air interface. The lipid layer prevents evaporation of the TF and protects the eye, but excess accumulation of lipids may lead to drying of the corneal epithelium. Thus, the lipid layer must be controlled and maintained by some molecular mechanisms. In the circulation, phospholipid transfer protein (PLTP) and cholesteryl ester transfer protein (CETP) mediate lipid transfers. The aim of this thesis was to investigate the presence and molecular mechanisms of lipid transfer proteins in human TF, and to study the role of these proteins in the development of dry eye syndrome (DES). The presence of TF PLTP and CETP was studied by western blotting and mass spectrometry, the concentrations of these proteins were determined by ELISA, and their activities were determined by specific lipid transfer assays. To study the molecular mechanisms involved in PLTP-mediated lipid transfer, Langmuir monolayers and asymmetrical flow field-flow fractionation (AsFlFFF) were used. Ocular tissue samples were stained with monoclonal antibodies against PLTP to study the secretion route of PLTP. Heparin-Sepharose affinity chromatography was used for PLTP pull-down experiments, and co-eluted proteins were identified with MALDI-TOF mass spectrometry or western blot analysis. To study whether PLTP plays any functional role in TF, PLTP-deficient mice were examined. The activity of PLTP was also studied in dry eye patients. PLTP is a component of normal human TF, whereas CETP is not. The TF PLTP concentration was about 2-fold higher than that in human plasma. Inactivation of PLTP by heat treatment or immunoinhibition abolished the phospholipid transfer activity in tear fluid. PLTP was found to be secreted from the lacrimal glands. PLTP seems to be surface active and is capable of accepting lipid molecules without the presence of lipid-protein complexes. The active movement of radioactively labeled lipids and of the high-activity form of PLTP to acceptor particles suggested a shuttle model of PLTP-mediated lipid transfer, in which PLTP physically transports lipids between the donor and acceptor. Protein-protein interaction assays revealed ocular mucins as PLTP interaction partners in TF. In mice with a full deficiency of functional PLTP, enhanced corneal epithelial damage, increased corneal permeability to carboxyfluorescein, and decreased corneal epithelial occludin expression were demonstrated. Increased tear fluid PLTP activity was observed among human DES patients. Together, these results suggest a scavenger property of TF PLTP: if the corneal epithelium is contaminated by hydrophobic material, PLTP could remove it and transport it to the superficial layer of the TF or, alternatively, through the naso-lacrimal duct. Thus, PLTP might play an integral role in tear lipid trafficking and in the protection of the corneal epithelium. The increased PLTP activity in human DES patients suggests an ocular surface protective role for this lipid transfer protein.