895 results for Other Computer Engineering


Relevance: 30.00%

Abstract:

A major challenge of cardiac tissue engineering is directing cells to establish the physiological structure and function of the myocardium being replaced. In the native heart, pacing cells generate electrical stimuli that spread throughout the heart, causing cell membrane depolarization and activation of the contractile apparatus. We sought to examine whether electrical stimulation of adipose tissue-derived progenitor cells (ATDPCs) exerts phenotypic and genetic changes that enhance their cardiomyogenic potential.

Relevance: 30.00%

Abstract:

Virtual Laboratories are an indispensable space for developing practical activities in a Virtual Environment. In the field of Computer and Software Engineering, different types of practical activities have to be performed in order to obtain basic competences which are impossible to achieve by other means. This paper specifies an ontology for a general virtual laboratory. The proposed ontology provides a mechanism to select the best resources needed in a Virtual Laboratory once a specific practical activity has been defined and the main competences that students have to achieve in the learning process have been fixed. Furthermore, the proposed ontology can be used to develop an automatic wizard tool that creates a Moodle classroom using the practical activity specification and the related competences.
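
As a rough sketch of the resource-selection mechanism such an ontology enables, under the simplifying assumption that each laboratory resource is annotated with the competences it supports (all names below are hypothetical, not taken from the paper):

```python
# Minimal sketch of competence-driven resource selection, assuming a
# flattened view of the ontology: each resource is annotated with the
# competences it supports. Names are illustrative, not from the paper.

RESOURCES = {
    "remote_compiler":   {"programming", "debugging"},
    "network_simulator": {"network_design", "protocol_analysis"},
    "vm_cluster":        {"os_administration", "programming"},
}

def select_resources(required: set) -> list:
    """Return every resource supporting at least one required competence."""
    return [name for name, offered in RESOURCES.items()
            if offered & required]

print(select_resources({"programming", "protocol_analysis"}))
# -> ['remote_compiler', 'network_simulator', 'vm_cluster']
```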

Relevance: 30.00%

Abstract:

The results and discussions in this thesis are based on my studies of self-assembled thiol layers on gold, platinum, silver and copper surfaces. These layers are two-dimensional, one molecule thick and covalently attached to the surface, and they are an easy way to modify surface properties. Self-assembly is today an intensive research field because of the promise it holds for producing new technology at the nanoscale, the scale of atoms and molecules. Such films have applications, for example, in the fields of physics, biology, engineering, chemistry and computer science. Compared to the extensive literature concerning self-assembled monolayers (SAMs) on gold, little is known about the structure and properties of thiol-based SAMs on other metals. In this thesis I have focused on thiol layers on gold, platinum, silver and copper substrates. These studies can be regarded as basic research on SAMs. Nevertheless, an understanding of the physical and chemical nature of SAMs allows atomic structure to be correlated with macroscopic properties, and the results can be used as a starting point for many practical applications.

X-ray photoelectron spectroscopy (XPS) and synchrotron radiation excited high-resolution photoelectron spectroscopy (HR-XPS), together with time-of-flight secondary ion mass spectrometry (ToF-SIMS), were applied to investigate thin organic films formed by the spontaneous adsorption of molecules on metal surfaces. Photoelectron spectroscopy was the main method used in these studies: the sample is irradiated with photons and the emitted photoelectrons are energy-analyzed. The obtained spectra give information about the atomic composition of the surface and about the chemical state of the detected elements, which makes the technique a very powerful tool for the study of thin layers. Some XPS results were complemented with ToF-SIMS measurements, which provide information on the chemical composition and molecular structure of the samples.

A thiol (1-dodecanethiol, CH3(CH2)11SH) solution was used to create SAMs on the metal substrates. Uniform layers were formed on most of the studied metal surfaces; on platinum, surface-aligned molecules were also detected in the XPS and ToF-SIMS investigations. The influence of radiation on the layer structure was studied, leading to the conclusion that parts of the hydrocarbon chains break off due to radiation and the rest of the layer is deformed; the results differed depending on the substrate material. The influence of oxygen on layer formation was also studied, and thiol molecules were found to replace some of the oxygen from the metal surfaces.
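
The energy analysis mentioned above rests on the standard photoemission relation, reproduced here for reference (a textbook identity, not specific to this thesis):

```latex
% Kinetic energy of a detected photoelectron (standard XPS relation):
%   E_kin : measured kinetic energy of the emitted electron
%   h\nu  : photon energy of the exciting radiation
%   E_B   : electron binding energy (element- and chemical-state-specific)
%   \phi  : work function of the spectrometer
E_{\mathrm{kin}} = h\nu - E_{B} - \phi
```

Measuring the kinetic energy at a known photon energy thus yields the binding energy, which identifies both the element and its chemical state.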

Relevance: 30.00%

Abstract:

Systems suppliers are focal actors in mechanical engineering supply chains, positioned between general contractors and component suppliers. This research concentrates on the systems suppliers' competitive flexibility: a competitive advantage that the systems supplier gains from independence from the competitive forces of the market. The aim is to study the roles that power, dependence relations, social capital and interorganizational learning play in competitive flexibility. Research on this particular theme has so far been scarce. The research method applied here is the inductive multiple case study; interviews from four case companies were used as the main source of qualitative data. The literature review covers previous work on subcontracting, supply chain flexibility, supply chain relationships, social capital and interorganizational learning. The results of this study are seven propositions and, building on them, a model of the effects that the dominance of sales by a few customers, the power of competitors, the significance of the manufactured system in the end product, professionalism in procurement, and the significance of brand products in the business have on competitive flexibility. These relationships are moderated by either social capital or interorganizational learning. The main results revolve around social capital and interorganizational learning, which benefit the systems suppliers' competitive flexibility by moderating the effects of the other constructs of the model. Further research on this topic should include quantitative studies to establish the extent to which the results can be generalized; each construct of the model also offers a possible focus for more thorough research.

Relevance: 30.00%

Abstract:

This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation.

Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during the numerical iteration, which greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study, and the accurate and efficient performance of the present finite element implementation has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure.

Secondly, the consistency of current local failure criteria for ductile fracture has been assessed: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is quite accurate. A novel feature of the new criterion is that a mechanism for void coalescence is incorporated into the constitutive model, so that material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound.

Thirdly, a local-approach methodology based on the above two contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be predicted well with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the engineering application of the present methodology is suggested and discussed.
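
For orientation, the Gurson-Tvergaard yield condition at the centre of this work is commonly written as follows (the standard literature form, with the usual symbols rather than the thesis notation):

```latex
% Gurson-Tvergaard yield function (standard literature form):
%   \sigma_{eq} : macroscopic von Mises equivalent stress
%   \sigma_m    : macroscopic mean (hydrostatic) stress
%   \sigma_M    : flow stress of the matrix material
%   f           : void volume fraction
%   q_1, q_2, q_3 : Tvergaard's fitting parameters (often q_3 = q_1^2)
\Phi = \left(\frac{\sigma_{eq}}{\sigma_M}\right)^{2}
     + 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_M}\right)
     - \left(1 + q_3 f^{2}\right) = 0
```

The cosh term couples yielding to the hydrostatic stress, which is what makes the model dilational and motivates the hydrostatic/deviatoric stress decomposition used in the integration algorithms above.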

Relevance: 30.00%

Abstract:

The computer is a useful tool in the teaching of upper secondary school physics, and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is that commercially available tools cannot always be used in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching, and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product.

In designing the simulation tool, a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit, in both graphical and numerical form. The tool, which allowed the representations to be controlled interactively, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness.

The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process. In the teacher study, eighteen Swedish-speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews.

The results showed that both the students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation, which had not been hypothesized. A deeper analysis of the observation and interview data showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the hypothesis ad hoc. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply; the use of a computer was also considered time-consuming.

As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. The computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the space where teaching normally takes place. With more interactive support from the teacher, the computer-based parallel, dynamic representations can efficiently promote the students' learning, with a focus on qualitative reasoning, an often neglected part of the learning process in upper secondary school physics.
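
A minimal sketch of the circuit physics such a spreadsheet tool presents, written here in Python; the component values are arbitrary illustrations, not taken from the thesis:

```python
# Minimal sketch of an RLC series circuit driven by an AC source, the
# physics behind the simulation tool described above. Component values
# are arbitrary illustrations, not taken from the thesis.
import math

R, L, C = 100.0, 0.5, 10e-6   # ohms, henries, farads
U = 10.0                      # source voltage amplitude (V)

def circuit_response(f: float):
    """Return (current amplitude, phase shift) at frequency f in Hz."""
    w = 2 * math.pi * f
    reactance = w * L - 1 / (w * C)        # X_L - X_C
    impedance = math.hypot(R, reactance)   # |Z| = sqrt(R^2 + X^2)
    phase = math.atan2(reactance, R)       # angle between voltage and current
    return U / impedance, phase

for f in (10, 71.2, 500):                  # ~71.2 Hz is resonance here
    amp, ph = circuit_response(f)
    print(f"{f:7.1f} Hz: I = {amp:.4f} A, phase = {math.degrees(ph):+.1f} deg")
```

Plotting current amplitude and phase against frequency reproduces the kind of parallel graphical and numerical representation the tool was built around.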

Relevance: 30.00%

Abstract:

In this thesis, simple methods have been sought to lower the teacher's threshold for starting to apply constructive alignment in instruction. From the phases of the instructional process, aspects that the teacher can improve with little effort have been identified. Teachers were interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see if their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students differ as learners from engineering students in general, and that there is a connection between a student's culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has also been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but single students still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating; for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students' knowledge is.
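
As a rough illustration of what such a scale could look like (only the six levels of Bloom's revised taxonomy are taken as given; the item wording is invented, not the thesis instrument):

```python
# Hypothetical self-assessment items, one per level of Bloom's revised
# taxonomy. The six levels are standard; the statements are illustrative
# inventions, not the scale developed in the thesis.
BLOOM_SCALE = [
    (1, "Remember",   "I can recall the basic terms and facts of the topic."),
    (2, "Understand", "I can explain the topic in my own words."),
    (3, "Apply",      "I can use the concepts to solve routine exercises."),
    (4, "Analyze",    "I can break a problem into parts and relate them."),
    (5, "Evaluate",   "I can judge which solution approach is best and why."),
    (6, "Create",     "I can design a new solution or program from scratch."),
]

def self_assess() -> int:
    """Ask the student for the highest level they feel they master."""
    for level, name, statement in BLOOM_SCALE:
        print(f"{level}. {name}: {statement}")
    return int(input("Highest level that applies to you (1-6): "))
```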

Relevance: 30.00%

Abstract:

Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice.

The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The presented approaches span from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. Although applied to these case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms, as well as of complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies, and the discussions of their potentials and limitations point to the difficulties and challenges encountered in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
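
As a rough illustration of the modelling style referred to above, here is a minimal mass-action ODE sketch for an invented two-step reaction chain (a toy system, not one of the thesis case studies):

```python
# Toy mass-action ODE model integrated with SciPy: a generic chain
# A -> B -> C. This is an invented illustration of the modelling style,
# not the heat shock response or filament self-assembly models.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.8, 0.3             # rate constants (1/s), arbitrary values

def rhs(t, y):
    a, b, c = y
    return [-k1 * a,          # dA/dt: A is consumed
            k1 * a - k2 * b,  # dB/dt: B is produced from A, consumed to C
            k2 * b]           # dC/dt: C accumulates

sol = solve_ivp(rhs, t_span=(0, 20), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0, 20, 5))
print(sol.t)
print(sol.y.round(3))         # concentrations of A, B, C over time
```

Model identifiability, comparison and refinement, as discussed above, then amount to asking how well such rate constants and model structures can be pinned down by experimental data.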

Relevance: 30.00%

Abstract:

Earlier management studies have found a relationship between managerial qualities and subordinate impacts, but the effect of managers' social competence on leader perceptions has not been solidly established. To fill this research gap, the present work embarks on a quantitative empirical effort to identify predictors of successful leadership. In particular, this study investigates relationships between perceived leader behavior and three self-report instruments used to measure managerial capability: 1) the WOPI Work Personality Inventory, 2) Raven's general intelligence scale, and 3) the Emotive Communication Scale (ECS). This work complements previous research by resorting to both self-reports and other-reports: the results acquired from the managerial sample are compared to subordinate perceptions as measured through the ECS other-report and the WOPI360 multi-source appraisal. The quantitative sample comprises 80 superiors and 354 subordinates operating in eight Finnish organizations. The strongest predictive value emerged from the ECS self- and other-reports and certain personality dimensions. In contrast, supervisors' logical intelligence did not correlate with leadership perceived as socially competent by subordinates. The 16 superiors rated as most socially competent by their subordinates were selected for case analysis; their qualitative narratives evidence the role of life history and post-traumatic growth in developing managerial skills.

The results contribute to leadership theory in four ways. First, the ECS self-report devised for this research offers a reliable scale for predicting socially competent leader ability. Second, the work identifies dimensions of personality and emotive skills that can be considered predictors of managerial ability and drawn on in leader recruitment and career planning. Third, the Emotive Communication Model delineated on the basis of the empirical data allows for the systematic design and planning of communication and leadership education. Fourth, this work furthers understanding of personal growth strategies and the role of life history in leader development and training. Finally, this research advances educational leadership by conceptualizing and operationalizing effective managerial communications. The Emotive Communication Model directs pedagogic attention in engineering to assertion, emotional availability and inspiration skills. The proposed methodology addresses classroom management strategies drawing on problem-based learning, student empowerment, collaborative learning, and so-called socially competent teachership founded on teacher immediacy and perceived caring, all of which move away from student compliance and teacher modelling. The ultimate educational objective embraces the development of individual engineers and organizational leaders who not only possess traditional analytical and technical expertise and substantive knowledge, but are also intelligent creatively, practically, and socially.

Relevance: 30.00%

Abstract:

The question of the trainability of executive functions and the impact of such training on related cognitive skills has stirred considerable research interest. Despite a number of studies investigating this, the question has not yet been settled. The general aim of this thesis was to investigate two very different types of training of executive functions: laboratory-based computerized training (Studies I-III) and real-world training through bilingualism (Studies IV-V). Bilingualism as a kind of training of executive functions is based on the idea that managing two languages requires executive resources, and previous studies have suggested a bilingual advantage in executive functions. Three executive functions were studied in the present thesis: updating of working memory (WM) contents, inhibition of irrelevant information, and shifting between tasks and mental sets.

Studies I-III investigated the effects of computer-based training of WM updating (Study I), inhibition (Study II) and set shifting (Study III) in healthy young adults. All studies showed increased performance on the trained task. More importantly, improvement on an untrained task tapping the trained executive function (near transfer) was seen in Studies I and II. None of the three studies showed improvement on untrained tasks tapping some other cognitive function (far transfer) as a result of training. Study I also used PET to investigate the effects of WM updating training on a neurotransmitter closely linked to WM, namely dopamine. The PET results revealed increased striatal dopamine release during WM updating performance as a result of training.

Study IV investigated the ability to inhibit task-irrelevant stimuli in bilinguals and monolinguals using a dichotic listening task; the bilinguals outperformed the monolinguals in inhibiting task-irrelevant information. Study V introduced a new, complementary research approach to the study of the bilingual executive advantage and its underlying mechanisms. To circumvent the methodological problems of the natural-groups design, this approach focuses only on bilinguals and examines whether individual differences in bilingual behavior correlate with executive task performance. Using measures that tap the three above-mentioned executive functions, the results suggested that more frequent language switching was associated with better set-shifting skills, and earlier acquisition of the second language with better inhibition skills.

In conclusion, the present behavioral results showed that computer-based training of executive functions can improve performance on the trained task and on closely related tasks, but does not yield a more general improvement of cognitive skills. Moreover, the functional neuroimaging results reveal that WM training modulates striatal dopaminergic function, speaking for training-induced neural plasticity in this important neurotransmitter system. With regard to bilingualism, the results provide further support for the idea that bilingualism can enhance executive functions. In addition, the new complementary research approach proposed here provides some clues as to which aspects of everyday bilingual behavior may be related to the executive function advantage in bilingual individuals.

Relevance: 30.00%

Abstract:

This doctoral thesis describes the development work performed on the leaching and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a matter of decreasing unit costs. The problems in the leaching were the low recovery of valuable metals from the raw materials and the fact that the available technology offered only complicated and expensive processes to overcome this. In the purification, the main problem was the consumption of zinc powder, up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is recirculated through the electrolysis, which is the absolute bottleneck in a zinc plant. Low selectivity gave low-grade, low-value precipitates for further processing to metallic copper, cadmium, cobalt and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent.

Studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process had been operated. In a later expansion of the plant, investment savings were substantial compared to the other technologies available.

In the solution purification, the key finding was that Co, Ni and Cu form specific arsenides in the “hot arsenic zinc dust” step. This was utilized in the development of a three-step purification stage based on fluidized bed technology in all three steps, i.e. the removal of Cu, Co and Cd. Both precipitation rates and selectivity increased, which strongly decreased the zinc powder consumption through substantially suppressed hydrogen gas evolution. Better selectivity also improved the value of the precipitates: cadmium, which caused environmental problems in the copper smelter, was reduced from the normally reported 1-3% down to 0.05%, and a cobalt cake with 15% Co was easily produced in laboratory experiments on cobalt removal. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only about 1.4 times the stoichiometric demand, or about a 60% saving in powder consumption.

Two processes for direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared to the existing pressure leach technology, savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the thesis, including mathematical modeling and thermodynamic evaluation of the experimental results and the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation. Even though these processes were developed with a focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants globally, and thus have great significance for the development of the electrolytic zinc process in general.
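
The "1.4 times the stoichiometric demand" figure can be reproduced with a quick calculation, assuming the usual 1:1 cementation stoichiometry (Zn + Me2+ -> Zn2+ + Me for each divalent impurity metal) and textbook atomic masses:

```python
# Back-of-the-envelope check of the "1.4 times stoichiometric" figure,
# assuming zinc displaces each divalent metal ion 1:1 by cementation:
#   Zn + Me^2+ -> Zn^2+ + Me
ATOMIC_MASS = {"Cu": 63.55, "Co": 58.93, "Ni": 58.69, "Cd": 112.41}  # g/mol
CONC_MG_L   = {"Cu": 1000,  "Co": 25,    "Ni": 30,    "Cd": 350}     # mg/l
M_ZN = 65.38  # g/mol

moles = sum(CONC_MG_L[m] / 1000 / ATOMIC_MASS[m] for m in ATOMIC_MASS)
stoichiometric = moles * M_ZN            # g/l of zinc powder required
print(f"stoichiometric demand: {stoichiometric:.2f} g/l")   # ~1.29 g/l
print(f"ratio at 1.8 g/l:      {1.8 / stoichiometric:.2f}") # ~1.4
```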

Relevance: 30.00%

Abstract:

This thesis concentrates on the validation of the generic thermal-hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal-hydraulic phenomena specific to this reactor design has been examined and shown to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the difficulty here lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable.

Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments, and the validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which are usually conducted using statistical methods only.
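
The abstract does not name the statistical methods it refers to. As one concrete example of the machinery commonly used for statistical uncertainty quantification in nuclear safety analysis, here is a minimal sketch of Wilks' tolerance-limit sample-size rule (an illustrative assumption, not necessarily the method used in the thesis):

```python
# Wilks' formula for a one-sided, first-order tolerance limit: the number
# of code runs n needed so that the maximum of n sampled results bounds
# the 95th percentile with 95% confidence satisfies 1 - 0.95**n >= 0.95.
# Widely used in nuclear safety analysis; named here as an illustration,
# not as the method of the thesis.
import math

coverage, confidence = 0.95, 0.95
n = math.ceil(math.log(1 - confidence) / math.log(coverage))
print(n)  # -> 59 runs
```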

Relevance: 30.00%

Abstract:

Greenhouse gases emitted from energy production and transportation are dramatically changing the climate of planet Earth. As a consequence, global warming is affecting the living conditions of numerous plant and animal species, including ours. The development of sustainable and renewable liquid fuels is therefore an essential global challenge in the effort to combat climate change. In the past decades many technologies have been developed as alternatives to the petroleum fuels currently in use, such as bioethanol and biodiesel. However, even with gradually increasing production, the market penetration of these first-generation biofuels is still relatively small compared to fossil fuels. Researchers realized long ago that there is a need for advanced biofuels with improved physical and chemical properties compared to bioethanol, based on biomass raw materials that do not compete with food production. Several target molecules have been identified as potential fuel candidates, such as alkanes, fatty acids, long carbon-chain alcohols and isoprenoids.

The current study focuses on the biosynthesis of butanol and propane as possible biofuels. The scope of this research was to investigate novel heterologous metabolic pathways and to identify bottlenecks for alcohol and alkane generation, using Escherichia coli as a model host microorganism. The first theme of the work studied the pathways generating butyraldehyde, the common denominator of butanol and propane biosynthesis. Two ways of generating butyraldehyde were described: one via the bacterial fatty acid elongation machinery, and the other via partial overexpression of the acetone-butanol-ethanol fermentation pathway found in Clostridium acetobutylicum. The second theme of the experimental work studied the reduction of butyraldehyde to butanol catalysed by various bacterial aldehyde-reductase enzymes, whereas the final part of the work investigated the in vivo kinetics of the cyanobacterial aldehyde deformylating oxygenase (ADO) for the generation of hydrocarbons.

The results showed that the novel butanol pathway, based on fatty acid biosynthesis and consisting of an acyl-ACP thioesterase and a carboxylic acid reductase, is tolerant to oxygen and thus an efficient alternative to the previous Clostridial pathways. It was also shown that butanol can be produced from acetyl-CoA using the acetoacetyl-CoA synthase (NphT7) or acetyl-CoA acetyltransferase (AtoB) enzymes. The study also demonstrated, for the first time, that bacterial biosynthesis of propane is possible. The efficiency of the system is clearly limited by the poor kinetic properties of the ADO enzyme, and for proper function in vivo the catalytic machinery requires a coupled electron relay system.
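
For reference, the final ADO-catalysed step described above is commonly summarized in the literature by the following overall reaction, shown here for butyraldehyde (the textbook stoichiometry with the electron-donor system simplified, not a result of the thesis):

```latex
% Overall reaction of aldehyde-deformylating oxygenase (ADO) acting on
% butyraldehyde: the C4 aldehyde loses its carbonyl carbon as formate,
% yielding the C3 alkane propane. Electron donor system simplified.
\mathrm{CH_3(CH_2)_2CHO} + \mathrm{O_2} + 4e^- + 4H^+
  \longrightarrow \mathrm{C_3H_8} + \mathrm{HCOOH} + \mathrm{H_2O}
```

The dependence on a supply of electrons in this reaction is why, as noted above, the catalytic machinery requires a coupled electron relay system in vivo.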

Relevance: 30.00%

Abstract:

The recent rapid development of biotechnological approaches has enabled the production of large, whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches from computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization.

Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis, and the second one examines cell lineage specification in mouse embryonic stem cells.