14 results for STEPWISE

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 20.00%

Publisher:

Abstract:

Developing software is a difficult and error-prone activity. Furthermore, the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction – created by R.-J. Back – is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. The thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion in the thesis covers various aspects of software development that relate to Stepwise Feature Introduction. More specifically, we evaluate the paradigm against the common practices of object-oriented programming and design and against agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
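The layered construction that Stepwise Feature Introduction prescribes can be illustrated with a small object-oriented sketch. The counter example and class names below are hypothetical, not taken from the thesis; the point is that each layer adds one feature while preserving the behaviour of the layer below.

```python
# Hypothetical sketch of Stepwise Feature Introduction: each layer is a
# subclass that adds one feature while preserving the behaviour of the
# layer below (superposition refinement).

class CounterBase:
    """Layer 0: a bare counter."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class CounterWithReset(CounterBase):
    """Layer 1: adds a reset feature; layer 0 behaviour is unchanged."""
    def reset(self):
        self.value = 0

class CounterWithBound(CounterWithReset):
    """Layer 2: adds an upper bound, strengthening increment's guard."""
    def __init__(self, bound=10):
        super().__init__()
        self.bound = bound

    def increment(self):
        if self.value < self.bound:   # old behaviour preserved below the bound
            super().increment()

c = CounterWithBound(bound=2)
c.increment(); c.increment(); c.increment()
print(c.value)   # 2: the bound feature caps the counter
c.reset()
print(c.value)   # 0: the reset feature from layer 1 still works
```

Each layer remains usable on its own, which is what gives the resulting code its organised, reusable structure.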

Relevance: 10.00%

Publisher:

Abstract:

In this thesis different parameters influencing critical flux in protein ultrafiltration and membrane fouling were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and the basic theory of Partial Least Squares (PLS) analysis are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes and laboratory-scale apparatuses. Fouling was studied by flux, streaming potential and FTIR-ATR measurements. Critical flux was evaluated by different kinds of stepwise procedures and by both constant-pressure and constant-flux methods. The critical flux was affected by transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric point of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but no longer above this regime. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling in all charge conditions and, furthermore, especially at the beginning of the experiment, even at very low transmembrane pressures. Fouling of these membranes was thought to be due to protein adsorption by hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling. They became fouled at higher transmembrane pressures because of pore blocking. In this thesis some new aspects of critical flux are presented that are important for the ultrafiltration and fractionation of proteins.
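The stepwise critical-flux procedures mentioned above can be sketched in code. In the following illustrative example the data, tolerance and detection rule are invented for demonstration, not taken from the thesis: transmembrane pressure is stepped upwards and the first step at which the measured flux falls below the linear, non-fouling prediction is flagged.

```python
# Illustrative sketch of a stepwise constant-pressure critical-flux test:
# below the critical flux, permeate flux grows linearly with transmembrane
# pressure (TMP); the first step where the measured flux falls noticeably
# below the linear prediction is taken as the critical point.

def critical_flux(tmp_steps, fluxes, tolerance=0.05):
    """Return (TMP, flux) at the first step deviating from linearity.

    Assumes the first pressure step is sub-critical, so the linear
    slope can be estimated from it.
    """
    slope = fluxes[0] / tmp_steps[0]
    for tmp, flux in zip(tmp_steps, fluxes):
        predicted = slope * tmp
        if flux < (1 - tolerance) * predicted:   # fouling reduces the flux
            return tmp, flux
    return None                                  # critical flux not reached

tmp = [0.5, 1.0, 1.5, 2.0, 2.5]          # bar (illustrative)
flux = [10.0, 20.0, 30.0, 36.0, 38.0]    # L/(m^2 h); last steps fall off the line
print(critical_flux(tmp, flux))          # (2.0, 36.0)
```

A constant-flux variant would instead step the flux and watch for a continuously rising TMP at fixed flux.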

Relevance: 10.00%

Publisher:

Abstract:

Lipopolysaccharide (LPS), present on the outer leaflet of Gram-negative bacteria, is important for the adaptation of the bacteria to the environment. Structurally, LPS can be divided into three parts: lipid A, core and O-polysaccharide (OPS). OPS is the outermost and also the most diverse moiety. When OPS is composed of identical sugar residues it is called homopolymeric, and when it is composed of repeating units of oligosaccharides it is called heteropolymeric. Bacteria synthesize LPS at the inner membrane via two separate pathways, lipid A–core via one and OPS via the other. These are ligated together in the periplasmic space, and the completed LPS molecule is translocated to the surface of the bacteria. The genes directing OPS biosynthesis are often clustered, and the clusters directing the biosynthesis of heteropolymeric OPS often contain genes for i) the biosynthesis of the required NDP-sugar precursors, ii) the glycosyltransferases needed to build up the repeating unit, iii) the translocation of the completed O-unit to the periplasmic side of the inner membrane (flippase) and iv) the polymerization of the repeating units to complete the OPS. The aim of this thesis was to characterize the biosynthesis of the outer core (OC) of Yersinia enterocolitica serotype O:3 (YeO3). Y. enterocolitica is a member of the Gram-negative Yersinia genus and causes diarrhea, sometimes followed by reactive arthritis. The chemical structure of the OC and the nucleotide sequence of the gene cluster directing its biosynthesis were already known; however, no experimental evidence had been provided for the predicted functions of the gene products. The hypothesis was that OC biosynthesis would follow the pathway described for heteropolymeric OPS, i.e. a Wzy-dependent pathway. In this work the biochemical activities of two enzymes involved in NDP-sugar biosynthesis were established.
Gne was determined to be a UDP-N-acetylglucosamine 4-epimerase catalyzing the conversion of UDP-GlcNAc to UDP-GalNAc, and WbcP was shown to be a UDP-GlcNAc 4,6-dehydratase catalyzing the reaction that converts UDP-GlcNAc to the rare UDP-2-acetamido-2,6-dideoxy-D-xylo-hex-4-ulopyranose (UDP-Sugp). In this work, the linkage specificities and the order in which the different glycosyltransferases build up the OC onto the lipid carrier were also investigated. In addition, by using a site-directed mutagenesis approach, the catalytically important amino acids of Gne and of two of the characterized glycosyltransferases were identified. Evidence was also provided for the enzymes involved in the ligation of the OC and the OPS to the lipid A inner core. The importance of the OC to the physiology of Y. enterocolitica O:3 was defined by determining the minimum requirements for the OC to be recognized by a bacteriophage, a bacteriocin and a monoclonal antibody. The biological importance of the rare keto sugar (Sugp) was also shown. In conclusion, this work provides an extensive overview of the biosynthesis of the YeO3 OC, offering a substantial amount of information on the stepwise and coordinated synthesis of the YeO3 OC hexasaccharide and detailed information on its properties as a receptor.

Relevance: 10.00%

Publisher:

Abstract:

Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest-precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case-study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own.
The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
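The default execution model of guarded commands described above – repeatedly picking one enabled action nondeterministically until no guard holds – can be sketched as a small interpreter. The counter-style state and actions below are illustrative, not taken from the thesis.

```python
import random

# Minimal sketch of guarded-command execution: each action is a
# (guard, command) pair; at every step one enabled action is chosen
# nondeterministically, and the system terminates when no guard holds.

def run(state, actions, rng=random.Random(0)):
    while True:
        enabled = [cmd for guard, cmd in actions if guard(state)]
        if not enabled:               # no guard holds: the action system stops
            return state
        rng.choice(enabled)(state)    # nondeterministic choice among enabled actions

state = {"x": 0, "y": 5}
actions = [
    (lambda s: s["x"] < 3, lambda s: s.__setitem__("x", s["x"] + 1)),
    (lambda s: s["y"] > 0, lambda s: s.__setitem__("y", s["y"] - 1)),
]
print(run(state, actions))   # {'x': 3, 'y': 0} -- both guards eventually disabled
```

Whatever order the interpreter picks, the final state here is the same; scheduling, as studied in the thesis, amounts to constraining or replacing that nondeterministic choice.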

Relevance: 10.00%

Publisher:

Abstract:

Alnumycin A is an aromatic pyranonaphthoquinone (PNQ) polyketide closely related to the model compound actinorhodin. While some PNQ polyketides are glycosylated, alnumycin A contains a unique sugar-like dioxane moiety. This unusual structural feature made alnumycin A an interesting research target, since no information was available about its biosynthesis. Thus, the main objective of the thesis work became to identify the steps and the enzymes responsible for the biosynthesis of the dioxane moiety. Cloning, sequencing and heterologous expression of the complete alnumycin gene cluster from Streptomyces sp. CM020 enabled the inactivation of several alnumycin biosynthetic genes and the preliminary identification of the gene products responsible for pyran ring formation, quinone formation and dioxane biosynthesis. The individual deletions of the genes resulted in the production of several novel metabolites, which in many cases turned out to be pathway intermediates and could be used for the stepwise enzymatic reconstruction of the complete dioxane biosynthetic pathway in vitro. Furthermore, the in vitro reactions with purified alnumycin biosynthetic enzymes resulted in the production of other novel compounds, both pathway intermediates and side products. Identification and molecular-level studies of the enzymes AlnA and AlnB, which catalyze the first step of dioxane biosynthesis – an unusual C-ribosylation step – led to a mechanistic proposal for the C-ribosylation of the polyketide aglycone. The next step on the dioxane biosynthetic pathway was found to be the oxidative conversion of the attached ribose into a highly unusual dioxolane unit by Aln6, an enzyme belonging to an uncharacterized protein family; unexpectedly, the reaction occurred without any apparent cofactors. Finally, the last step of the pathway was found to be catalyzed by the NADPH-dependent reductase Aln4, which is able to catalyze the conversion of the formed dioxolane into a dioxane moiety.
The work presented here and the knowledge gained about the enzymes involved in dioxane biosynthesis enable their use in the rational design of novel compounds containing C–C-bound ribose, dioxolane and dioxane moieties.

Relevance: 10.00%

Publisher:

Abstract:

This Master's thesis examines the hydrogen network of the Porvoo oil refinery and considers means by which hydrogen use at the refinery could be made more efficient and the amount of hydrogen routed to the fuel gas network reduced. The starting point of the analysis is a hydrogen pinch analysis based on the hydrogen balance. The literature part introduces the units belonging to the refinery's hydrogen network and briefly discusses their operation. In addition, the principle of hydrogen pinch analysis is presented, together with how real process constraints can be taken into account when carrying it out. The literature part concludes by describing how the stepwise optimisation of a hydrogen network proceeds. In the applied part of the work, a flow diagram of the hydrogen network was drawn up, giving a comprehensive picture of hydrogen distribution at the refinery. A simplified version of the flow diagram was prepared, and a hydrogen balance was compiled on its basis. A hydrogen pinch analysis performed on the basis of the balance showed that the refinery produced a surplus of hydrogen at the time of the balance. To make hydrogen use at the refinery more efficient, the fuel gas stream of hydrogen sulphide recovery unit 2 should be minimised or utilised. In addition, the molecular masses used at the design points of the flow meters should be updated to correspond better to the current operating situation and should be monitored regularly in the future. The calibration of the online analysers measuring hydrogen concentration should also be maintained, and a sufficient number of field samples should be taken from the hydrogen network. It should be noted that minimising hydrogen production at an oil refinery is not always automatically the most economical solution. In some cases, raising the hydrogen partial pressure in the reactor of a hydrogen-consuming unit can increase the unit's profitability so much that it compensates for the costs of increased hydrogen production.
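The hydrogen balance underlying a pinch analysis can be sketched in a strongly simplified form. The flows, purities and unit names below are invented for illustration; a full pinch analysis would build a purity-ordered surplus diagram rather than a single total.

```python
# Simplified, illustrative hydrogen balance: producers (sources) and
# consumers (sinks) are listed with flow rate and hydrogen purity, and the
# net pure-hydrogen surplus is computed. A positive surplus means excess
# hydrogen that would otherwise end up in the fuel gas network.

def hydrogen_surplus(sources, sinks):
    """sources/sinks: lists of (flow, purity). Returns total H2 surplus."""
    produced = sum(flow * purity for flow, purity in sources)
    consumed = sum(flow * purity for flow, purity in sinks)
    return produced - consumed

sources = [(100.0, 0.95), (50.0, 0.80)]   # e.g. reformer + recovery unit (made up)
sinks = [(80.0, 0.90), (40.0, 0.75)]      # e.g. hydrotreating units (made up)
surplus = hydrogen_surplus(sources, sinks)
print(round(surplus, 1))   # 33.0 -> excess hydrogen available for reuse
```

In an actual pinch study the sources and sinks would be sorted by purity and the cumulative surplus tracked downwards; the purity at which the running surplus first reaches zero is the hydrogen pinch.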

Relevance: 10.00%

Publisher:

Abstract:

This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments of this dissertation addressed one of the three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music, (ii) rehearsed reading, during which the performer is already familiar with the music being played, and (iii) silent reading with no performance requirements. The use of the eye-tracking methodology allowed the recording of the readers’ eye movements during music reading with great precision. Due to the lack of coherence in the small body of prior studies on eye movements in music reading, the dissertation also had a heavy methodological emphasis. The present dissertation thus aimed to promote two major issues: (1) it investigated the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) it developed and tested suitable methods that can be used by future studies on the topic. Experiment I focused on the eye-movement behaviour of adults during their first steps of learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking part in a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were termed “novices”, whereas “amateurs” had had musical training prior to the experiment. The main interest was in the changes in the novices’ eye movements and performances across the measurements, while the amateurs offered a point of reference for the assessment of the novices’ development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group manifesting more back-and-forth eye movements.
The novices’ skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies’ metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns. Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar-long melodic alterations placed within a familiar melody. The synchronizing of the performance and eye-movement recordings enabled the investigation of the eye-hand span, i.e., the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated increased processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: these participants used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than students of music education. Although all participants produced performances almost indistinguishable in terms of their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise in terms of the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations. Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians’ silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course.
From a methodological perspective, a new opening for research on eye movements in music reading was the inclusion of a verbal protocol in the research design: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization for the verbal descriptions was developed in order to assess the quality of the extracted musical information. A more extensive musical background was related to shorter average fixation duration, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No apparent effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between groups of participants, combining the verbal and eye-movement data in a cluster analysis identified three styles of silent reading. This finding demonstrated individual differences in how the freely defined silent-reading task was approached. This dissertation is among the first presentations of a series of experiments systematically addressing the visual processing of music notation in various types of music-reading tasks and focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by “top-down” factors, such as musical background, but also by the “bottom-up” effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods for easing the interpretation of the eye-movement data.
To conclude, this dissertation suggests that advances in comprehending the cognitive aspects of music reading, the nature of expertise in this musical task, and the development of educational tools can be attained through the systematic application of the eye-tracking methodology also in this specific domain.
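The eye-hand span measured in Experiment II can be sketched as a simple computation. The data structures and timings below are hypothetical; the actual study synchronized performance and eye-movement recordings in a more elaborate way.

```python
# Hypothetical sketch of the eye-hand span: given timestamped performed
# notes and timestamped first fixations on the corresponding note symbols,
# the span per note is the time by which the gaze led the hand.

def eye_hand_spans(performed, fixated):
    """performed/fixated: dicts note_id -> time in seconds."""
    return {note: performed[note] - fixated[note]
            for note in performed if note in fixated}

performed = {"n1": 1.00, "n2": 1.50, "n3": 2.00}   # key-press times (made up)
fixated   = {"n1": 0.10, "n2": 0.45, "n3": 1.05}   # first-fixation times (made up)
spans = eye_hand_spans(performed, fixated)
mean_span = sum(spans.values()) / len(spans)
print(round(mean_span, 2))   # 0.97 -- close to the ~1 s reported above
```

Larger spans indicate that the reader's gaze runs further ahead of the performing hand, which the study links to expertise.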

Relevance: 10.00%

Publisher:

Abstract:

Changes in the electroencephalography (EEG) signal have been used to study the effects of anesthetic agents on brain function. Several commercial EEG-based anesthesia depth monitors have been developed to measure the level of the hypnotic component of anesthesia. Specific anesthetic-related changes can be seen in the EEG, but it still remains difficult to determine whether the subject is conscious or not during anesthesia. EEG reactivity to external stimuli may be seen in unconscious subjects, under anesthesia or even in coma. Changes in regional cerebral blood flow, which can be measured with positron emission tomography (PET), can be used as a surrogate for changes in neuronal activity. The aim of this study was to investigate the effects of dexmedetomidine, propofol, sevoflurane and xenon on the EEG and on the behavior of two commercial anesthesia depth monitors, Bispectral Index (BIS) and Entropy. Slowly escalating drug concentrations were used with dexmedetomidine, propofol and sevoflurane. EEG reactivity at a clinically determined similar level of consciousness was studied, and the performance of BIS and Entropy in differentiating consciousness from unconsciousness was evaluated. Changes in brain activity during emergence from dexmedetomidine- and propofol-induced unconsciousness were studied using PET imaging. Additionally, the effects on the EEG of normobaric hyperoxia, induced during denitrogenation prior to xenon anesthesia induction, were studied. Dexmedetomidine and propofol caused increases in low-frequency, high-amplitude (delta 0.5–4 Hz and theta 4.1–8 Hz) EEG activity as drug concentrations were increased stepwise from the awake state to unconsciousness. With sevoflurane, an increase in delta activity was also seen, and an increase in alpha to slow-beta (8.1–15 Hz) band power was seen with both propofol and sevoflurane.
EEG reactivity to a verbal command in the unconscious state was best retained with propofol, and almost disappeared with sevoflurane. The ability of BIS and Entropy to differentiate consciousness from unconsciousness was poor. At emergence from dexmedetomidine- and propofol-induced unconsciousness, activation was detected in deep brain structures, but not within the cortex. In xenon anesthesia, EEG band powers increased in the delta, theta and alpha (8–12 Hz) frequencies. In steady-state xenon anesthesia, BIS and Entropy indices were low, and these monitors seemed to work well. Normobaric hyperoxia alone did not cause changes in the EEG. All of these results are based on studies in healthy volunteers, and their application to clinical practice should be considered carefully.
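The band-power quantities reported above (delta, theta, alpha) can be computed from a periodogram. The following sketch uses a synthetic two-component signal and a plain FFT, not the clinical recordings or analysis pipeline of the study.

```python
import numpy as np

# Illustrative band-power computation: power in a frequency band is summed
# from the periodogram of a signal segment. The synthetic "EEG" below mixes
# a strong 2 Hz (delta) and a weak 10 Hz (alpha) component.

def band_power(signal, fs, low, high):
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)   # periodogram (unnormalised)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)              # a 4-second epoch
eeg = 3 * np.sin(2 * np.pi * 2 * t) + 1 * np.sin(2 * np.pi * 10 * t)

delta = band_power(eeg, fs, 0.5, 4)        # delta band, 0.5-4 Hz
alpha = band_power(eeg, fs, 8, 12)         # alpha band, 8-12 Hz
print(delta > alpha)   # True: the 2 Hz component dominates
```

In practice Welch averaging over windowed segments would be used instead of a single periodogram to reduce variance.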

Relevance: 10.00%

Publisher:

Abstract:

The most common reason for a low-voltage induction motor breakdown is a bearing failure. Along with the increasing popularity of modern frequency converters, bearing failures have become the most important motor fault type. Conditions in which bearing currents are likely to occur are generated as a side effect of fast du/dt switching transients. Once present, different types of bearing currents can accelerate the mechanical wear of bearings by causing deformation of metal parts in the bearing and degradation of the lubricating oil properties. The bearing current phenomena are well known, and several bearing current measurement and mitigation methods have been proposed. Nevertheless, in order to develop more feasible methods to measure and mitigate bearing currents, better knowledge of the phenomena is required. When mechanical wear is caused by bearing currents, the resulting aging impact has to be monitored and dealt with. Moreover, because of the stepwise aging mechanism, periodically executed condition monitoring measurements have been found ineffective. Thus, there is a need for feasible bearing current measurement methods that can be applied in parallel with the normal operation of series-production drive systems. In order to reach the objectives of feasibility and applicability, nonintrusive measurement methods are preferred. In this doctoral dissertation, the characteristics and conditions of bearings that are related to the occurrence of different kinds of bearing currents are studied. Further, the study introduces some nonintrusive radio-frequency-signal-based approaches to detect and measure parameters that are associated with the accelerated bearing wear caused by bearing currents.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis, stepwise titration with hydrochloric acid was used to obtain the chemical reactivities and dissolution rates of ground limestones and dolostones of varying geological backgrounds (sedimentary, metamorphic or magmatic). Two different ways of conducting the calculations were used: 1) a first-order mathematical model was used to calculate extrapolated initial reactivities (and dissolution rates) at pH 4, and 2) a second-order mathematical model was used to acquire integrated mean specific chemical reaction constants (and dissolution rates) at pH 5. The calculations of the reactivities and dissolution rates were based on the rate of change of pH and on the particle size distributions of the sample powders obtained by laser diffraction. The initial dissolution rates at pH 4 were repeatedly higher than previously reported literature values, whereas the dissolution rates at pH 5 were consistent with former observations. Reactivities and dissolution rates varied substantially for dolostones, whereas for limestones and calcareous rocks the variation can be primarily explained by relatively large sample standard deviations. In decreasing order of initial reactivity at pH 4, the dolostone samples rank as follows: 1) metamorphic dolostones with a calcite/dolomite ratio higher than about 6%, 2) sedimentary dolostones without calcite, and 3) metamorphic dolostones with a calcite/dolomite ratio lower than about 6%. The reactivity and dissolution rate measurements were accompanied by a wide range of experimental techniques to characterise the samples, to reveal how different rocks changed during the dissolution process, and to find out which factors had an influence on their chemical reactivities. An emphasis was put on the chemical and morphological changes taking place at the surfaces of the particles, studied via X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM).
Supporting chemical information was obtained with X-Ray Fluorescence (XRF) measurements of the samples, and with Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) and Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) measurements of the solutions used in the reactivity experiments. Information on mineral (modal) compositions and their occurrence was provided by X-Ray Diffraction (XRD), Energy Dispersive X-ray analysis (EDX) and the study of thin sections with a petrographic microscope. BET (Brunauer, Emmett, Teller) surface areas were determined from nitrogen physisorption data. The factors found to increase the chemical reactivity of dolostones and calcareous rocks were sedimentary origin, higher calcite concentration and lower quartz concentration. It is also assumed that finer grain size and larger BET surface area increase the reactivity, although no definite correlation was found in this thesis. Atomic concentrations did not correlate with the reactivities. Sedimentary dolostones, unlike metamorphic ones, were found to have porous surface structures after dissolution. In addition, conventional (XPS) and synchrotron-based (HRXPS) X-ray Photoelectron Spectroscopy were used to study bonding environments on calcite and dolomite surfaces. Both samples are insulators, which is why neutralisation measures such as an electron flood gun and a conductive mask were used. Surface core-level shifts of 0.7 ± 0.1 eV for the Ca 2p spectrum of calcite and 0.75 ± 0.05 eV for the Mg 2p and Ca 3s spectra of dolomite were obtained. Some satellite features of the Ca 2p, C 1s and O 1s spectra are suggested to be bulk plasmons. The origin of the carbide bonds was suggested to be beam-assisted interaction with hydrocarbons found on the surface. The results presented in this thesis are of particular importance for choosing raw materials for wet Flue Gas Desulphurisation (FGD) and for the construction industry.
Wet FGD benefits from high reactivity, whereas the construction industry can take advantage of the slow reactivity of the carbonate rocks often used in the facades of fine buildings. Information on chemical bonding environments may help to create more accurate models for the water-rock interactions of carbonates.
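The first-order treatment used for the pH 4 reactivities can be sketched as a simple fit. The kinetic form, data and units below are illustrative simplifications of the thesis procedure, which worked from the rate of change of pH and particle size distributions.

```python
import math

# Illustrative first-order fit: if acid consumption follows first-order
# kinetics, ln(c) falls linearly with time, and the slope gives the rate
# constant k, from which the initial dissolution rate can be extrapolated.

def first_order_fit(times, concentrations):
    """Least-squares fit of ln(c) = ln(c0) - k*t; returns (c0, k)."""
    n = len(times)
    xs, ys = times, [math.log(c) for c in concentrations]
    xm, ym = sum(xs) / n, sum(ys) / n
    k = -sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) \
        / sum((x - xm) ** 2 for x in xs)
    c0 = math.exp(ym + k * xm)
    return c0, k

t = [0.0, 10.0, 20.0, 30.0]                      # s (synthetic)
c = [1e-4 * math.exp(-0.05 * ti) for ti in t]    # mol/l, synthetic decay
c0, k = first_order_fit(t, c)
initial_rate = k * c0                            # extrapolated initial rate
print(round(k, 3))   # 0.05: the rate constant used to generate the data
```

The second-order model at pH 5 would replace the logarithmic form with the corresponding second-order integrated rate law.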

Relevance: 10.00%

Publisher:

Abstract:

Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activities. Failures of such systems might result in loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system is, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process for critical computer-based systems remains insufficiently defined, for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.

Relevance: 10.00%

Publisher:

Abstract:

In the field of molecular biology, scientists for decades adopted a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had also been applied, at a smaller scale, in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour in a way that reflects system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling, hence, bridges modern biology to computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques and perturbation analysis. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often require, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the other for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative – reaction-network models, rule-based models and Petri net models – as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
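Quantitative model refinement of the kind described above can be illustrated with a minimal mass-action example. The reaction, rate constant and refinement scheme below are invented for illustration: a species A in the model A → B is split into subspecies A1 and A2, and the refined model should reproduce the abstract dynamics when A1 + A2 is compared with A.

```python
# Illustrative sketch of quantitative (data) refinement of a reaction
# network: splitting a species into subspecies with the same kinetics must
# preserve the observable behaviour of the abstract model.

def simulate(state, reactions, k, dt=0.001, steps=5000):
    """Euler integration of first-order mass-action reactions src -> dst."""
    for _ in range(steps):
        for src, dst in reactions:
            flux = k * state[src] * dt
            state[src] -= flux
            state[dst] += flux
    return state

abstract = simulate({"A": 1.0, "B": 0.0}, [("A", "B")], k=0.5)
refined = simulate({"A1": 0.4, "A2": 0.6, "B": 0.0},
                   [("A1", "B"), ("A2", "B")], k=0.5)

# The refinement preserves observable behaviour: B matches, and A1 + A2
# tracks the abstract species A.
print(abs(abstract["B"] - refined["B"]) < 1e-9)   # True
```

Checking such correspondences systematically, rather than by simulation alone, is what a refinement framework like the one proposed in the thesis formalises.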

Relevance: 10.00%

Publisher:

Abstract:

Currently, power generation is one of the most significant aspects of life for the whole of mankind. One can barely imagine our life without electricity and thermal energy. Thus, different technologies for producing these types of energy need to be used. Each of these technologies will always have its own advantages and disadvantages. Nevertheless, every technology must satisfy such requirements as efficiency, ecological safety and reliability. In the case of power generation utilizing nuclear energy, these requirements must be strictly maintained, especially since accidents at nuclear power plants may have very long-term, deadly consequences. In order to prevent possible disasters related to accidents at nuclear power plants, strong and powerful algorithms have been developed in recent decades. Such algorithms are able to manage calculations of different physical processes and phenomena of real facilities. However, the results obtained by computation must be verified against experimental data.