47 results for Quantitative stability
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Effective tools are crucial, as they allow early policy actions to decrease or prevent the further build-up of risks, or to otherwise enhance the shock-absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: (i) build-up of widespread imbalances, (ii) exogenous aggregate shocks, and (iii) contagion. Accordingly, these systemic risks are matched by three categories of analytical methods for decision support: (i) early-warning models, (ii) macro stress-testing, and (iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet the literature lacks a focus on the third task, risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: (i) to function as a display for individual data concerning entities and their time series, and (ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods.
The following five questions comprise subsequent steps addressed in the process of this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. This thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: (i) fuzzifications, (ii) transition probabilities, and (iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
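The Self-Organizing Map at the core of the SOFSM can be illustrated with a minimal sketch. The grid size, learning-rate schedule, and the toy four-dimensional "indicator" data below are assumptions chosen for illustration only, not the thesis's actual training setup or data:

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny rectangular SOM on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.random((grid_h, grid_w, d))
    # Grid coordinates, used by the Gaussian neighborhood function
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood
        for x in data[rng.permutation(n)]:
            # Best-matching unit (BMU): node whose prototype is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU and its grid neighbors toward the observation
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Toy "financial indicator" data: two clusters of 4-dimensional observations
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (30, 4)), rng.normal(1.0, 0.1, (30, 4))])
som = train_som(data)
print(som.shape)  # (5, 5, 4): one prototype vector per map node
```

After training, each map node's prototype vector can be color-coded by one indicator at a time, which is the sense in which the map becomes a two-dimensional "display" onto which further information is linked.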
Abstract:
The aim of the present set of studies was to explore primary school children’s Spontaneous Focusing On quantitative Relations (SFOR) and its role in the development of rational number conceptual knowledge. The specific goals were to determine if it was possible to identify a spontaneous quantitative focusing tendency that indexes children’s tendency to recognize and utilize quantitative relations in non-explicitly mathematical situations and to determine if this tendency has an impact on the development of rational number conceptual knowledge in late primary school. To this end, we report on six original empirical studies that measure SFOR in children ages five to thirteen years and the development of rational number conceptual knowledge in ten- to thirteen-year-olds. SFOR measures were developed to determine if there are substantial differences in SFOR that are not explained by the ability to use quantitative relations. A measure of children’s conceptual knowledge of the magnitude representations of rational numbers and the density of rational numbers is utilized to capture the process of conceptual change with rational numbers in late primary school students. Finally, SFOR tendency was examined in relation to the development of rational number conceptual knowledge in these students. Study I concerned the first attempts to measure individual differences in children’s spontaneous recognition and use of quantitative relations in 86 Finnish children from the ages of five to seven years. Results revealed that there were substantial inter-individual differences in the spontaneous recognition and use of quantitative relations in these tasks. This was particularly true for the oldest group of participants, who were in grade one (roughly seven years old). However, the study did not control for ability to solve the tasks using quantitative relations, so it was not clear if these differences were due to ability or SFOR. 
Study II more deeply investigated the nature of the two tasks reported in Study I, through the use of a stimulated-recall procedure examining children’s verbalizations of how they interpreted the tasks. Results reveal that participants were able to verbalize reasoning about their quantitative relational responses, but not their responses based on exact number. Furthermore, participants’ non-mathematical responses revealed a variety of other aspects, beyond quantitative relations and exact number, which participants focused on in completing the tasks. These results suggest that exact number may be more easily perceived than quantitative relations. In addition, these tasks were revealed to contain both mathematical and non-mathematical aspects which were interpreted by the participants as relevant. Study III investigated individual differences in SFOR in 84 children, ages five to nine, from the US and is the first to report on the connection between SFOR and other mathematical abilities. The cross-sectional data revealed that there were individual differences in SFOR. Importantly, these differences were not entirely explained by the ability to solve the tasks using quantitative relations, suggesting that SFOR is partially independent from the ability to use quantitative relations. In other words, the lack of use of quantitative relations on the SFOR tasks was not solely due to participants being unable to solve the tasks using quantitative relations, but due to a lack of spontaneous attention to the quantitative relations in the tasks. Furthermore, SFOR tendency was found to be related to arithmetic fluency among these participants. This is the first evidence to suggest that SFOR may be a partially distinct aspect of children’s existing mathematical competences. Study IV presented a follow-up study of the first graders who participated in Studies I and II, examining SFOR tendency as a predictor of their conceptual knowledge of fraction magnitudes in fourth grade.
Results revealed that first graders’ SFOR tendency was a unique predictor of fraction conceptual knowledge in fourth grade, even after controlling for general mathematical skills. These results are the first to suggest that SFOR tendency may play a role in the development of rational number conceptual knowledge. Study V presents a longitudinal study of the development of 263 Finnish students’ rational number conceptual knowledge over a one-year period. During this time participants completed a measure of conceptual knowledge of the magnitude representations and the density of rational numbers at three time points. First, a Latent Profile Analysis indicated that a four-class model, differentiating between those participants with high magnitude comparison and density knowledge, was the most appropriate. A Latent Transition Analysis revealed that few students display sustained conceptual change with density concepts, though conceptual change with magnitude representations is present in this group. Overall, this study indicated that there were severe deficiencies in conceptual knowledge of rational numbers, especially concepts of density. The longitudinal Study VI presented a synthesis of the previous studies in order to specifically detail the role of SFOR tendency in the development of rational number conceptual knowledge. Thus, the same participants from Study V completed a measure of SFOR, along with the rational number test, including a fourth time point. Results revealed that SFOR tendency was a predictor of rational number conceptual knowledge after two school years, even after taking into consideration prior rational number knowledge (through the use of residualized SFOR scores), arithmetic fluency, and non-verbal intelligence. Furthermore, those participants with higher-than-expected SFOR scores improved significantly more on magnitude representation and density concepts over the four time points.
These results indicate that SFOR tendency is a strong predictor of rational number conceptual development in late primary school children. The results of the six studies reveal that, within children’s existing mathematical competences, a spontaneous quantitative focusing tendency, named spontaneous focusing on quantitative relations, can be identified. Furthermore, this tendency is found to play a role in the development of rational number conceptual knowledge in primary school children. Results suggest that conceptual change with the magnitude representations and density of rational numbers is rare among this group of students. However, those children who are more likely to notice and use quantitative relations in situations that are not explicitly mathematical seem to have an advantage in the development of rational number conceptual knowledge. It may be that these students gain quantitatively more and qualitatively better self-initiated deliberate practice with quantitative relations in everyday situations due to an increased SFOR tendency. This suggests that it may be important to promote this type of mathematical activity in teaching rational numbers. Furthermore, these results suggest that there may be a series of spontaneous quantitative focusing tendencies that have an impact on mathematical development throughout the learning trajectory.
Abstract:
Prostate cancer is a heterogeneous disease affecting an increasing number of men all over the world, but particularly in countries with a Western lifestyle. The best biomarker assay currently available for the diagnosis of the disease, the measurement of prostate specific antigen (PSA) levels from blood, lacks specificity, and even when combined with invasive tests such as digital rectal exam and prostate tissue biopsies, these methods can both miss cancers and lead to overdiagnosis and subsequent overtreatment. Moreover, they cannot provide an accurate prognosis for the disease. Due to the high prevalence of indolent prostate cancers, the majority of men affected by prostate cancer would be able to live without any medical intervention. Their latent prostate tumors would not cause any clinical symptoms during their lifetime, but few are willing to take the risk, as currently there are no methods or biomarkers to reliably differentiate the indolent cancers from the aggressive, lethal cases that really are in need of immediate medical treatment. This doctoral work concentrated on validating 12 novel candidate genes for use as biomarkers for prostate cancer by measuring their mRNA expression levels in prostate tissue and peripheral blood of men with cancer as well as unaffected individuals. The panel of genes included the most prominent markers in the current literature, PCA3 and the fusion gene TMPRSS2-ERG, in addition to BMP-6, FGF-8b, MSMB, PSCA, SPINK1, and TRPM8, and the kallikrein-related peptidase genes 2, 3, 4, and 15. For this purpose, truly quantitative reverse-transcription PCR assays were developed for each of the genes; time-resolved fluorometry was applied in the real-time detection of the amplification products, and the gene expression data were normalized by using artificial internal RNA standards.
Cancer-related, statistically significant differences in gene transcript levels were found for TMPRSS2-ERG, PCA3, and, on a more modest scale, for KLK15, PSCA, and SPINK1. PCA3 RNA was found in the blood of men with metastatic prostate cancer, but not in localized cases of cancer, suggesting limitations for using this method for early cancer detection in blood. TMPRSS2-ERG mRNA transcripts were found more frequently in cancerous than in benign prostate tissues, but they were also present in 51% of the histologically benign prostate tissues of men with prostate cancer, while being absent in specimens from men without any signs of prostate cancer. PCA3 was shown to be 5.8 times overexpressed in cancerous tissue, but, similarly to the fusion gene mRNA, its levels were also upregulated in the histologically benign regions of the tissue if the corresponding prostate was harboring carcinoma. These results indicate a possibility to utilize these molecular assays to assist in prostate cancer risk evaluation, especially in men with initially histologically negative biopsies.
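Relative overexpression figures such as the 5.8-fold PCA3 result come from normalized RT-qPCR quantification. The thesis normalized against artificial internal RNA standards; the sketch below instead illustrates the widely used 2^(-ΔΔCt) shortcut, with entirely hypothetical Ct values:

```python
# Sketch of relative quantification via the standard 2^(-ΔΔCt) method.
# All Ct values are hypothetical; the thesis itself normalized expression
# against artificial internal RNA standards rather than a reference gene.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in 'case' vs. 'control' samples."""
    dct_case = ct_target_case - ct_ref_case  # normalize to reference gene
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    ddct = dct_case - dct_ctrl
    return 2 ** (-ddct)  # each PCR cycle is roughly a doubling

fc = fold_change(22.0, 18.0, 25.0, 18.5)
print(round(fc, 2))  # ≈ 5.66-fold overexpression in this toy example
```

Lower Ct means the amplification threshold is reached earlier, i.e. more starting template; the exponentiation converts the cycle difference back to a linear expression ratio.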
Abstract:
Systemic iron overload (IO) is considered a principal determinant in the clinical outcome of different forms of IO and in allogeneic hematopoietic stem cell transplantation (alloSCT). However, indirect markers for iron do not provide exact quantification of iron burden, and the evidence of iron-induced adverse effects in hematological diseases has not been established. Hepatic iron concentration (HIC) has been found to represent systemic IO, and it can be quantified safely with magnetic resonance imaging (MRI), based on enhanced transverse relaxation. The iron measurement methods by MRI are evolving. The aims of this study were to implement and optimise the methodology of non-invasive iron measurement with MRI to assess the degree and the role of IO in the patients. An MRI-based HIC method (M-HIC) and a transverse relaxation rate (R2*) from M-HIC images were validated. Thereafter, a transverse relaxation rate (R2) from spin-echo imaging was calibrated for IO assessment. Two analysis methods, visual grading and rSI, for rapid IO grading from in-phase and out-of-phase images were introduced. Additionally, clinical iron indicators were evaluated. The degree of hepatic and cardiac iron in our study patients, and IO as a prognostic factor in patients undergoing alloSCT, were explored. In vivo and in vitro validations indicated that M-HIC and R2* are both accurate in the quantification of liver iron. R2 was a reliable method for HIC quantification and covered a wider HIC range than M-HIC and R2*. The grading of IO could be performed rapidly with the visual grading and rSI methods. Transfusion load was more accurate than plasma ferritin in predicting transfusional IO. In patients with hematological disorders, hepatic IO was frequent, in contrast to cardiac IO. Patients with myelodysplastic syndrome were found to be the most susceptible to IO.
Pre-transplant IO predicted severe infections during the early post-transplant period, whereas it was associated with a reduced risk of graft-versus-host disease. The poor transplantation outcomes associated with iron are most likely mediated by severe infections.
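The relaxation-rate quantification underlying these methods can be illustrated with a small sketch: in a multi-echo gradient-echo acquisition, liver signal decays approximately as S(TE) = S0·exp(-R2*·TE), so R2* falls out of a log-linear fit over the echo times. The echo times, signal amplitude, and R2* value below are assumed, noiseless toy numbers, not the study's calibration data:

```python
import numpy as np

# Simulated multi-echo gradient-echo signal: S(TE) = S0 * exp(-R2* * TE)
TE = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5]) / 1000.0  # echo times, seconds
R2star_true = 150.0  # 1/s, an assumed value for an iron-loaded liver
S0 = 1000.0          # assumed signal at TE = 0
signal = S0 * np.exp(-R2star_true * TE)

# Log-linear least-squares fit: log S = log S0 - R2* * TE,
# so R2* is recovered as minus the fitted slope.
slope, intercept = np.polyfit(TE, np.log(signal), 1)
R2star_fit = -slope
print(round(R2star_fit, 1))  # 150.0 (exact here, since the data are noiseless)
```

Higher iron concentration shortens T2* and thus raises R2* = 1/T2*, which is why R2* (and R2 from spin-echo imaging) can be calibrated against hepatic iron concentration.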
Abstract:
In recent years, there have been studies that show a correlation between the hyperactivity of children and the use of artificial food additives, including colorants. This has, in part, led to a preference for natural products over products with artificial additives. Consumers have also become more aware of health issues. Natural food colorants have many bioactive functions, mainly the vitamin A activity of carotenoids and antioxidant activity, and therefore they could be more easily accepted by consumers. However, natural colorant compounds are usually unstable, which restricts their usage. Microencapsulation could be one way to enhance the stability of natural colorant compounds and thus enable their better use as food colorants. Microencapsulation is a term used for processes in which the active material is totally enveloped in a coating or capsule, and thus it is separated and protected from the surrounding environment. In addition to protection by the capsule, microencapsulation can also be used to modify the solubility and other properties of the encapsulated material, for example, to incorporate fat-soluble compounds into aqueous matrices. The aim of this thesis work was to study the stability of two natural pigments, lutein (a carotenoid) and betanin (a betalain), and to determine possible ways to enhance their stability with different microencapsulation techniques. Another aim was the extraction of pigments without the use of organic solvents and the development of previously used extraction methods. The stability of pigments in microencapsulated pigment preparations, and in model foods containing these, was studied by measuring the pigment content after storage in different conditions. Preliminary studies on the bioavailability of microencapsulated pigments, and sensory evaluation for consumer acceptance of model foods containing microencapsulated pigments, were also carried out.
Enzyme-assisted oil extraction was used to extract lutein from marigold (Tagetes erecta) flowers without organic solvents, and the yield was comparable to solvent extraction of lutein from the same flowers. The effects of temperature, extraction time, and beet:water ratio on the extraction efficiency of betanin from red beet (Beta vulgaris) were studied, and the optimal conditions for maximum yield and maximum betanin concentration were determined. In both cases, extraction at 40 °C was better than extraction at 80 °C, and extraction for five minutes was as efficient as 15 or 30 minutes. For maximum betanin yield, a beet:water ratio of 1:2 was better, with possibly repeated extraction, but for maximum betanin concentration, a ratio of 1:1 was better. Lutein was incorporated into oil-in-water (o/w) emulsions with a polar oil fraction from oat (Avena sativa) as an emulsifier and mixtures of guar gum and xanthan gum or locust bean gum and xanthan gum as stabilizers to retard creaming. The stability of lutein in these emulsions was quite good, with 77 to 91 percent of lutein remaining after storage in the dark at 20 to 22 °C for 10 weeks, whereas in spray-dried emulsions the retention of lutein was 67 to 75 percent. The retention of lutein in oil was also good, at 85 percent. Betanin was incorporated into the inner w1 water phase of a water1-in-oil-in-water2 (w1/o/w2) double emulsion with a primary w1/o emulsion droplet size of 0.34 μm, a secondary w1/o/w2 emulsion droplet size of 5.5 μm, and an encapsulation efficiency of betanin of 89 percent. In vitro intestinal lipid digestion was performed on the double emulsion; during the first two hours, coalescence of the inner water phase droplets was observed, and the sizes of the double emulsion droplets increased quickly because of aggregation. This period also corresponded to a gradual release of betanin, with a final release of 35 percent. The double emulsion structure was retained throughout the three-hour experiment.
Betanin was also spray dried and incorporated into model juices with different pH and dry matter content. Model juices were stored in the dark at -20, 4, 20–24 or 60 °C (accelerated test) for several months. Betanin degraded quite rapidly in all of the samples, and higher temperature and lower pH accelerated degradation. The stability of betanin was much better in the spray-dried powder, with practically no degradation during six months of storage in the dark at 20 to 24 °C, and good stability also for six months in the dark at 60 °C, with 60 percent retention. Consumer acceptance of model juices colored with spray-dried betanin was compared with similar model juices colored with anthocyanins or beet extract. Consumers preferred the beet extract and anthocyanin colored model juices over juices colored with spray-dried betanin. However, spray-dried betanin did not impart any off-odors or off-flavors into the model juices, contrary to the beet extract. In conclusion, this thesis describes novel solvent-free extraction and encapsulation processes for lutein and betanin from plant sources. Lutein showed good stability in oil and in o/w emulsions, but slightly inferior stability in spray-dried emulsions. In vitro intestinal lipid digestion showed good stability of the w1/o/w2 double emulsion and quite high retention of betanin during digestion. Consumer acceptance of model juices colored with spray-dried betanin was not as good as that of juices colored with anthocyanins, but adding betanin to real berry juice, where it would mix with the berries' natural anthocyanins, could produce a more acceptable color. Overall, further studies are needed to obtain natural colorants with good stability for use in food products.
Abstract:
Time series of hourly electricity spot prices have peculiar properties. Electricity is by its nature difficult to store and has to be available on demand. There are many reasons for wanting to understand correlations in price movements, e.g. for risk management purposes. The entire analysis carried out in this thesis has been applied to the New Zealand nodal electricity prices: offer prices (from 29 May 2002 to 31 March 2009) and final prices (from 1 January 1999 to 31 March 2009). In this thesis, such natural factors as the location of the node and the generation type in the node, which affect the correlation between nodal prices, have been reviewed. It was noticed that the geographical factor affects the correlation between nodes more than the others, and therefore the correlated nodes were visualised. However, for the offer prices a clear separation of correlated and uncorrelated nodes was not obtained. Finally, it was concluded that the location factor most strongly affects the correlation of electricity nodal prices; the problems in visualisation are probably associated with power losses when power is transmitted over long distances.
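The kind of nodal-price correlation analysis described above can be sketched as follows. The four "nodes" and their shared regional price drivers are simulated toy data, not the New Zealand series:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 500
# Hypothetical hourly prices for four nodes: nodes 0-1 share one regional
# driver, nodes 2-3 another, mimicking a geographical (location) factor.
regional_a = rng.normal(50, 10, hours)
regional_b = rng.normal(60, 10, hours)
prices = np.column_stack([
    regional_a + rng.normal(0, 1, hours),  # node 0
    regional_a + rng.normal(0, 1, hours),  # node 1
    regional_b + rng.normal(0, 1, hours),  # node 2
    regional_b + rng.normal(0, 1, hours),  # node 3
])

# Pairwise Pearson correlations between the nodal price series
corr = np.corrcoef(prices, rowvar=False)
# Nodes sharing a regional driver correlate strongly; others do not
print(corr[0, 1] > 0.9, abs(corr[0, 2]) < 0.3)
```

In real data, transmission losses over long distances weaken this clean block structure, which is consistent with the visualisation difficulties reported for the offer prices.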
Abstract:
There is currently little empirical knowledge regarding the construction of a musician’s identity and social class. With a theoretical framework based on Bourdieu’s (1984) distinction theory, Bronfenbrenner’s (1979) theory of ecological systems, and the identity theories of Erikson (1950; 1968) and Marcia (1966), a survey called the Musician’s Social Background and Identity Questionnaire (MSBIQ) is developed to test three research hypotheses related to the construction of a musician’s identity, social class and ecological systems of development. The MSBIQ is administered to the music students at Sibelius Academy of the University of Arts Helsinki and Helsinki Metropolia University of Applied Sciences, representing the ’highbrow’ and the ’middlebrow’ samples in the field of music education in Finland. Acquired responses (N = 253) are analyzed and compared with quantitative methods including Pearson’s chi-square test, factor analysis and an adjusted analysis of variance (ANOVA). The study revealed that (1) the music students at Sibelius Academy and Metropolia construct their subjective musician’s identity differently, but (2) social class does not affect this identity construction process significantly. In turn, (3) the ecological systems of development, especially the individual’s residential location, do significantly affect the construction of a musician’s identity, as well as the age at which one starts to play one’s first musical instrument. Furthermore, a novel finding related to the structure of a musician’s identity was the tripartite model of musical identity consisting of the three dimensions of a musician’s identity: (I) ’the subjective dimension of a musician’s identity’, (II) ’the occupational dimension of a musician’s identity’ and, (III) ’the conservative-liberal dimension of a musician’s identity’. 
According to this finding, a musician’s identity is not a uniform, coherent entity, but a structure consisting of different elements continuously working in parallel within different dimensions. The results and limitations related to the study are discussed, as well as the objectives related to future studies using the MSBIQ to research the identity construction and social backgrounds of a musician or other performing artists.
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking was still applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour to reflect system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling, hence, bridges modern biology to computer science, enabling a number of assets which prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
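The core idea of fit-preserving refinement can be illustrated with a minimal sketch: splitting one species into subtypes while reusing the kinetic constants leaves the aggregate behaviour of the basic model unchanged, so no refitting is needed. The degradation reaction, rate constant, and split fraction below are invented for illustration, not taken from the heat shock response or ErbB models:

```python
import numpy as np

# Data refinement sketch: species A is split into subtypes A1 and A2.
# Keeping the kinetic rate constant identical for both subtypes and only
# splitting the initial amount preserves the basic model's fit, because
# A1(t) + A2(t) == A(t) for mass-action degradation A -> (nothing).
k = 0.3       # degradation rate constant (assumed)
A0 = 10.0     # initial amount of A (assumed)
split = 0.4   # fraction of A0 assigned to subtype A1 in the refined model
t = np.linspace(0, 10, 101)

A_basic = A0 * np.exp(-k * t)             # basic model trajectory
A1 = split * A0 * np.exp(-k * t)          # refined model, subtype 1
A2 = (1 - split) * A0 * np.exp(-k * t)    # refined model, subtype 2

# The refined model reproduces the basic model exactly in aggregate
print(bool(np.allclose(A1 + A2, A_basic)))
```

Real refinements must also choose consistent rate constants for every reaction in which the refined species participates; this sketch only shows the simplest fit-preserving choice, where all subtypes inherit the parent's constants.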
Abstract:
Since cellulose is a linear macromolecule, it can be used as a material for regenerated cellulose fiber products, e.g. in textile fibers or film manufacturing. Cellulose is not thermoformable; thus, the manufacturing of these regenerated fibers is mainly possible through dissolution processes preceding the regeneration process. However, the dissolution of cellulose in common solvents is hindered by inter- and intra-molecular hydrogen bonds in the cellulose chains, and by its relatively high crystallinity. Interestingly, at subzero temperatures relatively dilute sodium hydroxide solutions can be used to dissolve cellulose to a certain extent. The objective of this work was to investigate the possible factors that govern the solubility of cellulose in aqueous NaOH, as well as the solution stability. Cellulose-NaOH solutions have a tendency to form a gel over time and at elevated temperature, which creates challenges for further processing. The main target of this work was to achieve high solubility of cellulose in aqueous NaOH without excessively compromising the solution stability. In the literature survey, an overview of cellulose dissolution is given, and possible factors contributing to the solubility and solution properties of cellulose in aqueous NaOH are reviewed. Furthermore, the concept of solution rheology is discussed. In the experimental part, the focus was on the characterization of the materials used and the properties of the prepared solutions, mainly concentrating on cellulose solubility and solution stability.
Abstract:
The objective of this study was to develop laboratory test methods for characterizing the effects of changed moisture content on paperboard trays produced by a press-forming process. The influence of moisture on the properties of unconverted paperboard, such as bending stiffness, bursting strength, and curling, was studied. Paperboard and tray samples were tested after storage in different relative humidity conditions (35, 50, 65, 80 and 95% RH). The effect of PE and PET extrusion coatings on these properties was also studied. It was found that an increase in the moisture content of paperboard decreases the bending and bursting strength, dimensional stability and stiffness of paperboard trays. Such physical and mechanical properties as the bending stiffness and curling of paperboard seem to define the stiffness of ready-made trays and their dimensional stability. Paperboards and trays with one-sided extruded PE and PET coatings demonstrated higher strength properties, but at the same time had lower dimensional stability compared to uncoated paperboards. Samples with a smaller polymer coat weight had better dimensional stability than respective samples with a higher coat weight. It was also found that preconditioning of paperboard in a lower-humidity environment before press-forming could improve the dimensional stability and stiffness of the ready-made tray.
Abstract:
The purpose of this paper is to examine the stability and predictive abilities of the beta coefficients of individual equities in the Finnish stock market. As beta is widely used in several areas of finance, including risk management, asset pricing and performance evaluation among others, it is important to understand its characteristics and find out whether its estimates can be trusted and utilized.
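Beta itself is the slope from regressing an equity's returns on market returns. A minimal estimation sketch with simulated daily returns follows; the return parameters and the "true" beta are assumptions for illustration, not estimates from Finnish market data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 250  # roughly one year of daily returns (assumed sample size)
market = rng.normal(0.0005, 0.01, n)          # simulated market returns
true_beta = 1.2                               # assumed systematic exposure
stock = 0.0001 + true_beta * market + rng.normal(0, 0.005, n)  # stock returns

# OLS slope estimate: beta = Cov(stock, market) / Var(market)
beta_hat = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)
print(round(beta_hat, 2))  # close to the true beta of 1.2
```

Stability questions of the kind the thesis examines amount to re-running this regression over successive estimation windows and asking whether the estimates drift, and predictive-ability questions to whether a past window's beta forecasts the next window's.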
Abstract:
THE WAY TO ORGANIZATIONAL LONGEVITY – Balancing stability and change in Shinise firms
The overall purpose of this dissertation is to investigate the secret of longevity in Shinise firms. On the basic assumption that organizational longevity is about balancing stability and change, the theoretical perspectives incorporate routine practice, organizational culture, and organizational identity. These theories explain stability and change in an organization separately and in combination. Qualitative inductive theory building was used in the study. Overall, the empirical data comprised 75 in-depth and semi-structured interviews, 137 archival materials, and observations made over 17 weeks. According to the empirical findings, longevity in Shinise firms is attributable to the internal mechanisms (Shinise tenacity, stability in motion, and emergent change) that secure a balance between stability and change, to the continuing stability of the socio-cultural environment in the local community, and to active interaction between organizational and local cultures. The contribution of the study to the literature on organizational longevity and the alternative theoretical approaches is, first, in theorizing the mechanisms of Shinise tenacity and cross-level cultural dynamism, and second, in pointing out the critical role of the way firms set their ultimate goal, the dynamism in culture, and the effect of the firm's history on its current business in securing longevity.
KEY WORDS: Change; Culture; Organizational identity; Organizational longevity; Routines; Shinise firms; Stability; Qualitative research
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach that ensures the preservation of the model fit without the need to refit the model after each refinement iteration; this approach is known as quantitative model refinement. The aim of this thesis is to develop and implement a tool called ModelRef, which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as one of the Anduril framework components. ModelRef performs the data refinement of a model and generates the results in two well-known formats (SBML and CPS). The development of this tool successfully reduces the time and resources needed, as well as the errors generated by the traditional reiteration of the whole model to perform the fitting procedure.