929 results for Based structure model
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. Nevertheless, integrative thinking has also been applied on a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century, however, that model building was required to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability to predict cellular behaviour in a way that reflects system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling hence bridges modern biology and computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in the past years, with major contributions being made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often require the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and one for the ErbB signalling pathway. The thesis spans several inherently quantitative formalisms used in computational systems biology (reaction-network models, rule-based models and Petri net models), as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
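The stimulus-generation pipeline described above (forward FB transform, band-pass on the coefficients, inverse transform) can be sketched in one radial dimension. This is a hypothetical illustration, not the study's code: the Gaussian profile, series length, grid, and band limits are invented assumptions, and a zero-order Bessel series stands in for the full two-dimensional FB decomposition.

```python
import numpy as np
from scipy.special import jv, jn_zeros

R, N = 1.0, 40                      # domain radius and number of FB modes (assumed)
r = np.linspace(0.0, R, 512)
dr = r[1] - r[0]
alpha = jn_zeros(0, N)              # first N positive zeros of J0

def trap(y):
    # trapezoidal rule on the uniform grid r
    return dr * (y.sum() - 0.5 * (y[0] + y[-1]))

def fb_forward(f):
    # c_n = 2 / (R^2 * J1(alpha_n)^2) * integral_0^R f(r) J0(alpha_n r / R) r dr
    return np.array([2.0 / (R**2 * jv(1, a)**2) * trap(f * jv(0, a * r / R) * r)
                     for a in alpha])

def fb_inverse(c):
    # f(r) = sum_n c_n J0(alpha_n r / R)
    return sum(cn * jv(0, a * r / R) for cn, a in zip(c, alpha))

f = np.exp(-5.0 * r**2)             # smooth radial profile (illustrative stand-in for an image)
c = fb_forward(f)

keep = (np.arange(N) >= 3) & (np.arange(N) < 10)    # band-pass: keep modes 3..9 only
f_band = fb_inverse(np.where(keep, c, 0.0))         # "filtered stimulus"
f_rec = fb_inverse(c)                                # full reconstruction, for sanity checking
```

The full reconstruction `f_rec` closely matches `f` away from the boundary, which is the sanity check that the band-passed profile `f_band` really is the band-limited content of the original.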
Abstract:
Current therapy for pancreatic cancer is multimodal, involving surgery and chemotherapy. However, the development of pancreatic cancer therapies requires a thorough evaluation of drug efficacy in vitro before animal testing and subsequent clinical trials. Compared to two-dimensional monolayer cell culture, three-dimensional (3-D) models more closely mimic native tissues, since the tumor microenvironment established in 3-D models often plays a significant role in cancer progression and cellular responses to drugs. Accumulating evidence has highlighted the benefits of 3-D in vitro models of various cancers. In the present study, we have developed a spheroid-based, 3-D culture of the pancreatic cancer cell lines MIAPaCa-2 and PANC-1 for pancreatic drug testing, using the acid phosphatase assay. Drug efficacy testing showed that spheroids had much higher drug resistance than monolayers. This model, which is reproducible, easy to use, and quick to handle, is the preferred choice for filling the gap between monolayer cell cultures and in vivo models in the process of drug development and testing for pancreatic cancer.
Abstract:
Fluid handling systems such as pump and fan systems are found to have a significant potential for energy efficiency improvements. To deliver the energy-saving potential, there is a need for easily implementable methods to monitor the system output. This is because information is needed to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable-speed drives have proven able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate in the whole operating range of the fluid handling device. To apply model-based system monitoring to a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, and so on. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models to address some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability, rather than its output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
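In its simplest form, generating tests from a behavioral model amounts to deriving input sequences that cover the model's transitions. The toy state machine below is a hypothetical illustration of that idea only; the thesis works with UML models and industrial tool-chains, and the states, events, and coverage criterion here are invented for the sketch:

```python
from collections import deque

# Hypothetical behavioral model: (state, event) -> next state
transitions = {
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "resume"): "Running",
    ("Running", "stop"): "Idle",
}

def generate_tests(transitions, initial):
    """One test (event sequence) per transition: reach the transition's
    source state via a shortest event path from `initial`, then fire it."""
    tests = []
    for (src, event), _dst in transitions.items():
        queue, seen, path = deque([(initial, [])]), {initial}, None
        while queue:                           # BFS over reachable states
            state, events = queue.popleft()
            if state == src:
                path = events
                break
            for (s, e), d in transitions.items():
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, events + [e]))
        if path is not None:                   # unreachable transitions are skipped
            tests.append(path + [event])
    return tests

tests = generate_tests(transitions, "Idle")
```

Each returned sequence can then be executed against the implementation while its responses are checked against the model, which is the essence of the model-based testing loop.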
Abstract:
In this paper, we review the advances of monocular model-based tracking over the ten-year period ending in 2014. In 2005, Lepetit et al. [19] reviewed the status of monocular model-based rigid-body tracking. Since then, direct 3D tracking has become a quite popular research area, but monocular model-based tracking should still not be forgotten. We mainly focus on tracking that could be applied to augmented reality, but some other applications are covered as well. Given the wide subject area, this paper tries to give a broad view of the research that has been conducted, giving the reader an introduction to the different disciplines that are tightly related to model-based tracking. The work has been conducted by searching through well-known academic search databases in a systematic manner, and by selecting certain publications for closer examination. We analyze the results by dividing the found papers into different categories according to their way of implementation. The issues which have not yet been solved are discussed. We also discuss emerging model-based methods, such as fusing different types of features and region-based pose estimation, which could show the way for future research in this subject.
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. An alternative approach, known as quantitative model refinement, ensures the preservation of the model fit without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef, which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool successfully reduces the time and resources needed, as well as the errors generated, by the traditional reiteration of the whole model to perform the fitting procedure.
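The fit-preservation idea behind quantitative model refinement can be shown on a toy reaction model: a species is split into subspecies that inherit its kinetic parameters, so the refined model reproduces the original dynamics without refitting. This is a minimal sketch of the principle, not ModelRef itself; the degradation model, rate constant, and initial split are invented assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Original model: one species A degrading at rate k, i.e. dA/dt = -k*A.
k = 0.7                                   # illustrative rate constant
t = np.linspace(0.0, 10.0, 50)

orig = solve_ivp(lambda _, y: [-k * y[0]], (0, 10), [1.0],
                 t_eval=t, rtol=1e-9, atol=1e-12)

# Data refinement: A is split into subspecies A1 and A2 that inherit k,
# with initial amounts summing to A's initial amount.
refined = solve_ivp(lambda _, y: [-k * y[0], -k * y[1]], (0, 10), [0.4, 0.6],
                    t_eval=t, rtol=1e-9, atol=1e-12)

# Fit preservation: the refined subspecies sum to the original species
# at every time point, so no refitting is required.
total = refined.y[0] + refined.y[1]
```

The same bookkeeping, carried out over every species and reaction of an SBML model, is what makes automated refinement cheaper than refitting after each iteration.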
Abstract:
It has long been known that amino acids are the building blocks of proteins and govern their folding into specific three-dimensional structures. However, the details of this process are still unknown and represent one of the main problems in structural bioinformatics, a highly active research area focused on the prediction of three-dimensional structure and its relationship to protein function. The protein structure prediction procedure encompasses several different steps, from searches and analyses of sequences and structures, through sequence alignment, to the creation of the structural model. Careful evaluation and analysis ultimately result in a hypothetical structure, which can be used to study biological phenomena in, for example, research at the molecular level, biotechnology and especially drug discovery and development. In this thesis, the structures of five proteins were modeled with template-based methods, which use proteins with known structures (templates) to model related or structurally similar proteins. The resulting models were an important asset for the interpretation and explanation of biological phenomena, such as amino acids and interaction networks that are essential for the function and/or ligand specificity of the studied proteins. The five proteins represent different case studies with their own challenges, such as varying template availability, which resulted in different structure prediction processes. This thesis presents the techniques and considerations which should be taken into account in the modeling procedure to overcome limitations and produce a reliable hypothetical three-dimensional structure.
As each project shows, the reliability is highly dependent on the extensive incorporation of experimental data or known literature and, although experimental verification of in silico results is always desirable to increase the reliability, the presented projects show that experimental studies can also greatly benefit from structural models. With the help of in silico studies, experiments can be targeted and precisely designed, thereby saving both money and time. As the programs used in structural bioinformatics are constantly improved and the range of templates increases through structural genomics efforts, the mutual benefits between in silico and experimental studies become even more prominent. Hence, reliable models of protein three-dimensional structures, achieved through careful planning and thoughtful execution, are, and will continue to be, valuable and indispensable sources of structural information to be combined with functional data.
Abstract:
The purpose of this thesis is to focus on credit risk estimation. Different credit risk estimation methods and characteristics of credit risk are discussed. The study is twofold, including an interview with a credit risk specialist and a quantitative section. The quantitative section applies the KMV model to estimate the credit risk of 12 sample companies from three different industries: automotive, banking and finance, and technology. The timeframe of the estimation is one year. On the basis of the KMV model and the interview, implications for the analysis of credit risk are discussed. The KMV model yields results consistent with the existing credit ratings. However, the banking and finance sector requires calibration of the model due to the high leverage of the industry. Credit risk is considerably driven by leverage and by the value and volatility of assets. Credit risk models produce useful information on the creditworthiness of a business. Yet, quantitative models often require qualitative support in the decision-making situation.
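The leverage, asset-value, and asset-volatility drivers mentioned above enter the KMV approach through a Merton-style distance to default. The sketch below shows that core computation under textbook assumptions; the input figures are invented for illustration, and the commercial KMV model maps the distance to default to default frequencies via an empirical database rather than the normal CDF used here:

```python
import math
from scipy.stats import norm

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton-style distance to default over horizon T (years).
    V: market value of assets, D: default point (liabilities),
    mu: expected asset return, sigma: asset volatility."""
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

def expected_default_frequency(dd):
    # Under the lognormal-asset assumption, EDF = N(-DD); KMV instead
    # uses an empirical DD-to-default-rate mapping.
    return norm.cdf(-dd)

# Illustrative firm: assets 120, default point 80, 8% drift, 25% volatility
dd = distance_to_default(V=120.0, D=80.0, mu=0.08, sigma=0.25, T=1.0)
edf = expected_default_frequency(dd)
```

Raising leverage (D closer to V) or asset volatility shrinks the distance to default and inflates the EDF, which is exactly why the highly leveraged banking sector requires recalibration of the default point.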
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008, the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent development in portfolio management suggests that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis tries to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: the in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation to riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
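The step where VAR forecasts become tactical tilts is the Black–Litterman posterior, which blends equilibrium returns with the manager's views. The sketch below implements the standard B–L master formula; the covariance matrix, the single relative view, τ, and Ω are illustrative assumptions, not the thesis's fixed income data:

```python
import numpy as np

def black_litterman(pi, Sigma, P, q, Omega, tau=0.05):
    """Posterior expected returns from the B-L master formula:
    E[R] = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]
    pi: equilibrium returns, Sigma: asset covariance,
    P: view pick matrix, q: view returns, Omega: view uncertainty."""
    ts_inv = np.linalg.inv(tau * Sigma)
    om_inv = np.linalg.inv(Omega)
    A = ts_inv + P.T @ om_inv @ P
    b = ts_inv @ pi + P.T @ om_inv @ q
    return np.linalg.solve(A, b)

pi = np.array([0.02, 0.03, 0.04])          # illustrative equilibrium returns
Sigma = np.array([[0.04, 0.01, 0.00],      # illustrative covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
P = np.array([[1.0, -1.0, 0.0]])           # one view: asset 1 outperforms asset 2 ...
q = np.array([0.01])                       # ... by 1% (e.g. a VAR forecast)
Omega = np.array([[0.0004]])               # confidence in that view
post = black_litterman(pi, Sigma, P, q, Omega)
```

A useful sanity check is that a "neutral" view (q equal to what the prior already implies) leaves the posterior at the equilibrium returns, which is the controlled-adjustment property noted above.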
Abstract:
In this work, an agent-based model (ABM) was proposed using the main idea from the Jabłonska-Capasso-Morale (JCM) model and the concept of maximized greediness. Using a multi-agent simulator, the power of the ABM was assessed against the historical prices of silver from 01.03.2000 to 01.03.2013. The model results, analysed in two different situations, with and without maximized greediness, have proven that the ABM is capable of explaining the silver price dynamics even in extreme events. The ABM without maximal greediness explained the prices with more irrationalities, whereas the ABM with maximal greediness tracked the price movements with more rational decisions. In the comparison test, the model without maximal greediness stood out as the best at capturing the silver market dynamics. Therefore, the proposed ABM confirms the suggested reasons for financial crises or market failures. It reveals that an economic or financial collapse may be stimulated by both irrational and rational decisions, yet irrationalities may dominate the market.
Abstract:
In this work, the magnetic field penetration depth for high-Tc cuprate superconductors is calculated using a recent Interlayer Pair Tunneling (ILPT) model proposed by Chakravarty, Sudbø, Anderson, and Strong [1] to explain high temperature superconductivity. This model involves a "hopping" of Cooper pairs between layers of the unit cell, which acts to amplify the pairing mechanism within the planes themselves. Recent work has shown that this model can account reasonably well for the isotope effect and the dependence of Tc on nonmagnetic in-plane impurities [2], as well as the Knight shift curves [3] and the presence of a magnetic peak in the neutron scattering intensity [4]. In the latter case, Yin et al. emphasize that the pair tunneling must be the dominant pairing mechanism in the high-Tc cuprates in order to capture the features found in experiments. The goal of this work is to determine whether or not the ILPT model can account for the experimental observations of the magnetic field penetration depth in YBa2Cu3O7−δ. Calculations are performed in the weak and strong coupling limits, and the effects of both small and large strengths of interlayer pair tunneling are investigated. Furthermore, as a follow-up to the penetration depth calculations, both the neutron scattering intensity and the Knight shift are calculated within the ILPT formalism. The aim is to determine whether the ILPT model can yield results consistent with experiments performed for these properties. The results for all three thermodynamic properties considered are not consistent with the notion that interlayer pair tunneling must be the dominant pairing mechanism in these high-Tc cuprate superconductors. Instead, it is found that reasonable agreement with experiments is obtained for small strengths of pair tunneling, and that large pair tunneling yields results which do not resemble those of the experiments.
Abstract:
To study emerging diseases, I employed a model pathogen-host system involving infections of insect larvae with the opportunistic fungus Aspergillus flavus, providing insight into three mechanisms of pathogen evolution, namely de novo mutation, genome decay, and virulence factor acquisition. In Chapter 2, as a foundational experiment, A. flavus was serially propagated through insects to study the evolution of an opportunistic pathogen during repeated exposure to a single host. While A. flavus displayed de novo phenotypic alterations, namely decreased saprobic capacity, the analysis of genotypic variation in Chapter 3 signified a host-imposed bottleneck on the pathogen population, emphasizing the host's role in shaping pathogen population structure. As described in Chapter 4, the serial passage scheme enabled the isolation of an A. flavus cysteine/methionine auxotroph with characteristics reminiscent of an obligate insect pathogen, suggesting that lost biosynthetic capacity may restrict host range based on nutrient availability and provide selection pressure for further evolution. As outlined in Chapter 6, cysteine/methionine auxotrophy had the pleiotropic effect of increasing virulence factor production, affording the slow-growing auxotroph a modified pathogenic strategy such that virulence was not reduced. Moreover, in Chapter 7, transformation with a virulence factor from a facultative insect pathogen failed to increase virulence, demonstrating the necessity of an appropriate genetic background for virulence factor acquisition to instigate pathogen evolution.
Abstract:
It is well accepted that structural studies with model membranes are of considerable value in understanding the structure of biological membranes. Many studies with models of pure phospholipids have been done, but the effects of divalent cations and protein on these models would make such studies more applicable to the intact membrane. The present study, performed with the above view, is a structural analysis of divalent ion-cardiolipin complexes using the technique of X-ray diffraction. Cardiolipin, precipitated from dilute solution by the divalent ions calcium, magnesium and barium, contains little water, and the structure formed is similar to the structure of pure cardiolipin with low water content. The calcium-cardiolipin complex forms a pure hexagonal type II phase that exists from 4° to 40° C. The molar ratio of calcium to cardiolipin in the complex is 1:1. Cardiolipin precipitated with magnesium and barium forms two co-existing phases, lamellar and hexagonal, the relative quantity of the two phases being dependent on temperature. The type II hexagonal phase, consisting of water-filled channels formed by adding calcium to cardiolipin, may have a remarkable permeability property in the intact membrane. Pure cardiolipin and insulin at pH 3.0 and 4.0 precipitate but form no organised structure. Lecithin/cardiolipin and insulin precipitated at pH 3.0 give a pure lamellar phase. As the lecithin/cardiolipin molar ratio changes from 93/7 to 50/50, (a) the repeat distance of the lamellae changes from 72.8 Å to 68.2 Å; (b) the amount of protein bound increases in such a way that the cardiolipin/insulin molar ratio in the complex reaches a maximum constant value at a lecithin/cardiolipin molar ratio of 70/30. A structural model based on these data shows that the molecular arrangement of lipid and protein is a lipid bilayer coated with protein molecules. The lipid-protein interaction is chiefly electrostatic, and little, if any, hydrophobic bonding occurs in this particular system. Thus, the proposed model is essentially the same as the Davson-Danielli model of the biological membrane.