969 results for Consistent term structure models
Abstract:
It has long been known that amino acids are the building blocks of proteins and govern their folding into specific three-dimensional structures. However, the details of this process are still unknown and represent one of the main problems in structural bioinformatics, a highly active research area focused on the prediction of three-dimensional structure and its relationship to protein function. The protein structure prediction procedure encompasses several steps, from searches and analyses of sequences and structures, through sequence alignment, to the creation of the structural model. Careful evaluation and analysis ultimately result in a hypothetical structure, which can be used to study biological phenomena in, for example, research at the molecular level, biotechnology, and especially drug discovery and development. In this thesis, the structures of five proteins were modeled with template-based methods, which use proteins with known structures (templates) to model related or structurally similar proteins. The resulting models were an important asset for the interpretation and explanation of biological phenomena, such as the amino acids and interaction networks that are essential for the function and/or ligand specificity of the studied proteins. The five proteins represent different case studies, each with its own challenges, such as varying template availability, which resulted in different structure prediction processes. This thesis presents the techniques and considerations that should be taken into account in the modeling procedure to overcome limitations and produce a reliable hypothetical three-dimensional structure. As each project shows, reliability is highly dependent on the extensive incorporation of experimental data and the known literature, and although experimental verification of in silico results is always desirable to increase reliability, the presented projects show that experimental studies can also greatly benefit from structural models. With the help of in silico studies, experiments can be targeted and precisely designed, saving both money and time. As the programs used in structural bioinformatics are constantly improved and the range of templates increases through structural genomics efforts, the mutual benefits between in silico and experimental studies become even more prominent. Hence, reliable models of protein three-dimensional structures, achieved through careful planning and thoughtful execution, are, and will continue to be, valuable and indispensable sources of structural information to be combined with functional data.
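The template-identification step described above can be made concrete with a minimal, hedged sketch: scoring a query sequence against candidate template sequences with Biopython's PairwiseAligner. The sequences, PDB-style identifiers, and scoring parameters below are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch of template selection by pairwise sequence alignment.
# All sequences, identifiers, and scoring parameters are illustrative.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0
aligner.extend_gap_score = -0.5

query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"      # hypothetical target
templates = {                                    # hypothetical templates
    "1ABC": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA",
    "2XYZ": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",
}
scores = {pdb: aligner.score(query, seq) for pdb, seq in templates.items()}
best = max(scores, key=scores.get)
print(f"best template: {best} (score {scores[best]:.1f})")
```

In practice the search runs against whole structure databases, and the best-scoring hits with known structures serve as the modeling templates.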
Abstract:
This Master’s thesis analyses the effectiveness of different hedging models for the BRICS (Brazil, Russia, India, China, and South Africa) countries. Hedging performance is examined by comparing two dynamic hedging models to a conventional OLS regression-based model. The dynamic hedging models employed are Constant Conditional Correlation (CCC) GARCH(1,1) and Dynamic Conditional Correlation (DCC) GARCH(1,1) with Student’s t-distribution. In order to capture both the Great Moderation and the latest financial crisis, the sample period extends from 2003 to 2014. To determine whether the dynamic models outperform the conventional one, the reduction of portfolio variance for in-sample data with contemporaneous hedge ratios is first determined, and then the holding period of the portfolios is extended to one and two days. In addition, the accuracy of hedge ratio forecasts is examined on the basis of out-of-sample variance reduction. The results are mixed and suggest that dynamic hedging models may not provide enough benefit to justify the more demanding estimation and daily portfolio adjustment. In this sense, the results are consistent with the existing literature.
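The conventional benchmark in such comparisons is the static minimum-variance hedge ratio, h* = Cov(s, f)/Var(f), which the OLS regression of spot on futures returns estimates directly; the CCC and DCC models replace the unconditional moments with time-varying conditional ones. A minimal sketch of the static case, on simulated stand-in return series, follows.

```python
# Minimal sketch of the OLS (minimum-variance) hedge ratio and the
# in-sample variance reduction; the series are simulated stand-ins,
# not BRICS spot and futures data.
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(0.0, 0.012, 2500)             # futures returns
s = 0.9 * f + rng.normal(0.0, 0.004, 2500)   # correlated spot returns

h_ols = np.cov(s, f, ddof=1)[0, 1] / np.var(f, ddof=1)   # h* = Cov/Var
hedged = s - h_ols * f                                    # hedged portfolio

reduction = 1.0 - np.var(hedged, ddof=1) / np.var(s, ddof=1)
print(f"h* = {h_ols:.3f}, in-sample variance reduction = {reduction:.1%}")
```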
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has shifted from between companies to between networks. Companies have focused on their core functions and outsourced support services such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks, and as a result, a growing need has arisen for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common definition for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network (consisting of customer companies that buy maintenance services and various supplier companies). Each network member can input their own cost and profit data related to the maintenance services of one item. The model then calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company, without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in a network environment; it enables analysing the past and making scenarios for the future, and offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step process for designing a life-cycle costing model in a network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback, and the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspectives of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, combined with value thinking, promotes the notion of maintenance shifting from a “cost maker” towards a “value creator”.
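The model's core calculation, discounting each member's maintenance cash flows to net present values, can be illustrated with a minimal sketch; all figures and member labels below are placeholders, not values from the thesis.

```python
# Minimal sketch of the LCM's core calculation: the net present value of
# item-level maintenance cash flows, seen separately by each network
# member. All figures and member names are illustrative placeholders.
def npv(cash_flows, rate):
    """Discount yearly net cash flows (year 1..n) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, 1))

members = {
    "customer":         [-12_000, -11_500, -13_000, -12_500],  # net costs
    "service provider": [  4_000,   4_200,   4_100,   4_300],  # net profits
}
for name, flows in members.items():
    print(f"{name:>16}: NPV = {npv(flows, rate=0.08):>10,.0f} EUR")
```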
Abstract:
Liberalization of electricity markets has resulted in a competitive Nordic electricity market, in which electricity retailers play a key role as electricity suppliers, market intermediaries, and service providers. Although these roles may remain unchanged in the near future, the retailers’ operation may change fundamentally as a result of the emerging smart grid environment. In particular, the increasing amount of distributed energy resources (DER), and the improving opportunities for their control, are reshaping the operating environment of the retailers. This requires that the retailers’ operating models be developed to match an operating environment in which the active use of DER plays a major role. Electricity retailers have a clientele and operate actively in the electricity markets, which makes them a natural market party to offer end-users new services aiming at efficient, market-based use of DER. From the retailer’s point of view, the active use of DER can provide the means to adapt operations to the challenges posed by the smart grid environment and to pursue the retailer’s ultimate objective: maximizing the profit of operation. This doctoral dissertation introduces a methodology for the comprehensive use of DER in an electricity retailer’s short-term profit optimization, covering operation in a variety of marketplaces including day-ahead, intra-day, and reserve markets. The analysis results provide data on the key profit-making opportunities and the risks associated with different types of DER use. The methodology may therefore serve as an efficient tool for an experienced operator in planning optimal market-based DER use. The key contributions of this doctoral dissertation lie in the analysis and development of a model that allows the retailer to benefit from the profit-making opportunities brought by the use of DER in different marketplaces, while also managing the major risks involved in the active use of DER. In addition, the dissertation introduces an analysis of the economic potential of DER control actions in different marketplaces, including the day-ahead Elspot market, the balancing power market, and the hourly market of the Frequency Containment Reserve for Disturbances (FCR-D).
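The flavour of such a short-term optimization can be conveyed with a deliberately simplified sketch: allocating a controllable DER resource between two marketplaces to maximize expected profit as a linear program. Prices, the capacity, and the reserve cap below are illustrative assumptions, not the dissertation's model or market data.

```python
# Minimal sketch: allocate controllable DER capacity between a day-ahead
# and a reserve marketplace to maximize expected profit. All numbers are
# illustrative assumptions.
from scipy.optimize import linprog

p_da, p_res = 45.0, 60.0        # expected prices, EUR/MWh (illustrative)
capacity = 5.0                  # controllable DER capacity, MW
res_limit = 2.0                 # hypothetical cap on reserve participation, MW

# maximize p.x  ->  minimize -p.x
result = linprog(c=[-p_da, -p_res],
                 A_ub=[[1.0, 1.0],      # total allocation within capacity
                       [0.0, 1.0]],     # reserve allocation within its cap
                 b_ub=[capacity, res_limit],
                 bounds=[(0, None), (0, None)], method="highs")
x_da, x_res = result.x
print(f"day-ahead {x_da:.1f} MW, reserve {x_res:.1f} MW, "
      f"expected profit {-result.fun:.0f} EUR/h")
```

The dissertation's methodology additionally handles intra-day trading, uncertainty, and risk, which this two-variable sketch omits.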
Abstract:
Intermediate filaments (IFs) are part of the cytoskeleton and nucleoskeleton; they provide cells with structure and have important roles in cell signalling. The IFs are a large protein family with more than 70 members, each tightly regulated and expressed in a cell type-specific manner. Although the IFs have been known and studied for decades, our knowledge about their specific functions is still limited, despite the fact that mutations in IF genes cause numerous severe human diseases. In this work, three IF proteins are examined more closely: the nuclear lamin A/C and the cytoplasmic nestin and vimentin. In particular, the regulation of lamin A/C dynamics, the role of nestin in muscle and body homeostasis, and the functions and evolutionary aspects of vimentin are investigated. Together, these data highlight some less well understood functions of these IFs. We used mass spectrometry to identify interphase-specific phosphorylation sites on lamin A. Using genetically engineered lamin A protein in combination with high-resolution microscopy and biochemical methods, we discovered novel roles for this phosphorylation in the regulation of lamin dynamics. More specifically, our data suggest that the phosphorylation of certain amino acids in lamin A determines the localization and dynamics of the protein. In addition, we present results demonstrating that lamin A regulates Cdk5 activity. In the second study, we use mice lacking nestin to gain more knowledge of this seldom studied protein. Our results show that nestin is essential for muscle regeneration; mice lacking nestin recover more slowly from muscle injury and show signs of spontaneous muscle regeneration, indicating that their muscles are more sensitive to stress and injury. The absence of nestin also leads to decreased overall muscle mass and slower body growth. Furthermore, nestin has a role in controlling testicle homeostasis, as nestin-/- male mice show greater variation in testicle size. The common fruit fly Drosophila melanogaster lacks cytoplasmic IFs, as most insects do. By creating a fly that expresses human vimentin, we establish a new research platform for vimentin studies and provide a new tool for studies of IF evolution.
Abstract:
Keynes and the concept of capital: some epistemological observations regarding the Sraffian premises of the General Theory. This article aims to examine the conception of the nature of capital used by Keynes in the General Theory, to show to what extent this concept is similar to Sraffa's conception, and to highlight the implications of this concept in terms of structural instability. I will therefore study the mechanisms that explain the investment decision in an environment of strong uncertainty, the modalities of aggregation of different generations of capital, and the instability of equilibrium. The convergence between the Keynesian and the Sraffian approaches comes from this common conception of capital. Finally, I will examine the implications for the structure of aggregate models.
Abstract:
Increased rotational speed brings many advantages to an electric motor. One of the benefits is that when the desired power is generated at an increased rotational speed, the torque demanded from the rotor decreases linearly, and as a consequence, a motor of smaller size can be used. Using a rotor with a high rotational speed in a system with mechanical bearings can, however, create undesirable vibrations, and therefore active magnetic bearings (AMBs) are often considered a good option for the main bearings, as the rotor then has no mechanical contact with other parts of the system but levitates on the magnetic forces. On the other hand, such systems can experience overloading or a sudden shutdown of the electrical system, whereupon the magnetic field becomes extinct and, as a result of rotor delevitation, mechanical contact occurs. To manage such nonstandard operations, AMB systems require mechanical touchdown bearings with an oversized bore diameter. The need for touchdown bearings seems to be one of the barriers preventing greater adoption of AMB technology, because in the event of an uncontrolled touchdown, failure may occur, for example, in the bearing’s cage or balls, or in the rotor. This dissertation consists of two parts. First, touchdown bearing misalignment in the contact event is studied. It is found that misalignment increases the likelihood of a potentially damaging whirling motion of the rotor. A model for analysis of the stresses occurring in the rotor is proposed. In the studies of misalignment and stresses, a flexible rotor model based on a finite element approach is applied, and simplified models of cageless and caged bearings are used to describe the touchdown bearings. The results indicate that an increase in misalignment can have a direct influence on the bending and shear stresses occurring in the rotor during the contact event. It was thus concluded that analysis of the stresses arising in the contact event is essential to guarantee appropriate system dimensioning for possible contact events with misaligned touchdown bearings. One of the conclusions drawn from the first part of the study is that knowledge of the forces affecting the balls and cage of the touchdown bearings can enable a more reliable estimation of the service life of the bearing. The second part of the dissertation therefore investigates the forces occurring in the cage and balls of touchdown bearings and introduces two detailed models of touchdown bearings in which all bearing parts are modelled as independent bodies. Two multibody-based two-dimensional models of touchdown bearings are introduced for dynamic analysis of the contact event. All parts of the bearings are modelled with geometrical surfaces, and the bodies interact with each other through elastic contact forces. To assist in the identification of the forces affecting the balls and cage in the contact event, the first model describes a touchdown bearing without a cage and the second a touchdown bearing with a cage. The introduced models are compared with the simplified models used in the first part of the dissertation through a parametric study. Damage to the rotor, cage, and balls is one of the main reasons for failures of AMB systems. The stresses in the rotor during the contact event are defined in this work, and the forces affecting the key bodies of the bearings, the cage and the balls, can be studied using the models of touchdown bearings introduced in this dissertation. Knowledge obtained from the introduced models is valuable, since it enables an optimal structure for the rotor and touchdown bearings to be designed.
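The elastic contact forces between the geometrical surfaces mentioned above are commonly modelled with a Hertzian-type law; a minimal sketch is given below, with stiffness and damping values that are illustrative assumptions rather than parameters from the dissertation.

```python
# Minimal sketch of the elastic normal contact force applied when two
# bearing bodies interpenetrate: a Hertzian point-contact law plus a
# simple damping term. Stiffness and damping values are illustrative,
# not parameters from the dissertation.
def contact_force(delta, delta_dot, k=2.0e9, c=1.5e3):
    """Normal force for penetration depth delta (m) and rate delta_dot (m/s)."""
    if delta <= 0.0:
        return 0.0                      # bodies are separated, no force
    return k * delta ** 1.5 + c * delta_dot

print(contact_force(1.0e-5, 0.05))      # e.g. 10 um penetration
```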
Abstract:
Although alcohol problems and alcohol consumption are related, consumption does not fully account for differences in vulnerability to alcohol problems; other factors should therefore account for these differences. Based on previous research, it was hypothesized that risky drinking behaviours, illicit and prescription drug use, affect, and sex differences would account for differences in vulnerability to alcohol problems while statistically controlling for overall alcohol consumption. Four models were developed to test the predictive ability of these factors: three tested the predictor sets separately, and a fourth tested them in a combined model. In addition, two distinct criterion variables were regressed on the predictors. One was a measure of the frequency with which participants experienced negative consequences that they attributed to their drinking, and the other was a measure of the extent to which participants perceived themselves to be problem drinkers. Each of the models was tested on four samples from different populations: first-year university students, university students in their graduating year, a clinical sample of people in treatment for addiction, and a community sample of young adults randomly selected from the general population. Overall, support was found for each of the models and each of the predictors in accounting for differences in vulnerability to alcohol problems. In particular, the frequency with which people become intoxicated, the frequency of illicit drug use, and high levels of negative affect were strong and consistent predictors of vulnerability to alcohol problems across samples and criterion variables. With the exception of the clinical sample, the combined models predicted vulnerability to negative consequences better than vulnerability to problem drinker status. In the clinical and community samples, the combined model predicted problem drinker status better than in the student samples.
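The analysis design, regressing a criterion on a control variable and then on a combined predictor set, can be sketched as follows; all data below are simulated, not the study's samples, and the variable names are illustrative stand-ins.

```python
# Minimal sketch of the analysis design: regress a negative-consequences
# criterion on consumption alone, then on the combined predictor set, and
# compare explained variance. All data are simulated, not study samples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
consumption = rng.normal(size=n)                       # control variable
intoxication = 0.5 * consumption + rng.normal(size=n)  # risky drinking
drug_use = rng.normal(size=n)
neg_affect = rng.normal(size=n)
consequences = (0.3 * consumption + 0.4 * intoxication
                + 0.2 * drug_use + 0.3 * neg_affect + rng.normal(size=n))

base = sm.OLS(consequences, sm.add_constant(consumption)).fit()
X = sm.add_constant(np.column_stack([consumption, intoxication,
                                     drug_use, neg_affect]))
combined = sm.OLS(consequences, X).fit()
print(f"R2 consumption only: {base.rsquared:.3f}, "
      f"combined model: {combined.rsquared:.3f}")
```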
Abstract:
The purpose of the current undertaking was to study the electrophysiological properties of the sleep onset period (SOP) in order to gain insight into the persistent sleep difficulties of those who complain of insomnia following mild traumatic brain injury (MTBI). While many believe that symptoms of post-concussion syndrome (PCS) following MTBI resolve within 6 to 12 months, a number of people complain of persistent sleep difficulty. Two models were proposed that hypothesize alternative electrophysiological presentations of the insomnia complaints of those sustaining an MTBI: 1) analyses of standard polysomnography (PSG) sleep parameters were conducted to determine whether the sleep difficulties of the MTBI population were similar to those of idiopathic insomniacs (i.e., a greater proportion of REM sleep and reduced delta sleep); 2) power spectral analysis was conducted over the SOP to determine whether the sleep onset signature of those with MTBI would be similar to that of psychophysiological insomniacs (characterized by increased cortical arousal). Finally, exploratory analyses examined whether the sleep difficulties associated with MTBI could be explained by increases in the variability of the power spectral data. Data were collected from 9 individuals who had sustained an MTBI 6 months to 5 years earlier and reported sleep difficulties that had arisen within the month subsequent to injury and persisted to the present. The control group consisted of 9 individuals who had experienced neither sleep difficulties nor MTBI. Before spending 3 consecutive, uninterrupted nights in the sleep lab, subjects completed questionnaires regarding sleep difficulties, adaptive functioning, and personality.
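The power spectral analysis step can be sketched minimally with Welch's method; the EEG signal, sampling rate, and band limits below are simulated and assumed for illustration, not the study's recordings or exact analysis parameters.

```python
# Minimal sketch of the power spectral analysis step: Welch PSD of one
# 30 s epoch from the sleep onset period, then power in the beta band
# (a common index of cortical arousal). The signal is simulated EEG.
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 18 * t) + 0.5 * rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
beta = (freqs >= 15) & (freqs < 30)
beta_power = psd[beta].sum() * (freqs[1] - freqs[0])   # integrate the band
print(f"beta band power: {beta_power:.3f}")
```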
Abstract:
The overall objective of this study was to investigate factors associated with long-term survival in axillary node negative (ANN) breast cancer patients. Clinical and biological factors included stage, histopathologic grade, p53 mutation, Her-2/neu amplification, estrogen receptor (ER) status, progesterone receptor (PR) status, and vascular invasion. Census-derived socioeconomic (SES) indicators included median individual and household income, the proportion of university-educated individuals, housing type, the "incidence" of low income, and an indicator of living in an affluent neighbourhood. The effects of these measures on breast cancer-specific survival and competing-cause survival were investigated. A cohort study examining survival among axillary node negative (ANN) breast cancer patients in the greater Toronto area commenced in 1989. Patients were followed up until death, loss to follow-up, or study termination in 2004. Data were collected from several sources measuring patient demographics, clinical factors, treatment, recurrence of disease, and survival. Census-level SES data were collected using census geo-coding of patient addresses at the time of diagnosis. Additional survival data were acquired from the Ontario Cancer Registry to enhance and extend the observation period of the study. Survival patterns were examined using Kaplan-Meier and life table procedures. Associations were examined using log-rank and Wilcoxon tests of univariate significance. Multivariate survival analyses were performed using Cox proportional hazards models. Analyses were stratified into less-than and greater-than 5-year survival periods to observe whether known markers of short-term survival were also associated with reductions in long-term survival among breast cancer patients. The 15-year survival probabilities in this cohort were 0.88 for breast cancer-specific survival, 0.89 for competing-causes survival, and 0.78 for overall survival. ER/PR status (Hazard Ratio (HR) ER-/PR- versus ER+/PR+, 8.15, 95% CI, 4.74, 14.00), p53 mutation (HR, 3.88, 95% CI, 2.00, 7.53), and Her-2 amplification (HR, 2.66, 95% CI, 1.36, 5.19) were associated with significant reductions in short-term breast cancer-specific survival (<5 years following diagnosis), but not with long-term survival in univariate analyses. Stage, histopathologic grade, and ER/PR status were the clinical/biological factors associated with short-term breast cancer-specific survival in multivariate results. Living in an affluent neighbourhood (top quintile of median household income compared to the rest of the population) was associated with the largest significant increase in long-term breast cancer-specific survival after adjustment for stage, histopathologic grade, and treatment (HR, 0.36, 95% CI, 0.12, 0.89).
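The multivariate step can be sketched with a Cox proportional hazards fit; the sketch below uses the lifelines package on simulated data with a single ER/PR-negative indicator, so the covariate, follow-up window, and effect sizes are illustrative assumptions, not the study's estimates.

```python
# Minimal sketch of the multivariate step: a Cox proportional hazards
# model with an ER/PR-negative indicator, fitted with lifelines on
# simulated data; effect sizes are illustrative, not study estimates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 300
er_pr_neg = rng.integers(0, 2, n)                    # 1 = ER-/PR-
time = rng.exponential(15.0, n) / np.where(er_pr_neg == 1, 3.0, 1.0)
event = (time < 15.0).astype(int)                    # 0 = censored at 15 years
df = pd.DataFrame({"time": np.minimum(time, 15.0),
                   "event": event,
                   "er_pr_neg": er_pr_neg})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # exp(coef) is the hazard ratio for er_pr_neg
```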
Abstract:
Three-dimensional model design is a well-known and studied field with numerous real-world applications. However, the manual construction of these models can be time-consuming for the average user, despite the advantages offered by computational advances. This thesis presents an approach to the design of 3D structures using evolutionary computation and L-systems, which involves the automated production of such designs using a strict set of fitness functions. These functions focus on the geometric properties of the models produced, as well as their quantifiable aesthetic value, a topic which has not been widely investigated with respect to 3D models. New extensions to existing aesthetic measures are discussed and implemented in the presented system in order to produce designs that are visually pleasing. The system itself facilitates the construction of models requiring minimal user initialization and no user-based feedback throughout the evolutionary cycle. The models evolved by genetic programming are shown to satisfy multiple criteria, conveying a relationship between their assigned aesthetic value and their perceived aesthetic value. Exploration of the applicability and effectiveness of a multi-objective approach to the problem is also presented, with a focus on both performance and visual results. Although subjective, these results offer insight into future applications and study in the field of computational aesthetics and automated structure design.
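The L-system rewriting that underlies the evolved designs is simple to sketch: production rules are applied repeatedly to an axiom string. The rule below is a textbook branching example, not one of the thesis's evolved grammars.

```python
# Minimal sketch of L-system rewriting: apply production rules to an
# axiom for a few iterations. The rule below is a textbook branching
# example, not an evolved grammar from the thesis.
def rewrite(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"F": "F[+F]F[-F]F"}          # draw, branch left, branch right
print(rewrite("F", rules, 2))
```

In the full system, the resulting string is interpreted as 3D turtle-graphics instructions, and genetic programming evolves the rules themselves against the geometric and aesthetic fitness functions.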
Abstract:
If you want to know whether a property is true in a specific algebraic structure, you need to test that property on the given structure. This can be done by hand, which can be cumbersome and error-prone; moreover, the time consumed in testing depends on the size of the structure to which the property is applied. We present an implementation of a system for finding counterexamples and testing properties of models of first-order theories. This system is intended to provide a convenient and paperless environment for researchers and students investigating or studying such models, and algebraic structures in particular. To implement a first-order theory in the system, a suitable first-order language and some axioms are required. The components of a language are given by a collection of variables, a set of predicate symbols, and a set of operation symbols. Variables and operation symbols are used to build terms; terms, predicate symbols, and the usual logical connectives are used to build formulas. A first-order theory then consists of a language together with a set of closed formulas, i.e. formulas without free occurrences of variables. The set of formulas is also called the axioms of the theory. The system uses several different formats to allow the user to specify languages, define axioms and theories, and create models. Besides the obvious operations and tests on these structures, we have introduced the notion of a functor between classes of models in order to generate more complex models from given ones automatically. As an example, we use the system to create several lattice structures starting from a model of the theory of pre-orders.
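The essence of such property testing on finite models can be sketched in a few lines: exhaustively check an axiom over the universe and report the first counterexample. The operation table below (addition modulo 3) is an illustrative example, not the system's input format.

```python
# Minimal sketch of property testing on a finite model: check whether a
# binary operation given by a table is associative, returning the first
# counterexample if one exists. The table (addition mod 3) is an example.
from itertools import product

def is_associative(op, universe):
    for a, b, c in product(universe, repeat=3):
        if op[op[a][b]][c] != op[a][op[b][c]]:
            return False, (a, b, c)     # counterexample found
    return True, None

universe = range(3)
op = [[0, 1, 2],
      [1, 2, 0],
      [2, 0, 1]]
print(is_associative(op, universe))     # (True, None)
```

The exhaustive loop also makes the abstract's point about cost concrete: checking one ternary axiom takes |U|^3 evaluations, so testing time grows quickly with the size of the structure.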
Abstract:
Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous but currently poorly understood. A number of human-designed algorithms have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications, and the social sciences have recently resulted. These algorithms, called graph models, represent significant human effort: deriving accurate graph models is non-trivial, time-intensive, and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure; it is also, to the best of the author's knowledge, the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than the models currently used in the literature.
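The evaluation loop at the heart of such inference can be sketched minimally: generate a network from a candidate graph model and score it against a target network on summary statistics. The sketch uses networkx with a fixed generator as the candidate, whereas the thesis evolves the generator itself with genetic programming; the measures and sizes are illustrative choices.

```python
# Minimal sketch of the evaluation loop: generate a network from a
# candidate graph model and score it against a target network. The
# candidate here is a fixed generator; the thesis evolves generators
# with genetic programming. Measures and sizes are illustrative.
import networkx as nx

target = nx.barabasi_albert_graph(500, 3, seed=1)       # stand-in real network
candidate = nx.erdos_renyi_graph(500, 0.012, seed=1)    # candidate model output

def fitness(g, ref):
    """Smaller is better: distance between summary statistics."""
    return (abs(nx.average_clustering(g) - nx.average_clustering(ref))
            + abs(nx.density(g) - nx.density(ref)))

print(f"fitness = {fitness(candidate, target):.4f}")
```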
Abstract:
This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investment objectives from 1980 to 2011: Canadian fixed-income funds, Canadian inflation-protected fixed-income funds, Canadian long-term fixed-income funds, Canadian money market funds, Canadian short-term fixed-income funds, and high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show two regimes, with a turning point during the mid-eighties. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Results for the other fixed-income funds show latent state variables that mimic the behaviour of general economic activity. Generally, we report that Canadian bond fund alphas are negative; in other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds, and Canadian money market funds show no market timing ability. We conclude that these managers generally do not achieve positive performance by actively managing their portfolios. We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes; in particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models given the dynamic market conditions and the correlation between fund portfolios.
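A univariate building block of this approach can be sketched with statsmodels' Markov regime-switching regression; the return series below is simulated with an embedded regime change, standing in for the fund data rather than reproducing the thesis's specifications.

```python
# Minimal sketch of a univariate regime-switching model for fund returns:
# a two-regime Markov switching mean/variance model via statsmodels. The
# series is simulated with a built-in break, not the Canadian fund data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
calm = rng.normal(0.0005, 0.002, 400)     # low-volatility regime
crisis = rng.normal(-0.001, 0.008, 200)   # high-volatility regime
returns = np.concatenate([calm, crisis, calm])

model = sm.tsa.MarkovRegression(returns, k_regimes=2, switching_variance=True)
fit = model.fit()
print(fit.summary())
# fit.smoothed_marginal_probabilities gives each regime's probability
# over time, which is how structural breaks such as the mid-1980s
# turning point can be located.
```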
Abstract:
Volume(density)-independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model in which the embedding function is taken to be proportional to the square root of the electron density; models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair type potential, the "Sutton-Chen" model, and a later version, the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of the thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to the rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of the charge density improves the results significantly. Results for the Sutton-Chen model and our improved version of it are compared for four fcc metals: Cu, Ag, Au, and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement, and Grüneisen parameter. For the sake of comparison, we have also considered two other models in which the distance dependence of the charge density is an exponential multiplied by a polynomial. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained via pure pair-potential models, in order to identify the advantages and disadvantages of the methods used to obtain the parameters of these potentials.
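The Sutton-Chen functional form, a power-law pair repulsion plus a square-root embedding of a power-law density, is compact enough to sketch directly. The sketch below evaluates the energy on a free-standing fcc fragment with no periodic images, so the printed value is illustrative only; the parameters are the commonly quoted Sutton-Chen values for Cu, used here as an assumption.

```python
# Minimal sketch of the Sutton-Chen total energy,
#   E = eps * sum_i [ 0.5 * sum_{j!=i} (a/r_ij)^n  -  c * sqrt(rho_i) ],
#   rho_i = sum_{j!=i} (a/r_ij)^m,
# on a free-standing fcc fragment (no periodic images), so the number
# printed is illustrative only. Parameters are the commonly quoted
# Sutton-Chen values for Cu, used as an assumption.
import numpy as np

def sutton_chen_energy(pos, eps, a, c, n, m):
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                  # drop self-interaction
    pair = 0.5 * np.sum((a / r) ** n, axis=1)    # repulsive pair part
    rho = np.sum((a / r) ** m, axis=1)           # local electron density
    return eps * np.sum(pair - c * np.sqrt(rho))

a_cu = 3.61                                      # lattice constant, Angstrom
cell = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0],
                 [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]]) * a_cu
print(sutton_chen_energy(cell, eps=1.2382e-2, a=a_cu, c=39.432, n=9, m=6), "eV")
```

The power-law form of rho in this sketch is exactly the term the thesis replaces with an exponentially decaying charge density to cure the premature-melting behaviour.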