932 results for External Time-varying Reference Consumption Level
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms for exploring efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, together with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes: the extended scheme schedules household appliances so as to minimize a customer's monetary expense under a time-varying pricing model.
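The abstract does not give the scheduling algorithm itself; as a rough illustration of the kind of problem the extended scheme addresses, the sketch below greedily places each deferrable appliance run into the cheapest feasible window of an hourly time-varying price curve. All names (Appliance, schedule_appliances, the sample prices) are hypothetical, and a real scheme would also need aggregate power limits and appliance interdependencies.

```python
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    power_kw: float      # constant draw while running
    duration_h: int      # contiguous hours needed
    earliest: int        # first hour the run may start
    latest_end: int      # run must finish by this hour

def schedule_appliances(appliances, prices):
    """Greedy sketch: start each appliance where its total energy cost
    over `duration_h` contiguous hours is cheapest within its window."""
    plan = {}
    for a in appliances:
        best_start, best_cost = None, float("inf")
        for start in range(a.earliest, a.latest_end - a.duration_h + 1):
            cost = a.power_kw * sum(prices[start:start + a.duration_h])
            if cost < best_cost:
                best_start, best_cost = start, cost
        plan[a.name] = (best_start, best_cost)
    return plan

# Hypothetical 24-hour price curve (currency units per kWh).
prices = [0.08]*7 + [0.20]*4 + [0.14]*6 + [0.24]*4 + [0.10]*3
plan = schedule_appliances(
    [Appliance("washer", 2.0, 2, 0, 24), Appliance("dryer", 3.0, 1, 8, 22)],
    prices,
)
print(plan)
```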
Abstract:
The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the motion of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance requirements, which escalate as the number of units in the formation increases, as complicated nonlinear effects act on the dynamics, and as uncertainty arises from imperfect knowledge of system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods that allow spacecraft in formation, with circular or elliptical reference orbits, to efficiently execute safe autonomous manoeuvres. These controllers are distinguished from the bulk of the literature in that they merge guidance laws never before applied to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical behaviour of one or several followers with respect to a leader so as to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption via a Riccati procedure with a finite-horizon cost function defined in terms of the desired TVNT, while repelling accelerations generated by the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner-Hempel equations, and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, which obtains the states that cannot be measured, in order to approximately reconstruct the full state vector. The separation principle then states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input. This allows the observer and the feedback to be designed independently, exploiting the advantages of linear quadratic regulator theory, in order to estimate the states of a dynamical system subject to model and sensor uncertainty. The relative dynamics is described by the linear system used in the previous controller, with a control input and with nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
An adaptive control law capable of delivering superior closed-loop performance compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel noncertainty-equivalence controller based on the Immersion and Invariance paradigm for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical CE control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This term has the ultimate task of shaping the manifold into which the adaptive system is immersed. The controller is proven stable via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the designs, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation and including scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations that include the CAS show an effective decrease in collision risk during the manoeuvres.
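The abstract only names the Riccati-based tracking design; for orientation, the standard finite-horizon LQR tracking machinery it alludes to has the form below (notation assumed here, not quoted from the thesis).

```latex
% Finite-horizon quadratic cost on the tracking error e = x - x_d,
% with terminal weight S_f, state weight Q >= 0 and control weight R > 0:
J = \tfrac{1}{2}\, e(t_f)^{\top} S_f\, e(t_f)
  + \tfrac{1}{2} \int_{t_0}^{t_f} \left( e^{\top} Q\, e + u^{\top} R\, u \right) dt
% The optimal feedback u = -R^{-1} B^{\top} P(t)\, e uses P(t) from the
% matrix Riccati differential equation, integrated backward from P(t_f) = S_f:
-\dot{P} = A^{\top} P + P A - P B R^{-1} B^{\top} P + Q
```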
Abstract:
In this paper, we present a fuzzy approach to the Reed-Frost model for epidemic spreading that takes into account uncertainties in the diagnosis of infection. Heterogeneity in the infected group is based on the clinical signs of the individuals (symptoms, laboratory exams, medical findings, etc.), which are incorporated into the dynamics of the epidemic. The infectivity level is time-varying, and the classification of the individuals is performed through fuzzy relations. Simulations of a real problem, using data from a viral epidemic in a children's daycare, are performed, and the results are compared with a stochastic Reed-Frost generalization.
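For reference, the classical (expected-value) Reed-Frost chain-binomial recursion that the paper generalizes is sketched below; the fuzzy version replaces the fixed per-contact infection probability p with a time-varying infectivity level obtained from fuzzy classification (symbols assumed).

```latex
% Classical Reed-Frost: with S_t susceptibles and C_t cases at step t,
% and per-contact infection probability p (q = 1 - p), the expected new
% cases are the susceptibles who meet at least one infective:
C_{t+1} = S_t \left( 1 - q^{\,C_t} \right), \qquad S_{t+1} = S_t - C_{t+1}
% In the fuzzy extension, p becomes a time-varying infectivity level
% driven by fuzzy-classified clinical signs of the individuals.
```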
Abstract:
An order-of-magnitude gain in sensitivity is described for using quasar spectra to investigate possible time or space variation in the fine-structure constant alpha. Applied to a sample of 30 absorption systems spanning redshifts 0.5 < z < 1.6, we derive limits on variations in alpha over a wide range of epochs. For the whole sample, Delta alpha/alpha = (-1.1 +/- 0.4) x 10^-5. This deviation is dominated by measurements at z > 1, where Delta alpha/alpha = (-1.9 +/- 0.5) x 10^-5. For z < 1, Delta alpha/alpha = (-0.2 +/- 0.4) x 10^-5. While this is consistent with a time-varying alpha, further work is required to explore possible systematic errors in the data, although careful searches have so far revealed none.
Abstract:
Background. Many resource-limited countries rely on clinical and immunological monitoring without routine virological monitoring for human immunodeficiency virus (HIV)-infected children receiving highly active antiretroviral therapy (HAART). We assessed whether HIV load had independent predictive value, in the presence of immunological and clinical data, for the occurrence of new World Health Organization (WHO) stage 3 or 4 events (hereafter, WHO events) among HIV-infected children receiving HAART in Latin America. Methods. The NISDI (Eunice Kennedy Shriver National Institute of Child Health and Human Development International Site Development Initiative) Pediatric Protocol is an observational cohort study designed to describe HIV-related outcomes among infected children. Eligibility criteria for this analysis included perinatal infection, age < 15 years, and continuous HAART for at least 6 months. Cox proportional hazards modeling was used to assess time to new WHO events as a function of immunological status, viral load, hemoglobin level, and potential confounding variables; laboratory tests repeated during the study were treated as time-varying predictors. Results. The mean duration of follow-up was 2.5 years; new WHO events occurred in 92 (15.8%) of 584 children. In proportional hazards modeling, a most recent viral load > 5000 copies/mL was associated with a nearly doubled risk of developing a WHO event (adjusted hazard ratio, 1.81; 95% confidence interval, 1.05-3.11; P = .033), even after adjustment for immunological status defined on the basis of CD4 T-lymphocyte values, hemoglobin level, age, and body mass index. Conclusions. Routine virological monitoring using the WHO virological failure threshold of 5000 copies/mL adds independent predictive value to immunological and clinical assessments for identification of children receiving HAART who are at risk for significant HIV-related illness. To provide optimal care, periodic virological monitoring should be considered for all settings that provide HAART to children.
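The paper's Cox model with repeated laboratory tests as time-varying predictors corresponds to a long-format analysis like the sketch below, using the lifelines library; the column names and toy data are hypothetical, not from the NISDI dataset.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format data: one row per child per follow-up interval, with the
# most recent lab values carried as interval-level (time-varying) covariates.
df = pd.DataFrame({
    "id":      [1, 1, 2, 2, 3, 3],
    "start":   [0.0, 0.5, 0.0, 0.5, 0.0, 0.5],  # years since HAART baseline
    "stop":    [0.5, 1.0, 0.5, 1.2, 0.5, 0.9],
    "event":   [0, 1, 0, 0, 0, 1],              # new WHO stage 3/4 event
    "log_vl":  [2.1, 4.0, 2.3, 2.0, 3.8, 2.2],  # log10 viral load
    "cd4_pct": [28, 14, 30, 31, 18, 22],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # adjusted hazard ratios for the time-varying covariates
```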
Influence of magnetically-induced E-fields on cardiac electric activity during MRI: A modeling study
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, time-varying gradient magnetic fields that may induce electric fields (E-fields) and currents in tissue approaching the level of physiological significance. In this work we present theoretical investigations into induced E-fields in the thorax, and evaluate their potential influence on cardiac electric activity under the assumption that the sites of maximum E-field correspond to the myocardial stimulation threshold (an abnormal circumstance). Whole-body cylindrical and planar gradient coils were included in the model. The calculations of the induced fields are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic whole-body model. The potential for cardiac stimulation was evaluated using an electrical model of the heart. Twelve-lead electrocardiogram (ECG) signals were simulated and inspected for arrhythmias caused by the applied fields, for both healthy and diseased hearts. The simulations show that the shape of the thorax and the conductive paths significantly influence the induced E-fields. In healthy patients, these fields are not sufficient to elicit serious arrhythmias with contemporary gradient sets. However, raising the strength and the number of repeated switching episodes of the gradients, as is certainly possible in local chest gradient sets, could expose patients to increased risk. For patients with cardiac disease, the risk factors are elevated. With this model, the sensitivity of cardiac pathologies, such as abnormal conductive pathways, to the induced fields generated by an MRI sequence can be investigated. (C) 2003 Wiley-Liss, Inc.
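Quasi-static finite-difference schemes of this kind are conventionally built on the potential formulation below (standard notation, assumed rather than quoted from the paper): the switched gradient field enters through the vector potential, and conservation of induced current in the conductive body model yields an elliptic equation for the scalar potential.

```latex
% Quasi-static induced field, with the coil's time-varying field B = \nabla \times A:
\mathbf{E} = -\frac{\partial \mathbf{A}}{\partial t} - \nabla \phi
% Charge conservation \nabla \cdot (\sigma \mathbf{E}) = 0 in tissue of
% conductivity \sigma(\mathbf{r}) gives the equation solved by finite differences:
\nabla \cdot \left( \sigma \nabla \phi \right)
  = -\,\nabla \cdot \left( \sigma\, \frac{\partial \mathbf{A}}{\partial t} \right)
```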
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and in the European Union market, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings support the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. They also present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that the mutual fund managers in our sample do not exhibit any market timing abilities. Nevertheless, we find some evidence of time-varying conditional market timing abilities, but only at the individual fund level.
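As context for the conditional models referred to above, the common Ferson-Schadt-style specification lets alphas and betas vary linearly with lagged public information variables (a generic form; the paper's exact factor and instrument set is not reproduced here).

```latex
% Conditional multi-factor model: alpha and betas are linear in the
% (demeaned) lagged information vector z_{t-1}:
r_{p,t} = \alpha_{0p} + \alpha_{1p}^{\top} z_{t-1}
        + \sum_{k} \left( \beta_{0pk} + \beta_{1pk}^{\top} z_{t-1} \right) f_{k,t}
        + \varepsilon_{p,t}
% where f_{k,t} are factor returns (market, size, value, ...) and
% time-varying alphas/betas correspond to nonzero \alpha_{1p}, \beta_{1pk}.
```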
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms consists of optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores so that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
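A minimal sketch of the two-phase structure described above, under an assumed toy power model (per-core dynamic energy as a function of task execution time, plus a per-core leakage saving when a core sleeps for the whole frame); this illustrates the shape of the approach, not the paper's algorithm.

```python
# Hypothetical per-core model: dyn(wcet) = dynamic energy of running a task
# of that length on the core; leak = leakage energy saved per frame if the
# core can be put to sleep for the entire frame.
CORES = {
    "big":    {"dyn": lambda wcet: 0.9 * wcet, "leak": 3.0},
    "little": {"dyn": lambda wcet: 0.4 * wcet, "leak": 1.0},
}
FRAME = 100.0  # frame length (ms); each core's total load must fit in it

def phase1(tasks):
    """Phase 1: allocate each task to the feasible core with the lowest
    dynamic energy for it."""
    load = {c: 0.0 for c in CORES}
    alloc = {c: [] for c in CORES}
    for wcet in sorted(tasks, reverse=True):
        feasible = [c for c in CORES if load[c] + wcet <= FRAME]
        best = min(feasible, key=lambda c: CORES[c]["dyn"](wcet))
        alloc[best].append(wcet)
        load[best] += wcet
    return alloc, load

def phase2(alloc, load):
    """Phase 2: empty lightly loaded cores when the leakage saved by letting
    them sleep outweighs the extra dynamic energy on the receiving core."""
    for src in sorted(CORES, key=lambda c: load[c]):
        for dst in CORES:
            if dst == src or load[dst] + load[src] > FRAME:
                continue
            extra_dyn = sum(CORES[dst]["dyn"](w) - CORES[src]["dyn"](w)
                            for w in alloc[src])
            if extra_dyn < CORES[src]["leak"]:   # sleeping wins
                alloc[dst] += alloc[src]
                load[dst] += load[src]
                alloc[src], load[src] = [], 0.0
                break
    return alloc

alloc, load = phase1([30.0, 20.0, 10.0, 5.0])
print(phase2(alloc, load))
```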
Abstract:
This dissertation was developed within the scope of the "Dissertação" curricular unit of the 2nd year of the master's programme in Electrical Engineering - Electrical Power Systems. With the ever-growing number of energy consumers, adopting measures for the rationalization and management of electricity consumption is increasingly imperative. Planning and building new renewable generation plants faces difficulties of several kinds, which makes consumption management measures all the more important, both for customers supplied at medium voltage and at low voltage. This makes it easier to establish high energy-efficiency standards throughout the electricity distribution network. The economy is also affected by poor consumption management on the customers' side: high energy waste means more energy must be produced, which further increases Portugal's already high energy dependence and degrades the national economy. Hence the need to implement plans and methods that promote energy efficiency and the rational management of electricity consumption. This dissertation presents several proposals, some in the form of projects already under way, that aim to raise consumers' awareness of the importance of efficient energy use while providing suitable technological tools to support the implementation of the proposed methods. Although the well-known plans presented are very important, deploying systems at the various consumers that can effectively reduce consumption plays a fundamental role. Consumption management devices, presented in this dissertation, give consumers direct access to their consumption: not only the overall consumption of the installation but also the specific consumption of each appliance, making it possible to see where the most unfavourable situation occurs. Features such as programmable consumption profiles, with power limits over several tariff periods, and remote control through smartphone applications enable consumption reductions at the distribution network level and thereby help to reduce waste and Portugal's energy dependence. As part of this dissertation, a power commercialization methodology is developed and presented. Under this methodology, the consumer pays, as a function of actual consumption, only for the amount of power effectively required in each period: instead of a fixed monthly tariff associated with the contracted power, the consumer pays an amount corresponding only to the power actually requested in each hour of the month. In this methodology the consumer can also analyse the load diagram and simulate a tariff change (simple tariff, weekly or daily two-period tariff, weekly or daily three-period tariff) to find out under which tariff the same energy would cost least.
So that consumers can determine whether switching to a flexible contracted power, or to a different energy tariff, would be advantageous, a tool is provided that, based on their consumption, allows conclusions to be drawn about the final price to be paid on the bill after each type of change. This tool was validated through several simulations for different consumer profiles. In this way, users can see that they can indeed save with a flexible contracted power and, at the same time, identify themselves with a simulation profile and more easily see which tariff change would yield the greatest saving.
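A minimal sketch of the comparison the tool performs, under entirely hypothetical prices and a hypothetical load diagram: the fixed scheme charges a monthly fee for the contracted power tier, while the flexible scheme bills each hour for the power level actually requested.

```python
# Hypothetical tariff data: EUR/month per contracted-power tier, and a
# flexible price in EUR per kW of demand per hour. Real tariffs differ.
FIXED_MONTHLY_FEE = {6.9: 10.0, 10.35: 15.0}   # keyed by contracted kVA
FLEX_PRICE_PER_KW_HOUR = 0.002

def fixed_power_cost(contracted_kva):
    """Fixed scheme: a flat monthly fee for the contracted power tier."""
    return FIXED_MONTHLY_FEE[contracted_kva]

def flexible_power_cost(hourly_demand_kw):
    """Flexible scheme: pay only for the power requested in each hour."""
    return FLEX_PRICE_PER_KW_HOUR * sum(hourly_demand_kw)

# Hypothetical month (720 hours): low base load with some evening peaks.
demand = [0.8] * 600 + [3.0] * 100 + [5.8] * 20
print("fixed   :", fixed_power_cost(6.9))
print("flexible:", round(flexible_power_cost(demand), 2))
```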
Abstract:
The momentum anomaly has been widely documented in the literature. However, there are still many issues on which there is no consensus, and puzzles left unexplained. One is that strategies based on momentum present a level of risk that is inconsistent with the diversification they offer. Moreover, recent studies indicate that this risk is time-varying and mostly strategy-specific. This work project hypothesises and proves that this evidence is explained by the portfolio constitution of the momentum strategy over time, namely the covariance and correlation between companies within the top and bottom deciles and across them.
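The mechanism appealed to here can be made concrete with the variance of a long-short momentum portfolio (a standard decomposition, notation assumed): the strategy's risk depends not only on the volatilities of the winner and loser legs but also on the time-varying covariance between them.

```latex
% Long winners (W, top decile), short losers (L, bottom decile):
r_{WML,t} = r_{W,t} - r_{L,t}
% so the strategy's variance is
\mathrm{Var}(r_{WML}) = \sigma_W^2 + \sigma_L^2 - 2\,\mathrm{Cov}(r_W, r_L)
% and diversification within each leg cannot cap the risk when the
% covariance term moves sharply over time.
```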
Abstract:
My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach to representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. With the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the Prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. In this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build representative-agent optimization models whose solutions reproduce some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and much of its potential remains to be explored. The three essays composing my thesis use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent, relative to time-varying positive and negative extremal thresholds, are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit. Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics and, in a setting that reflects real-time asset allocation, a systematic over-performance is obtained compared to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. The dynamic strategies therefore reduce to a replicating portfolio using exchange-traded, well-selected options and the risky stock.
Abstract:
We develop tests of the proportional hazards assumption, with respect to a continuous covariate, in the presence of unobserved heterogeneity with unknown distribution at the individual observation level. The proposed tests are especially powerful against ordered alternatives useful for modeling non-proportional hazards situations. In contrast to the case where the heterogeneity distribution is known up to finite-dimensional parameters, the null hypothesis for the current problem is similar to a test for absence of covariate dependence. However, the two testing problems differ in the nature of the relevant alternative hypotheses. We develop tests for both problems against ordered alternatives. Small-sample performance and an application to real data highlight the usefulness of the framework and methodology.
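For reference, the setting described corresponds to a mixed proportional hazards model with multiplicative unobserved heterogeneity (frailty) of unknown distribution; notation below is assumed, not taken from the paper.

```latex
% Mixed proportional hazards with frailty U (unknown distribution):
\lambda(t \mid x, U) = U \,\lambda_0(t)\, e^{\beta x}
% Under the null, the continuous covariate x acts proportionally: the
% ratio of observable hazards \lambda(t \mid x_1) / \lambda(t \mid x_2)
% is constant in t; ordered alternatives let this ratio be monotone in t.
```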
Abstract:
VAR methods have been used to model the inter-relationships between inflows into and outflows out of unemployment and vacancies, using tools such as impulse response analysis. In order to investigate whether such impulse responses change over the course of the business cycle or over time, this paper uses TVP-VARs for US and Canadian data. For the US, we find interesting differences between the most recent recession and earlier recessions and expansions. In particular, we find the immediate effect of a negative shock on both inflow and outflow hazards to be larger in 2008 than in earlier times. Furthermore, the effect of this shock takes longer to decay. For Canada, we find less evidence of time variation in impulse responses.
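The TVP-VAR structure referred to above is commonly written with drifting coefficients that follow a random walk (a generic form, notation assumed).

```latex
% TVP-VAR(p): intercepts and coefficient matrices drift over time
y_t = c_t + \sum_{j=1}^{p} B_{j,t}\, y_{t-j} + \varepsilon_t,
  \qquad \varepsilon_t \sim N(0, \Sigma_t)
% Stacking \theta_t = \mathrm{vec}(c_t, B_{1,t}, \dots, B_{p,t}),
% the states evolve as a random walk:
\theta_t = \theta_{t-1} + \eta_t, \qquad \eta_t \sim N(0, Q)
% so impulse responses computed from (B_{j,t}, \Sigma_t) differ across t.
```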
Abstract:
This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time-varying, ii) whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk, and iii) whether costly learning drives liquidity risk. Empirical tests support the above assumptions and provide novel findings on asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times, risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to negatively skewed market returns. We reconcile this fact through firms' greater response to negative market news than to positive market news. We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk based on a general equilibrium model featuring two agents: an entrepreneur and an external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving the economic outputs. The novelty is that information acquisition is more costly in bad times than in good times, i.e. counter-cyclical information cost, as supported by previous empirical evidence. We then show that liquidity risks are principally driven by costly learning.
Abstract: This thesis presents theoretical asset pricing models and their empirical applications. My objective is to study the following problems: whether the relationship between asset prices and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; and whether costly learning increases liquidity risk. Empirical tests confirm the above assumptions and provide new findings regarding asset pricing, investment decisions, and firms' funding liquidity. The first chapter examines an equilibrium model in which investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model demonstrates that in bad times risk aversion is higher, more people fear disaster and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. Exploiting this relationship alone would generate an annual excess return of 7.74% that is not explained by popular factor models. The second chapter comprises two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be predicted by the comovements of betas; an indicator called Systematic Downside Risk (SDR) is created to characterize this asymmetry in the comovements of betas. We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using the historical mean) of more than 2.272% with monthly data. An investor timing the market with SDR would have obtained a substantial increase of 0.206 in the ratio. The second essay reconciles a well-known empirical fact about firm- and market-level skewness: aggregating positively skewed firm returns leads to negatively skewed market returns. We decompose market return skewness at the firm level and reconcile this fact through firms' greater response to negative than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems in an incomplete-market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to know the true state of the firm; consequently, payoff information is costly. The novelty is that information acquisition is more costly in bad times than in good times, as confirmed by previous empirical evidence. When a recession begins, costly learning raises liquidity premia, causing a liquidity-evaporation problem, as has also been confirmed empirically.
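The thesis's exact SDR formula is not given in the abstract; the standard downside/upside conditional betas it builds on are shown below (the usual definitions, conditioning on below- or above-mean market returns; these are assumed ingredients, not the thesis's own indicator).

```latex
% Downside and upside betas relative to the market mean \mu_m
% (standard definitions; the SDR indicator characterizes the asymmetry
% between such conditional betas):
\beta_i^{-} = \frac{\mathrm{Cov}(r_i, r_m \mid r_m < \mu_m)}
                   {\mathrm{Var}(r_m \mid r_m < \mu_m)},
\qquad
\beta_i^{+} = \frac{\mathrm{Cov}(r_i, r_m \mid r_m > \mu_m)}
                   {\mathrm{Var}(r_m \mid r_m > \mu_m)}
```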