928 results for Discrete-time control


Relevance:

80.00%

Publisher:

Abstract:

We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming a zero-forcing method to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
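
The record above mentions solving a system of discrete-time Sylvester equations at each Newton step. As a point of reference only (the paper's structured solver is not reproduced here), a minimal sketch of solving a single discrete-time Sylvester (Stein-type) equation A X B - X + C = 0 by Kronecker vectorization, assuming small matrix dimensions:

    import numpy as np

    def solve_discrete_sylvester(A, B, C):
        # Solve A @ X @ B - X + C = 0 via vec(A X B) = (B^T kron A) vec(X),
        # using column-major vectorization; only practical for small sizes.
        n, m = C.shape
        K = np.kron(B.T, A) - np.eye(n * m)
        x = np.linalg.solve(K, -C.flatten(order="F"))
        return x.reshape((n, m), order="F")

    # quick self-check with random, well-conditioned data
    rng = np.random.default_rng(0)
    A = 0.5 * rng.standard_normal((3, 3))
    B = 0.5 * rng.standard_normal((2, 2))
    C = rng.standard_normal((3, 2))
    X = solve_discrete_sylvester(A, B, C)
    print(np.allclose(A @ X @ B - X + C, np.zeros_like(C)))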

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

Consumer electronics has evolved rapidly over the past five years. Many products now include “smart home” capabilities that enable communication and interoperability among smart devices, and even more devices and sensors can be remotely controlled and monitored through cloud services. While smart home systems have become far more affordable to the average consumer than the early solutions of decades ago, many issues still need to be fixed or improved: energy efficiency, connectivity with other devices and applications, security and privacy, reliability, and response time. This paper focuses on designing Internet of Things (IoT) node and platform architectures that take these issues into account, reviews other currently used solutions, and selects technologies to provide a better solution. The node architecture aims for energy efficiency and modularity, while the platform architecture targets scalability, portability, maintainability, performance, and modularity. Moreover, the platform architecture attempts to improve the user experience by providing higher reliability and lower response time than alternative platforms. The architectures were developed iteratively using a development process involving research, planning, design, implementation, testing, and analysis, and they were documented using Kruchten’s 4+1 view model, which describes the use cases and the different views of the architectures. The node architecture consists of energy-efficient hardware (an FC3180 microprocessor and a CC2520 RF transceiver), a modular operating system (Contiki), and a communication protocol (AllJoyn) chosen to provide better interoperability with other IoT devices and applications. The platform architecture provides reliable, low-response-time control, monitoring, and initial setup capabilities by using web technologies on devices such as smartphones, tablets, and computers. Furthermore, an optional cloud service is provided for controlling devices and monitoring sensors remotely, using scalable, high-performance backend technologies that enable low response time and high reliability.

Relevance:

80.00%

Publisher:

Abstract:

A dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Science and Computer Engineering

Relevance:

80.00%

Publisher:

Abstract:

The objective of this study was to understand the effects of population heterogeneity, missing data, and causal relationships on parameter estimates from statistical models when analyzing change in medication use. From a public health perspective, two timely topics were addressed: the use and effects of statins in primary prevention of cardiovascular disease, and polypharmacy in the older population. Growth mixture models were applied to characterize the accumulation of cardiovascular and diabetes medications in an apparently healthy population of statin initiators. The causal effect of statin adherence on the incidence of acute cardiovascular events was estimated using marginal structural models and compared with discrete-time hazards models. The impact of missing data on growth estimates of the evolution of polypharmacy was examined by comparing statistical models under different assumptions about the missing-data mechanism. The data came from Finnish administrative registers and from the population-based Geriatric Multidisciplinary Strategy for the Good Care of the Elderly study conducted in Kuopio, Finland, during 2004–07. Five distinct patterns of accumulating medications emerged among the apparently healthy statin initiators during the two years after statin initiation. Properly accounting for time-varying dependencies between statin adherence and confounders using marginal structural models produced estimates comparable to those from a discrete-time hazards model. The missing-data mechanism was shown to be a key component when estimating the evolution of polypharmacy among older persons. In conclusion, population heterogeneity, missing data, and causal relationships are important aspects of longitudinal studies; they relate to the study question and should be critically assessed when performing statistical analyses, and analyses should be supplemented with sensitivity analyses of the model assumptions.
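
For reference, a discrete-time hazards model of the kind mentioned above is commonly fitted as a logistic regression on person-period data (one row per subject per interval at risk). A minimal sketch with simulated data; the variable names and hazard values are illustrative, not the study's registers:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Build person-period data: one row per subject per interval at risk;
    # follow-up stops at the first event or after 8 intervals.
    rows = []
    for subj in range(300):
        adherent = int(rng.integers(0, 2))      # time-fixed covariate for simplicity
        for t in range(1, 9):
            hazard = 0.05 + 0.02 * t - 0.04 * adherent
            event = int(rng.random() < hazard)
            rows.append({"id": subj, "interval": t, "adherent": adherent, "event": event})
            if event:
                break
    pp = pd.DataFrame(rows)

    # Discrete-time hazards model: logistic regression of the interval-specific
    # event indicator on the interval index and the covariate of interest.
    fit = smf.logit("event ~ interval + adherent", data=pp).fit(disp=False)
    print(fit.params)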

Relevance:

80.00%

Publisher:

Abstract:

The thermal and aerial environment conditions inside animal housing facilities change throughout the day because of the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed over the facility area must be monitored. This work proposes that the variation in time of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be accurately modeled from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variation of these environmental variables, transforming the data so that the observations become independent of the time spent taking the measurements. The proposed method brings the values recorded with time delays close to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature: the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded by data loggers.
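
The abstract does not give the numerical method itself; purely as an illustration of the underlying idea (readings taken with a time delay are brought to a common reference instant), a sketch that interpolates one monitored point's readings in time, with made-up values:

    import numpy as np

    def correct_to_reference(times_s, values, t_ref_s):
        # Estimate the reading at the reference instant from readings taken
        # at different times while the operator moved between points.
        return float(np.interp(t_ref_s, times_s, values))

    # air temperature at one spatial point, logged 0, 600 and 1200 s after
    # the measurement round started (illustrative values, degrees Celsius)
    times = np.array([0.0, 600.0, 1200.0])
    temps = np.array([24.1, 25.0, 25.8])
    print(correct_to_reference(times, temps, t_ref_s=300.0))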

Relevance:

80.00%

Publisher:

Abstract:

Queueing theory provides models, structural insights, problem solutions, and algorithms to many application areas. Because of its practical applicability to production, manufacturing, home automation, communications technology, and so on, increasingly complex systems require more elaborate models, techniques, and algorithms to be developed. Discrete-time models are very suitable in many situations, although the analysis of discrete-time systems is technically more involved than that of their continuous-time counterparts. In this paper we consider a discrete-time queueing system in which server failures can occur and priority messages are present. The possibility of server failures with a general lifetime distribution is considered. We carry out an extensive study of the system by computing generating functions for the steady-state distribution of the number of messages in the queue and in the system. We also obtain generating functions for the stationary distribution of the busy period and of the sojourn times of a message in the server and in the system. Performance measures of the system are also provided.
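
As context for the model described above, a toy slot-by-slot simulation of a discrete-time queue whose server can break down; the geometric failure/repair times and parameter values are illustrative and much simpler than the paper's general lifetime distribution and priority classes:

    import random

    random.seed(0)
    p_arrival, p_service, p_fail, p_repair = 0.3, 0.5, 0.02, 0.2

    queue_len, server_up = 0, True
    samples = []
    for slot in range(100_000):
        if random.random() < p_arrival:              # Bernoulli arrivals per slot
            queue_len += 1
        if server_up:
            if queue_len > 0 and random.random() < p_service:
                queue_len -= 1                        # service completion this slot
            if random.random() < p_fail:
                server_up = False                     # server breaks down
        elif random.random() < p_repair:
            server_up = True                          # repair completes
        samples.append(queue_len)

    print(sum(samples) / len(samples))                # time-average queue length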

Relevance:

80.00%

Publisher:

Abstract:

Doctorate in Mathematics

Relevance:

80.00%

Publisher:

Abstract:

For the past three decades the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond current standards without increasing the price considerably. Homogeneous Charge Compression Ignition (HCCI) is one combustion technique with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI operates on much leaner mixtures than current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion-control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts: one part deals with developing an HCCI experimental setup, and the other focuses on developing a grey-box modelling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run it in HCCI mode, and comprises the details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor it in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups on two different engines are studied and a grey-box model for emission prediction is developed. The grey-box model is trained on 75% of the data and the remaining data is used for validation. In this study, the grey-box model improved the accuracy of engine-performance prediction by 70% on average over an empirical (black-box) model. The grey-box model offers a solution to the difficulty of real-time control of an HCCI engine, and the model in this thesis is the first control-oriented model in the literature for predicting HCCI engine emissions for control.
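
To make the grey-box idea concrete (a physics-motivated functional form whose parameters are fitted to data, then checked on the held-out 25%), a sketch with an invented Arrhenius-like NOx expression and synthetic data; the functional form, variable names, and numbers are hypothetical and are not the thesis model:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)

    def nox_model(X, a, b, c):
        phi, T_in = X                               # equivalence ratio, intake temp [K]
        return a * phi**b * np.exp(-c / T_in)       # assumed Arrhenius-like structure

    # synthetic "measurements" standing in for the ~600 experimental points
    phi = rng.uniform(0.25, 0.45, 600)
    T_in = rng.uniform(380.0, 440.0, 600)
    nox = nox_model((phi, T_in), 5.0e3, 2.0, 1500.0) * (1 + 0.05 * rng.standard_normal(600))

    idx = rng.permutation(len(nox))
    n_train = int(0.75 * len(nox))                  # 75% train / 25% validation split
    tr, va = idx[:n_train], idx[n_train:]

    params, _ = curve_fit(nox_model, (phi[tr], T_in[tr]), nox[tr], p0=[1e3, 1.5, 1000.0])
    pred = nox_model((phi[va], T_in[va]), *params)
    print("validation RMS error:", np.sqrt(np.mean((pred - nox[va]) ** 2)))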

Relevance:

80.00%

Publisher:

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that the operation of the real system is reproduced as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year period. Such a batch schedule can change dynamically as perturbations that influence the behaviour of the entire system occur during operation. The result of the simulation, the “best” batch schedule, is the one that minimizes the operational costs of the system; the costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. A key factor determining the performance of the simulation model is the way time is represented. In our model an event-based discrete-time representation is selected as the most appropriate for our purposes: the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals and departures of the tanker ships, the openings and closures of the loading/unloading valves of the storage tanks at both terminals, and the arrivals and departures of trains and trucks at the Delivery Terminal. In the feasibility study we analyse the system’s operational performance with different Head Terminal storage-capacity configurations. For these alternative configurations we evaluate the effect of tanker-ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
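
A minimal sketch of the event-based time representation described above: simulated time jumps from one state-changing event to the next instead of advancing in fixed steps. Event names and times are illustrative:

    import heapq

    events = []                                   # priority queue ordered by time
    heapq.heappush(events, (5.0, "tanker_arrival"))
    heapq.heappush(events, (2.0, "valve_open"))
    heapq.heappush(events, (9.5, "truck_departure"))

    t = 0.0
    while events:
        t, kind = heapq.heappop(events)           # jump to the next event time
        print(f"t = {t:5.1f} h: handle {kind}")
        if kind == "tanker_arrival":              # handling an event may schedule new ones
            heapq.heappush(events, (t + 3.0, "valve_close"))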

Relevance:

80.00%

Publisher:

Abstract:

The first paper sheds light on the informational content of high-frequency data and daily data. I assess the economic value of the two families of models by comparing their performance in forecasting asset volatility through the Value at Risk metric. In running the comparison this paper introduces two key assumptions: jumps in prices and a leverage effect in the volatility dynamics. The findings suggest that high-frequency data models do not exhibit superior performance over daily data models. In the second paper, building on Majewski et al. (2015), I propose an affine discrete-time model, labelled VARG-J, which is characterized by a multifactor volatility specification. In the VARG-J model, volatility experiences periods of extreme movements through a jump factor modelled as an Autoregressive Gamma Zero process. Estimation under the historical measure is done by quasi-maximum likelihood and the Extended Kalman Filter; this strategy allows both volatility factors to be filtered out by introducing a measurement equation that relates the realized volatility to the latent volatility. The risk-premia parameters are calibrated using call options written on the S&P 500 Index. The results clearly illustrate the important contribution of the jump factor to the pricing performance for options and the economic significance of the volatility jump risk premia. In the third paper, I analyze whether there is empirical evidence of contagion at the bank level, measuring the direction and the size of contagion transmission between European markets. To understand and quantify contagion transmission in the banking market, I estimate the econometric model of Aït-Sahalia et al. (2015), in which contagion is defined as the within- and between-country transmission of shocks and asset returns are modelled directly as a Hawkes jump-diffusion process. The empirical analysis indicates clear evidence of contagion from Greece to European countries as well as self-contagion in all countries.
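
As a rough illustration of the kind of jump factor mentioned above, a sketch that simulates an autoregressive gamma path able to stay at exactly zero, via a Poisson–gamma mixture; the parameterization is illustrative and is not the VARG-J specification:

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_arg_zero(n, alpha, beta, scale, y0=0.0):
        # Conditional on y[t-1], a Poisson mixing variable drives a gamma draw;
        # when the mixing variable is zero the process sits at exactly zero.
        y = np.empty(n)
        y[0] = y0
        for t in range(1, n):
            z = rng.poisson(alpha + beta * y[t - 1])
            y[t] = rng.gamma(z, scale) if z > 0 else 0.0
        return y

    path = simulate_arg_zero(1000, alpha=0.1, beta=0.9 / 0.05, scale=0.05)
    print((path == 0).mean(), path.mean())   # fraction of zeros and average level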

Relevance:

80.00%

Publisher:

Abstract:

Over the past years, the popularity of ray tracing (RT) models has been increasing. Since the nineties, RT has been used for field prediction in environments such as indoor and urban scenarios. Nevertheless, with the advent of new technologies the channel has become decidedly more dynamic, and performing an RT simulation at each discrete time instant becomes computationally expensive. In this thesis, a new dynamic ray tracing (DRT) approach is presented in which, from a single ray tracing simulation at an initial time t0, analytical formulas are used to track the motion of the interaction points. The benefit of this approach is that Doppler frequencies and channel predictions can be derived at every time instant without resorting to multiple RT runs, therefore shortening the computation time. DRT performance was studied on two case studies, and the results show the accuracy and the computational gain that derive from this approach. Another issue addressed in this thesis is the exhaustion of some licensed frequency bands. To deal with this problem, a novel unselfish spectrum-leasing scheme for cognitive radio networks (CRNs) is proposed that offers an energy-efficient solution minimizing the environmental impact of the network. In addition, a network-management architecture is introduced and resource allocation is formulated as a constrained sum-energy-efficiency maximization problem. System simulations demonstrate an increase in the energy efficiency of the primary users’ network compared with previously proposed algorithms.
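
The link between tracked interaction points and Doppler frequencies can be illustrated with a single-bounce path: the Doppler shift follows from the rate of change of the total path length. A small sketch with made-up geometry and carrier frequency:

    import numpy as np

    C = 299_792_458.0     # speed of light [m/s]
    FC = 3.5e9            # carrier frequency [Hz] (illustrative)

    def path_length(tx, p, rx):
        # total TX -> interaction point -> RX path length
        return np.linalg.norm(p - tx) + np.linalg.norm(rx - p)

    tx = np.array([0.0, 0.0, 10.0])
    rx = np.array([50.0, 0.0, 1.5])
    p0 = np.array([25.0, 12.0, 3.0])       # interaction point at time t0
    v = np.array([0.0, -1.5, 0.0])         # its (analytically tracked) velocity [m/s]

    dt = 1e-3                              # finite-difference step [s]
    dL_dt = (path_length(tx, p0 + v * dt, rx) - path_length(tx, p0, rx)) / dt
    f_doppler = -FC / C * dL_dt            # shrinking path -> positive Doppler shift
    print(f_doppler)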

Relevance:

80.00%

Publisher:

Abstract:

Wireless power transfer (WPT) is becoming a crucial and demanding task in the IoT world. Despite the already known solutions exploiting a near-field powering approach, far-field WPT is definitely more challenging, and commercial applications are not yet available. This thesis proposes the recent frequency-diverse array (FDA) technology as a potential candidate for realizing smart and reconfigurable far-field WPT solutions. In the first section of this work, an analysis of several FDA systems is performed, identifying the planar array with circular geometry as the most promising layout in terms of radiation properties. Then, a novel energy-aware solution to handle the critical time variability of the FDA beam pattern is proposed. It consists of a time-control strategy based on a triangular pulse, and it allows ad-hoc, real-time WPT to be achieved. Moreover, an essential frequency-domain analysis of the radiating behaviour of a pulsed FDA system is presented. This study highlights the benefits of exploiting the intrinsic pulse harmonics for powering purposes, thus minimising the power loss. Later, the electromagnetic design of a radial FDA architecture is addressed. In this context, an exhaustive investigation of miniaturization techniques is carried out; the use of multiple shorting pins together with a meandered feeding network is selected as a powerful solution to halve the original prototype dimensions. Finally, accurate simulations of the designed radial FDA system are performed, and the obtained results are given.
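
The time variability of the FDA beam pattern comes from each element radiating at a slightly offset frequency. A sketch of the array factor of a uniform linear FDA (the thesis works with a circular planar array) with illustrative values, showing how the pattern at a fixed angle drifts with time:

    import numpy as np

    C = 299_792_458.0   # speed of light [m/s]

    def fda_array_factor(theta_rad, t, n_elem=8, f0=5.8e9, delta_f=3e3, d=0.0259):
        n = np.arange(n_elem)
        fn = f0 + n * delta_f                       # per-element frequencies
        # far-field phase of element n at angle theta and time t (common range term dropped)
        phase = 2 * np.pi * (fn * t + f0 * n * d * np.sin(theta_rad) / C)
        return np.abs(np.exp(1j * phase).sum()) / n_elem

    theta = np.deg2rad(20.0)
    for t in (0.0, 50e-6, 100e-6):                  # the pattern drifts with time
        print(t, fda_array_factor(theta, t))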

Relevance:

50.00%

Publisher:

Abstract:

Here we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic program whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that enforce the feasibility and convergence of the target-calculation layer. We also consider the case in which there is polytopic uncertainty in the steady-state model used in the target calculation. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one model from a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
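
A minimal sketch of the target-calculation idea, assuming a known steady-state model x = A x + B u and simple input bounds (the matrices, bounds, and RTO set point are illustrative, and the polytopic-uncertainty handling is omitted):

    import numpy as np
    from scipy.optimize import minimize

    # Find the reachable steady state closest to the RTO set point, subject to
    # the steady-state relation (I - A) x = B u and an input bound.
    A = np.array([[0.8, 0.1],
                  [0.0, 0.7]])
    B = np.array([[0.5],
                  [0.3]])
    x_rto = np.array([1.0, 0.8])           # set point from the RTO layer

    def objective(z):                      # z = [x1, x2, u]
        return np.sum((z[:2] - x_rto) ** 2)

    def steady_state(z):                   # equals zero at a steady state
        x, u = z[:2], z[2:]
        return (np.eye(2) - A) @ x - B @ u

    res = minimize(objective, x0=np.zeros(3),
                   constraints={"type": "eq", "fun": steady_state},
                   bounds=[(None, None), (None, None), (-1.0, 1.0)])
    print("reachable target x:", res.x[:2], "with input u:", res.x[2])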