897 results for estimation and filtering
Abstract:
In previous papers we described a model for capacity analysis in CDMA systems using DC-Cell, a GIS-based planning tool developed at Universidad Politecnica de Valencia, and MATLAB. We showed some initial results of that model, and we are now exploring different parameters, such as cell size, proximity between cells, number of cells in the system, and CDMA “clustering”, in order to improve the planning process for third-generation systems. In this paper we show the results for variations of some of these parameters, specifically the cell size and the number of cells. In CDMA systems it is quite common to assume only one carrier frequency for capacity estimation, and it is intuitive to think that more base stations mean more users. However, the multiple access interference problem in CDMA systems may place a limit on that assumption, in a similar way to what occurs in FDMA and TDMA systems.
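A worked sketch of the single-carrier capacity reasoning the abstract alludes to: in the classical CDMA uplink capacity formula, multiple access interference caps the number of simultaneous users, so adding base stations does not increase capacity without bound. All parameter values below are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: classical single-cell CDMA uplink capacity estimate.
# Illustrative parameter values; not taken from the paper.

import math

W = 3.84e6      # chip rate (Hz), e.g. a UMTS wideband carrier
R = 12.2e3      # user bit rate (bit/s), e.g. a voice service
eb_n0_db = 5.0  # required Eb/N0 per user (dB)
v = 0.5         # voice activity factor
f = 0.55        # other-cell to own-cell interference ratio

eb_n0 = 10 ** (eb_n0_db / 10)

# Pole capacity: the interference-limited ceiling on simultaneous users.
# Adding base stations shrinks cells but raises other-cell interference (f),
# so capacity does not grow without bound.
n_pole = 1 + (W / R) / (eb_n0 * v * (1 + f))
print(f"Approximate pole capacity: {math.floor(n_pole)} users per cell")
```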
Abstract:
Master's in Accounting and Financial Analysis,
Abstract:
This work presents a low-cost architecture for the development of synchronized phasor measurement units (PMUs). The device is intended to be connected to the low-voltage grid, which allows the monitoring of transmission and distribution networks. The project comprises a complete PMU, with an instrumentation module for use in the low-voltage network, a GPS module to provide the sync signal and the time stamp for the measurements, a processing unit with the acquisition system, phasor estimation and data formatting according to the standard, and finally a communication module for data transmission. For the development and performance evaluation of this PMU, a set of applications was developed in the LabVIEW environment with specific features that allow the behavior of the measurements to be analyzed, the error sources of the PMU to be identified, and all the tests proposed by the standard to be applied. The first application, useful for the development of the instrumentation, consists of a function generator integrated with an oscilloscope, which allows signals to be generated and acquired synchronously, in addition to the handling of samples. The second and main application is the test platform, capable of generating all tests specified by the synchronized phasor measurement standard IEEE C37.118.1, allowing data to be stored or the measurements to be analyzed in real time. Finally, a third application was developed to evaluate the test results and generate calibration curves to adjust the PMU. The results include all the tests proposed by the synchrophasor standard and an additional test that evaluates the impact of noise. Moreover, through two prototypes connected to the electrical installations of consumers in the same distribution circuit, monitoring records were obtained that allowed load identification at the consumer, power quality analysis, and event detection at the distribution and transmission levels.
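As a hedged illustration of the measurements the IEEE C37.118.1 tests score, the sketch below estimates a synchrophasor from one cycle of samples with a single-bin DFT and evaluates it with the standard's total vector error (TVE) metric; the signal parameters are invented for the example, and the estimator is a textbook one, not necessarily the prototype's.

```python
# Hedged sketch: one-cycle DFT phasor estimation and the IEEE C37.118.1
# total vector error (TVE) metric. Signal parameters are illustrative.

import numpy as np

f0 = 60.0            # nominal grid frequency (Hz)
fs = 64 * f0         # sampling rate: 64 samples per cycle
n = np.arange(64)    # one nominal cycle of samples
t = n / fs

true_mag, true_ang = 1.0, np.deg2rad(30.0)
x = true_mag * np.sqrt(2) * np.cos(2 * np.pi * f0 * t + true_ang)

# Single-bin DFT at the fundamental gives the phasor (RMS-scaled).
phasor = np.sqrt(2) / len(n) * np.sum(x * np.exp(-1j * 2 * np.pi * f0 * t))

ref = true_mag * np.exp(1j * true_ang)
tve = abs(phasor - ref) / abs(ref)   # standard requires TVE <= 1% in steady state
print(f"estimate = {abs(phasor):.4f} at {np.degrees(np.angle(phasor)):.2f} deg, "
      f"TVE = {tve:.2e}")
```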
Abstract:
This study presents a proposal for speed servomechanisms without mechanical sensors (sensorless) using induction motors. Techniques for rotor speed estimation are proposed and compared, analyzing their performance under different speed and load conditions. To select the control technique, an analysis of the technical literature on the main control and speed estimation methods in use is first performed, covering their characteristics and limitations. The proposed technique for the sensorless speed servo with an induction motor uses indirect field-oriented control (IFOC), composed of four proportional-integral (PI) controllers: a rotor flux controller, a speed controller, and current controllers on the direct and quadrature axes. Since the main focus of the work is the speed control loop, the recursive least squares (RLS) algorithm was implemented in Matlab to identify the mechanical parameters, namely the moment of inertia and the friction coefficient. Thus, the gains of the outer speed loop controller can be self-adjusted to compensate for any changes in the mechanical parameters. The following speed estimation techniques are analyzed: MRAS based on rotor fluxes, MRAS based on the counter-EMF, MRAS based on instantaneous reactive power, slip calculation, phase-locked loop (PLL), and sliding mode. A sliding-mode-based speed estimation proposal is also presented, in which the structure of the rotor flux observer is modified. To evaluate the techniques, theoretical analyses are performed in the Matlab simulation environment and on an experimental electrical machine drive platform. The DSP TMS320F28069 was used for the experimental implementation of the speed estimation techniques and for verifying their performance over a wide speed range, including load insertion. Based on this analysis, closed-loop sensorless speed control within the IFOC structure was implemented. The results demonstrated the real possibility of replacing mechanical sensors with the estimation techniques proposed and analyzed. Among these, the PLL-based estimator demonstrated the best performance under various conditions, while the sliding-mode-based technique showed good estimation capability in steady state and robustness to parametric variations.
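The self-tuning step described above rests on standard recursive least squares. Below is a minimal sketch, under an assumed first-order mechanical model J·dω/dt + B·ω = Te discretized with forward Euler, of how the moment of inertia and friction coefficient could be identified online; the simulated data and all numeric values are illustrative, not the thesis's.

```python
# Hedged sketch: RLS identification of mechanical parameters (J, B) from
# the model J*dw/dt + B*w = Te, discretized as w[k] = a*w[k-1] + b*Te[k-1]
# with a = 1 - B*Ts/J and b = Ts/J. All values are illustrative.

import numpy as np

Ts, J_true, B_true = 1e-3, 0.02, 0.005
a_true, b_true = 1 - B_true * Ts / J_true, Ts / J_true

rng = np.random.default_rng(0)
Te = rng.uniform(0.0, 2.0, 2000)          # persistently exciting torque input
w = np.zeros(len(Te) + 1)
for k in range(len(Te)):
    w[k + 1] = a_true * w[k] + b_true * Te[k] + 1e-4 * rng.standard_normal()

theta = np.zeros(2)                        # estimates of [a, b]
P = np.eye(2) * 1e3                        # covariance; large = low confidence
lam = 0.995                                # forgetting factor tracks slow drift
for k in range(len(Te)):
    phi = np.array([w[k], Te[k]])          # regressor
    K = P @ phi / (lam + phi @ P @ phi)    # gain
    theta += K * (w[k + 1] - phi @ theta)  # innovation update
    P = (P - np.outer(K, phi @ P)) / lam

a_hat, b_hat = theta
J_hat, B_hat = Ts / b_hat, (1 - a_hat) / b_hat
print(f"J = {J_hat:.4f} (true {J_true}), B = {B_hat:.4f} (true {B_true})")
```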
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike the existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onward, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time-variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1 month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in the estimation of coefficients, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time-variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
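The random walk Metropolis-Hastings technique mentioned above is a generic MCMC scheme. Here is a minimal, self-contained sketch applied to a toy regression posterior; the likelihood, priors, and tuning constants are illustrative assumptions, not the thesis's MIDAS specification.

```python
# Hedged sketch: random walk Metropolis-Hastings on a toy Gaussian
# regression posterior. Priors, data, and step size are illustrative.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
beta_true = np.array([0.5, -1.0])
y = X @ beta_true + 0.3 * rng.standard_normal(200)

def log_post(beta, sigma=0.3, prior_sd=10.0):
    # Gaussian likelihood + weak Gaussian prior (both assumptions).
    resid = y - X @ beta
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(beta**2) / prior_sd**2

beta = np.zeros(2)
lp = log_post(beta)
step = 0.05                     # RW proposal scale; tune for ~25-40% acceptance
draws, accepted = [], 0
for _ in range(5000):
    prop = beta + step * rng.standard_normal(2)   # symmetric random walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis acceptance rule
        beta, lp = prop, lp_prop
        accepted += 1
    draws.append(beta.copy())

draws = np.array(draws[1000:])                    # drop burn-in
print("posterior mean:", draws.mean(axis=0), "acceptance:", accepted / 5000)
```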
Abstract:
We introduce a new class of integer-valued self-exciting threshold models, which is based on the binomial autoregressive model of order one introduced by McKenzie (Water Resour Bull 21:645–650, 1985. doi:10.1111/j.1752-1688.1985.tb05379.x). Basic probabilistic and statistical properties of this class of models are discussed. Moreover, parameter estimation and forecasting are addressed. Finally, the performance of these models is illustrated through a simulation study and an empirical application to a set of measles cases in Germany.
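For intuition, a hedged simulation sketch of a two-regime self-exciting threshold binomial AR(1) process built from binomial thinning follows; the regime parameters and threshold are invented for illustration and need not match the paper's specification.

```python
# Hedged sketch: simulating a two-regime self-exciting threshold
# binomial AR(1) process based on binomial thinning. The parameters
# (alpha, beta per regime, threshold R) are illustrative.

import numpy as np

rng = np.random.default_rng(42)
n = 20          # fixed upper bound of the state space {0, ..., n}
R = 8           # threshold separating the two regimes
regimes = {     # (alpha, beta) survival/recruitment probabilities per regime
    "low":  (0.4, 0.2),
    "high": (0.6, 0.1),
}

T = 500
x = np.empty(T, dtype=int)
x[0] = 5
for t in range(1, T):
    alpha, beta = regimes["low"] if x[t - 1] <= R else regimes["high"]
    # Binomial thinning: alpha o x[t-1] + beta o (n - x[t-1])
    survivors = rng.binomial(x[t - 1], alpha)
    recruits = rng.binomial(n - x[t - 1], beta)
    x[t] = survivors + recruits

print("sample mean:", x.mean(), "share of time at or below threshold:", np.mean(x <= R))
```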
Abstract:
In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the hydrodynamic stability problem associated with the incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the eigenvalue problem in channel and pipe geometries. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to eigenvalue/stability problems. The underlying analysis consists of constructing both a dual eigenvalue problem and a dual problem for the original base solution. In this way, errors stemming from both the numerical approximation of the original nonlinear flow problem and the underlying linear eigenvalue problem are correctly controlled. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
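As a hedged illustration of the Dual-Weighted-Residual idea for eigenvalue problems (generic notation, not necessarily the article's), a standard error representation for a discrete eigenpair with dual eigenfunction takes the following form:

```latex
% Hedged sketch of a generic DWR error representation for an eigenvalue
% problem a(u,v) = \lambda\, m(u,v); notation is illustrative, not the article's.
\[
  \lambda - \lambda_h \;\approx\;
  \tfrac{1}{2}\Big[\rho(u_h)(z - z_h) \;+\; \rho^{*}(z_h)(u - u_h)\Big],
\]
where
\[
  \rho(u_h)(\varphi)     = \lambda_h\, m(u_h,\varphi) - a(u_h,\varphi), \qquad
  \rho^{*}(z_h)(\varphi) = \lambda_h\, m(\varphi,z_h) - a(\varphi,z_h)
\]
are the primal and dual residuals; the weights $z - z_h$ and $u - u_h$ are
approximated by higher-order recovery on each cell to obtain computable
local error indicators that drive the adaptive mesh refinement.
```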
Abstract:
Master's Dissertation, Aquaculture and Fisheries, Specialization in Aquaculture, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on and threats to forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly and, as a consequence, spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest based k Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods to generate forest inventory information across large spatial extents. The forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for a part of the Lake States and species-specific site index maps for the entire Lake States. Targeting small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or the estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
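A minimal sketch of the RF-kNN idea, assuming the common construction in which Random Forest proximities (the share of trees where two observations fall in the same terminal node) define the nearest field plots used for imputation; the data below are synthetic placeholders, not FIA measurements.

```python
# Hedged sketch: Random Forest proximity based k-NN imputation (RF-kNN).
# Reference plots carry field-measured targets; proximity between a map
# pixel and each reference plot is the share of trees in which they land
# in the same terminal node. All data here are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X_ref = rng.standard_normal((300, 5))          # predictors at field plots
y_ref = X_ref[:, 0] * 40 + 120 + rng.standard_normal(300) * 5  # e.g. biomass
X_new = rng.standard_normal((10, 5))           # predictors at map pixels

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_ref, y_ref)

leaves_ref = rf.apply(X_ref)                   # (n_ref, n_trees) leaf indices
leaves_new = rf.apply(X_new)                   # (n_new, n_trees)

k = 5
preds = np.empty(len(X_new))
for i, row in enumerate(leaves_new):
    proximity = (leaves_ref == row).mean(axis=1)   # shared-leaf fraction
    nearest = np.argsort(proximity)[-k:]           # k most similar plots
    preds[i] = y_ref[nearest].mean()               # impute attribute from them

print(np.round(preds, 1))
```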
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway to provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common. Managed lane demand is usually estimated at the assignment step; therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of the modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
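One building block of the assignment step described above is the route choice between the managed lane and the general-purpose lanes. A hedged sketch of a binary logit split follows; the coefficients and inputs are invented for illustration, not the study's calibrated values.

```python
# Hedged sketch: binary logit split between a tolled managed lane (ML)
# and general-purpose (GP) lanes. Coefficients are illustrative only.

import math

def ml_share(gp_time_min, ml_time_min, toll_usd,
             beta_time=-0.10, beta_toll=-0.40, asc_ml=-0.50):
    """Probability of choosing the managed lane from systematic utilities."""
    v_ml = asc_ml + beta_time * ml_time_min + beta_toll * toll_usd
    v_gp = beta_time * gp_time_min
    return math.exp(v_ml) / (math.exp(v_ml) + math.exp(v_gp))

# As congestion grows, the time saving outweighs the toll and ML share rises.
for gp_time in (20, 30, 40):
    print(gp_time, "min GP ->", round(ml_share(gp_time, 15, 3.0), 3))
```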
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast and high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. The ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place in the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders of magnitude speedup over the simulation-based method.
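A minimal sketch of the second modeling approach described above, assuming an average-case energy model of the form E(n) ≈ E_fixed + e_cmp · C_avg(n) for insertion sort, where C_avg(n) is the expected comparison count on a random permutation; the energy coefficients are placeholders, not measured LEON3 values.

```python
# Hedged sketch: average-case energy model of the form
#   E(n) = E_fixed + e_cmp * C_avg(n)
# where C_avg(n) is the expected number of comparisons insertion sort
# performs on a random permutation (~ n^2/4 for large n). The energy
# coefficients below are placeholders, not measured LEON3 values.

def avg_comparisons(n: int) -> float:
    # Expected comparisons on a uniformly random permutation: inserting
    # the i-th element into a sorted prefix costs about (i + 1) / 2 on average.
    return sum((i + 1) / 2 for i in range(1, n))

def energy_estimate(n: int, e_fixed=1e-6, e_cmp=2e-9) -> float:
    """Predicted energy in joules; coefficients are illustrative."""
    return e_fixed + e_cmp * avg_comparisons(n)

for n in (100, 1000, 10000):
    print(n, f"{energy_estimate(n):.3e} J")
```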
Abstract:
DEA models have been applied as a benchmarking tool in operations management to empirically assess operational and productive efficiency. The wide flexibility in assigning weights in the DEA approach can result in efficiency indicators that do not take into account the relative importance of some inputs. In order to overcome this limitation, in this research we apply a DEA model under restricted weight specification. This model is applied to Spanish hotel companies in order to measure operational efficiency. The restricted weight specification enables us to decrease the influence of unrealistic weights assigned to some units, improve the efficiency estimation, and increase the discriminating potential of the conventional DEA model.
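A hedged sketch of an input-oriented CCR DEA model in multiplier form, with simple lower bounds on the multipliers standing in for the paper's weight restrictions; the data are a synthetic placeholder, not the Spanish hotel sample.

```python
# Hedged sketch: input-oriented CCR DEA (multiplier form) with simple
# lower bounds on the weights standing in for the weight restrictions.
# Data are a synthetic placeholder, not the Spanish hotel sample.

import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

w_min = 0.05   # illustrative lower bound restricting the multipliers

for o in range(n):
    c = np.concatenate([-Y[o], np.zeros(m)])              # maximize u . y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]   # v . x_o = 1
    A_ub = np.hstack([Y, -X])                             # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(w_min, None)] * (s + m), method="highs")
    print(f"unit {o}: efficiency = {-res.fun:.3f}" if res.success
          else f"unit {o}: infeasible under the weight bounds")
```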
Abstract:
This thesis focuses on the dynamics of underactuated cable-driven parallel robots (UACDPRs), including various aspects of robotic theory and practice, such as workspace computation, parameter identification, and trajectory planning. After a brief introduction to CDPRs, UACDPR kinematic and dynamic models are analyzed, under the relevant assumption of inextensible cables. The free oscillatory motion of the end-effector (EE), which is a unique feature of underactuated mechanisms, is studied in detail, from both a kinematic and a dynamic perspective. The free (small) oscillations of the EE around equilibria are proved to be harmonic, and the corresponding natural oscillation frequencies are analytically computed. UACDPR workspace computation and analysis are then performed. A new performance index is proposed for the analysis of the influence of actuator errors on cable tensions around equilibrium configurations, and a new type of workspace, called tension-error-insensitive, is defined as the set of poses that a UACDPR EE can statically attain even in the presence of actuation errors, while preserving tensions between assigned (positive) bounds. EE free oscillations are then employed to conceive a novel procedure aimed at identifying the EE inertial parameters. This approach does not require the use of force or torque measurements. Moreover, a self-calibration procedure for the experimental determination of UACDPR initial cable lengths is developed, which consequently enables the robot to automatically infer the EE initial pose at machine start-up. Lastly, trajectory planning of UACDPRs is investigated. Two alternative methods are proposed, which aim at (i) reducing EE oscillations even when model parameters are uncertain, or (ii) eliminating EE oscillations in case model parameters are perfectly known. EE oscillations are reduced in real time by dynamically scaling a nominal trajectory and filtering it with an input shaper, whereas they can be eliminated if an off-line trajectory is computed that accounts for the system's internal dynamics.
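The real-time oscillation-reduction strategy above relies on input shaping. A minimal sketch of the classic two-impulse zero-vibration (ZV) shaper follows; the natural frequency and damping ratio are placeholders, not the identified UACDPR values.

```python
# Hedged sketch: a zero-vibration (ZV) input shaper, the classic two-impulse
# filter used in oscillation-reduction strategies like the one above. The
# natural frequency and damping ratio below are placeholders.

import math

def zv_shaper(omega_n: float, zeta: float):
    """Return (amplitudes, times) of the two ZV impulses."""
    omega_d = omega_n * math.sqrt(1 - zeta**2)        # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    amps = (1 / (1 + K), K / (1 + K))                 # sum to 1: no net scaling
    times = (0.0, math.pi / omega_d)                  # half a damped period apart
    return amps, times

# Convolving the reference trajectory with these impulses cancels the
# residual oscillation of a mode at (omega_n, zeta).
amps, times = zv_shaper(omega_n=2 * math.pi * 0.8, zeta=0.02)
print("impulse amplitudes:", [round(a, 4) for a in amps],
      "at times [s]:", [round(t, 4) for t in times])
```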
Abstract:
This paper documents the development and findings of the Good Practice Report on Technology-Enhanced Learning and Teaching funded by the Australian Learning and Teaching Council (ALTC). Developing the Good Practice Report required a meta-analysis of 33 ALTC learning and teaching projects relating to technology funded between 2006 and 2010. This report forms one of 12 completed Good Practice Reports on a range of different topics commissioned by the ALTC and Australian Government Office for Learning and Teaching (OLT). The reports aim to reduce issues relating to dissemination that projects face within the sector by providing educators with an efficient and accessible way of engaging with and filtering through the resources and experiences of numerous learning and teaching projects funded by the ALTC and OLT. The Technology-Enhanced Learning and Teaching Report highlights examples of good practice and provides outcomes and recommendations based on the meta-analysis of the relevant learning and teaching projects. However, in order to ensure the value of these reports is realised, educators need to engage with the reports and integrate the information and findings into their practice. The paper concludes by detailing how educational networks can be utilised to support dissemination.