957 results for Minimum Variance Model
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels in both single- and multiuser scenarios. We conceive a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum bit-error-ratio (BER) optimization framework for detecting the finite-alphabet received signal. Using this radical approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation in the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit at increased complexity.
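As a concrete illustration of the binning idea, the sketch below grid-searches a discretized CFO range and scores each candidate against a known pilot block. The correlation score and all parameter values are illustrative stand-ins, not the paper's detection-theoretic criterion.

```python
import numpy as np

def estimate_cfo_by_binning(rx, pilot, n_bins=64, cfo_max=0.1):
    """Grid-search CFO estimation: de-rotate the received block by each
    candidate offset and keep the bin that best matches the known pilot
    (illustrative correlation score, not the paper's detection metric)."""
    n = np.arange(len(rx))
    candidates = np.linspace(-cfo_max, cfo_max, n_bins)  # discretized CFO bins
    scores = [np.abs(np.vdot(pilot, rx * np.exp(-2j * np.pi * eps * n)))
              for eps in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a pilot distorted by a true normalized CFO of 0.03 plus noise.
rng = np.random.default_rng(0)
pilot = np.exp(2j * np.pi * rng.random(128))
rx = pilot * np.exp(2j * np.pi * 0.03 * np.arange(128))
rx += 0.05 * rng.standard_normal(128)
print(estimate_cfo_by_binning(rx, pilot))  # ~0.03, up to bin resolution
```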
Discriminative language model adaptation for Mandarin broadcast speech transcription and translation
Abstract:
This paper investigates unsupervised test-time adaptation of language models (LMs) using discriminative methods for a Mandarin broadcast speech transcription and translation task. A standard approach to adapting interpolated language models is to optimize the component weights by minimizing the perplexity on supervision data. This is a widely made approximation for language modeling in automatic speech recognition (ASR) systems. For speech translation tasks, it is unclear whether a strong correlation still exists between perplexity and the various forms of error cost functions used in the recognition and translation stages. The proposed minimum Bayes risk (MBR) based approach provides a flexible framework for unsupervised LM adaptation and generalizes to a variety of recognition and translation error metrics. LM adaptation is performed at the audio document level using either the character error rate (CER) or the translation edit rate (TER) as the cost function. An efficient parameter estimation scheme using the extended Baum-Welch (EBW) algorithm is proposed. Experimental results on a state-of-the-art speech recognition and translation system are presented. The MBR-adapted language models gave the best recognition and translation performance and reduced the TER score by up to 0.54% absolute. © 2007 IEEE.
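For reference, the "standard approach" contrasted above can be sketched as EM estimation of linear-interpolation weights that minimizes perplexity on supervision data; the component LM probabilities below are hypothetical inputs.

```python
import math

def optimize_interpolation_weights(streams, n_iters=50):
    """EM re-estimation of interpolation weights for component LMs, minimizing
    perplexity on supervision data. streams[k][i] is the probability that
    component LM k assigns to the i-th supervision token."""
    K, N = len(streams), len(streams[0])
    w = [1.0 / K] * K
    for _ in range(n_iters):
        acc = [0.0] * K
        for i in range(N):
            mix = sum(w[k] * streams[k][i] for k in range(K))
            for k in range(K):
                acc[k] += w[k] * streams[k][i] / mix  # posterior of component k
        w = [a / N for a in acc]
    logprob = sum(math.log(sum(w[k] * streams[k][i] for k in range(K)))
                  for i in range(N))
    return w, math.exp(-logprob / N)  # weights and resulting perplexity

# Two hypothetical component LMs scored on four supervision tokens:
print(optimize_interpolation_weights([[0.10, 0.20, 0.05, 0.30],
                                      [0.20, 0.10, 0.10, 0.10]]))
```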
Abstract:
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. These can be used either to represent different genres or tasks found in diverse text sources, or to capture the stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is based entirely on well-defined WFST operations, only minimal changes to decoding tools are needed, and a wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. A significant relative error rate reduction of 7.3% was obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences. ©2010 IEEE.
Abstract:
This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure is a classical method based on the indirect inference principle, with an unrestricted VAR as the auxiliary model. On the one hand, the proposed estimation method overcomes some of the shortcomings of using a structural VAR as the auxiliary model to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a maximum-likelihood or Bayesian approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results for the U.S. show that the fit of the NKM model under an optimal monetary plan is much worse than its fit assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small relative to assuming either a forward-looking Taylor rule or an optimal plan.
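A minimal sketch of the indirect inference loop, with a toy autoregressive process standing in for the NKM model and an identity weighting matrix: simulate the structural model at candidate parameters, fit the unrestricted VAR(1) auxiliary model by OLS, and minimize the distance between simulated and observed VAR coefficients.

```python
import numpy as np
from scipy.optimize import minimize

def fit_var1(y):
    """OLS fit of an unrestricted VAR(1) auxiliary model y_{t+1} = y_t A + e;
    returns the vectorized coefficient matrix."""
    A, *_ = np.linalg.lstsq(y[:-1], y[1:], rcond=None)
    return A.ravel()

def simulate_model(theta, T, seed):
    """Placeholder for the structural (NKM) model: a two-variable process whose
    persistence rho and shock scale sigma stand in for structural parameters."""
    rho, sigma = theta
    rng = np.random.default_rng(seed)  # common random numbers across evaluations
    y = np.zeros((T, 2))
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * rng.standard_normal(2)
    return y

def indirect_inference(y_data, T_sim=5000, seed=0):
    beta_data = fit_var1(y_data)
    def distance(theta):
        d = fit_var1(simulate_model(theta, T_sim, seed)) - beta_data
        return d @ d  # identity weighting matrix, for simplicity
    return minimize(distance, x0=[0.5, 1.0], method="Nelder-Mead").x

# Recover the parameters of data generated by the same toy model:
print(indirect_inference(simulate_model([0.8, 1.0], 2000, seed=42)))
```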
Abstract:
This paper analyzes whether a minimum wage can be an optimal redistribution policy when distorting taxes and lump-sum transfers are also available in a competitive economy. We build a static general equilibrium model with a Ramsey planner making decisions on taxes, transfers, and the minimum wage level. Workers are assumed to differ only in their productivity. We find that optimal redistribution may indeed imply the use of a minimum wage. The key factor driving our results is the reaction of the demand for low-skilled labor to the minimum wage law. Hence, an optimal minimum wage is most likely when low-skilled households are scarce, the complementarity between the two types of workers is large, or the difference in productivity is small. The main contribution of the paper is a modelling approach that allows us to adopt analysis and solution techniques widely used in recent public finance research. Moreover, this modelling strategy is flexible enough to allow for potential extensions that incorporate dynamics into the model.
Abstract:
This paper presents a model designed to study vertical interactions between wheel and rail when the wheel moves over a rail welding. The model is formulated in the spatial domain and is built simply from track receptances. The receptances are obtained from a full frequency-domain track model previously developed by the authors, which includes deformation of the rail section and propagation of bending, elongation, and torsional waves along an infinite track. Transformation between domains is achieved by applying a modified rational fraction polynomial method. This yields a track model with very few degrees of freedom, and thus minimal integration time, while matching the original model well over a sufficiently broad frequency range. Wheel-rail interaction is modelled with a non-linear Hertzian spring, and consideration is given to the parametric excitation caused by the wheel moving over the sleepers, since this is a moving-wheel model rather than a moving-irregularity model. The model is used to study the dynamic loads and displacements arising at the wheel-rail contact when passing over a welding defect at different speeds.
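The non-linear Hertzian spring mentioned above follows the usual three-halves power law, with contact loss when the bodies separate; a minimal sketch using an order-of-magnitude contact constant rather than the paper's value:

```python
def hertz_contact_force(penetration, c_h=1.0e11):
    """Non-linear Hertzian wheel-rail contact: F = C_H * delta^(3/2) while the
    surfaces interpenetrate, zero on loss of contact. The constant C_H
    (N/m^1.5) is only a typical order of magnitude for wheel-rail contact."""
    return c_h * penetration ** 1.5 if penetration > 0.0 else 0.0
```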
Abstract:
The characteristics of supersonic combustion obtained by injecting kerosene vapor into a Mach 2.5 crossflow at various preheat temperatures and pressures were investigated experimentally. A two-stage heating system was designed and tested that can prepare 0.8 kg of heated kerosene at up to 820 K and a pressure of 5.5 MPa with negligible fuel coking. In order to simulate the thermophysical properties of kerosene over a wide range of thermodynamic conditions, a three-component surrogate that matches the compound classes of the parent fuel was employed. The flow rate of kerosene vapor was calibrated using a sonic nozzle, and flow rates computed using the surrogate fuel are in agreement with the experimental data. Kerosene jets at various preheat temperatures injected into both a quiescent environment and the Mach 2.5 crossflow were visualized. It was found that at an injection pressure of 4 MPa and a preheat temperature of 550 K the kerosene jet was completely in the vapor phase, while maintaining almost the same penetration depth as liquid kerosene injection. Supersonic combustion tests were also carried out to compare combustor performance for vaporized kerosene injection, liquid kerosene injection, and effervescent atomization with hydrogen barbotage under similar stagnation conditions. Experimental results demonstrated that vaporized kerosene injection leads to better combustor performance. Further parametric study of vaporized kerosene injection in a supersonic model combustor is needed to assess the combustion efficiency and to identify the controlling mechanism for the overall combustion enhancement.
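Sonic-nozzle calibration rests on the choked-flow relation for an ideal gas; a minimal sketch, with the discharge coefficient and the surrogate's gas properties left as assumed placeholders:

```python
import math

def choked_mass_flow(p0, t0, area, gamma, r_specific, cd=0.99):
    """Ideal-gas choked (sonic) nozzle mass flow rate:
    mdot = Cd*A*p0*sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2*(gamma-1))).
    gamma and r_specific for the kerosene surrogate vapor are assumptions."""
    return (cd * area * p0 * math.sqrt(gamma / (r_specific * t0))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
```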
Abstract:
Longline hook rates of bigeye and yellowfin tunas in the eastern Pacific Ocean were standardized by maximum depth of fishing, area, and season, using generalized linear models (GLMs). The annual trends of the standardized hook rates differ from the unstandardized ones and are more likely to represent the changes in abundance of tunas in the age groups most vulnerable to longliners in the fishing grounds. For both species, all of the interactions in the GLMs involving years, depths of fishing, areas, and seasons were significant. This means that the annual trends in hook rates depend on which depths, areas, and seasons are being considered. The overall average hook rates for each species were estimated by weighting each 5-degree quadrangle equally and each season by the number of months in it. Since the annual trends in hook rates for each fishing-depth category are roughly the same for bigeye, total average annual hook rate estimates are possible with the GLM. For yellowfin, the situation is less clear because of a preponderance of empty cells in the model. The full models explained 55% of the variation in bigeye hook rate and 33% of that of yellowfin.
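One common way to implement such a standardization is a GLM whose year coefficients serve as the standardized annual index; the paper does not state its error structure, so the Poisson family, log effort offset, and synthetic records below are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Tiny synthetic stand-in for the longline records, with the paper's
# year/depth/area/season factors and hooks fished as the effort measure.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "year": rng.choice(["1988", "1989", "1990"], n),
    "depth": rng.choice(["shallow", "deep"], n),
    "area": rng.choice(["N", "S"], n),
    "season": rng.choice(["Q1", "Q2", "Q3", "Q4"], n),
    "hooks": rng.integers(1000, 3000, n),
})
df["catch"] = rng.poisson(0.002 * df["hooks"])

model = smf.glm(
    "catch ~ year + depth + area + season",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["hooks"]),
).fit()
print(model.summary())  # year terms give the standardized annual hook-rate index
```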
Abstract:
The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
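The optimal cascade configuration can be illustrated with a scalar toy problem: a Kalman filter produces the state estimate that drives a bounded, bang-bang control law. A minimal sketch, not the thesis' vehicle dynamics or MTV controller:

```python
def kalman_bang_bang(y_obs, a=1.0, b=0.1, q=0.01, r=0.25, u_max=1.0):
    """Cascade sketch: a scalar Kalman filter feeding a bang-bang controller
    that applies full bounded thrust against the estimated deviation."""
    x_hat, p, u = 0.0, 1.0, 0.0
    controls = []
    for y in y_obs:
        # Predict with the previous control, then correct with the measurement.
        x_hat, p = a * x_hat + b * u, a * a * p + q
        k = p / (p + r)
        x_hat, p = x_hat + k * (y - x_hat), (1.0 - k) * p
        # Bang-bang law: full thrust opposing the estimated deviation.
        u = -u_max if x_hat > 0.0 else u_max
        controls.append(u)
    return controls

print(kalman_bang_bang([0.9, 0.7, -0.2, -0.5]))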
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, a large number of small residential loads, such as air conditioners, dishwashers, and electric vehicles, are expected to participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods of high renewable generation. The algorithm is model predictive in nature: at every time step, it minimizes the expected variance-to-go under updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands in the average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
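A minimal sketch of one model predictive step under simplifying assumptions (a single aggregate deferrable energy budget and no per-load rate limits): the remaining deferrable energy is scheduled to flatten, i.e. valley-fill, the forecast net load, and only the first slot's decision is applied before the forecast is updated.

```python
def mpc_step(net_load_forecast, energy_remaining):
    """Spread the remaining deferrable energy over the horizon so that the
    expected aggregate load is as flat as possible (valley filling), which
    minimizes its variance for a fixed energy budget."""
    lo = min(net_load_forecast)
    hi = max(net_load_forecast) + energy_remaining
    for _ in range(60):  # bisection on the water-filling level
        level = 0.5 * (lo + hi)
        filled = sum(max(level - v, 0.0) for v in net_load_forecast)
        lo, hi = (level, hi) if filled < energy_remaining else (lo, level)
    return [max(level - v, 0.0) for v in net_load_forecast]

# Schedule 5 units of deferrable energy against a fluctuating net load:
plan = mpc_step([3.0, 1.0, 0.5, 2.0], energy_remaining=5.0)
print(plan, sum(plan))  # consumption fills the valleys; total ~= 5
```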
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector of total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new-car control program and with the degree of stationary-source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The relationships between control cost and emission level, and between air quality and emission level, are combined in a graphical solution of the complete model to find the cost of reaching various air quality levels. The best air quality levels attainable with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program), at an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new-car control program for Los Angeles County motor vehicles in 1975).
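The least-cost computation is an ordinary linear program. The sketch below uses hypothetical control measures, costs, and reductions (not the paper's 1975 inventory) to show the structure: minimize annualized cost subject to reaching target RHC and NOx emission levels.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical add-on control measures; x[i] in [0, 1] is the adoption fraction.
costs = np.array([30.0, 45.0, 12.0])      # $M/yr per fully adopted measure
rhc_cut = np.array([150.0, 220.0, 40.0])  # tons/day RHC removed per measure
nox_cut = np.array([60.0, 10.0, 90.0])    # tons/day NOx removed per measure

# Least cost of cutting the base 1975 levels (670 RHC, 790 NOx) to a target.
target_rhc, target_nox = 400.0, 700.0
res = linprog(
    c=costs,
    A_ub=np.vstack([-rhc_cut, -nox_cut]),  # total reductions >= required cuts
    b_ub=[-(670.0 - target_rhc), -(790.0 - target_nox)],
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)  # adoption fractions and minimum annualized cost
```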
Abstract:
Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level, as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices, even at typical dilution refrigerator temperatures of ~10 mK.
Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates within the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones establish and couple the mechanics to a squeezed reservoir, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory of Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise and without assuming perfect alignment of the pump frequencies.
In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action-evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 +/- 0.06 times the quantum zero-point motion.
In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 +/- 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.
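For orientation, quadrature variances quoted relative to the zero-point level convert directly to a squeezing figure in decibels; the 0.80 result corresponds to roughly 1 dB below the zero-point level.

```python
import math

# Measured quadrature variances in units of the zero-point level.
for v in (1.09, 0.80):
    print(f"{v} -> {10 * math.log10(v):+.2f} dB relative to zero point")
```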
Abstract:
Seasonal trawling was conducted randomly in coastal waters (depths of 4.6–17 m) from St. Augustine, Florida (29.9°N), to Winyah Bay, South Carolina (33.1°N), during 2000–03, 2008–09, and 2011 to assess annual trends in the relative abundance of sea turtles. A total of 1262 loggerhead sea turtles (Caretta caretta) were captured in 23% (951) of 4207 sampling events. Capture rates (overall and among prevalent 5-cm size classes) were analyzed with a generalized linear model with a log link function for the 4097 events that had complete observations for all 25 model parameters. Final models explained 6.6% (70.1–75.0 cm minimum straight-line carapace length [SCLmin]) to 14.9% (75.1–80.0 cm SCLmin) of the deviance in the data set. Sampling year, geographic subregion, and distance from shore were retained as significant terms in all final models, and these terms collectively accounted for 6.2% of overall model deviance (range: 4.5–11.7% among 5-cm size classes). We retained 18 parameters only in a subset of final models: 4 as exclusively significant terms, 5 as a mixture of significant or nonsignificant terms, and 9 as exclusively nonsignificant terms. Four parameters were dropped completely from all final models. The generalized linear model proved appropriate for monitoring trends in this data set, which was laden with zero catch values and compiled for a globally protected species. Because we could not account for much of the model deviance, metrics other than those examined in our study may better explain catch variability; once elucidated, their inclusion in the generalized linear model should improve model fits.
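The percent-of-deviance-explained figures quoted above follow from the fitted and null deviances of the GLM; a one-line sketch assuming a statsmodels-style results object:

```python
def percent_deviance_explained(fit):
    """Share of the null deviance explained by a fitted GLM (assumes an object
    with statsmodels-style .deviance and .null_deviance attributes)."""
    return 100.0 * (1.0 - fit.deviance / fit.null_deviance)
```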
Abstract:
We report a Monte Carlo representation of the long-term interannual variability of monthly snowfall on a detailed (1 km) grid of points throughout the Southwest. An extension of the local climate model of the southwestern United States (Stamm and Craig 1992) provides spatially based estimates of the mean and variance of monthly temperature and precipitation. The mean is the expected value from a canonical regression using independent variables that represent controls on climate in this area, including orography. The variance is computed as the standard error of the prediction and provides site-specific measures of (1) natural sources of variation and (2) errors due to limitations of the data and the poor distribution of climate stations. Simulation of monthly temperature and precipitation over a sequence of years is achieved by drawing from a bivariate normal distribution. The conditional expectation of precipitation, given temperature in each month, is the basis of a numerical integration of the normal probability distribution of log precipitation below a threshold temperature (3°C) to determine snowfall as a percentage of total precipitation. Snowfall predictions are tested at stations for which long-term records are available. At Donner Memorial State Park (elevation 1811 meters), a 34-year simulation, matching the length of the instrumental record, is within 15 percent of the observed mean annual snowfall. We also compute the resulting snowpack using a variation of the model of Martinec et al. (1983). This allows additional tests by examining spatial patterns of predicted snowfall and snowpack and their hydrologic implications.
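The snowfall-fraction step can be mimicked by direct Monte Carlo rather than numerical integration: draw monthly (temperature, log precipitation) pairs from the bivariate normal and take the share of precipitation that falls below the 3°C threshold. All parameter values below are illustrative.

```python
import numpy as np

def snow_fraction(mu_t, sd_t, mu_logp, sd_logp, rho, t_thresh=3.0,
                  n=100_000, seed=0):
    """Monte Carlo snowfall fraction: sample monthly (temperature, log
    precipitation) from a bivariate normal and return the precipitation-
    weighted share of draws colder than the threshold."""
    rng = np.random.default_rng(seed)
    cov = [[sd_t ** 2, rho * sd_t * sd_logp],
           [rho * sd_t * sd_logp, sd_logp ** 2]]
    t, logp = rng.multivariate_normal([mu_t, mu_logp], cov, size=n).T
    p = np.exp(logp)
    return p[t < t_thresh].sum() / p.sum()

# A cold mountain-site month (illustrative parameters only):
print(snow_fraction(mu_t=1.0, sd_t=3.0, mu_logp=4.0, sd_logp=0.6, rho=-0.3))
```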
Abstract:
A receding horizon steering controller is presented, capable of pushing an oversteering nonlinear vehicle model to its handling limit while travelling at constant forward speed. The controller is able to optimise the vehicle path, using a computationally efficient and robust technique, so that the vehicle progression along a track is maximised as a function of time. The resultant method forms part of the solution to the motor racing objective of minimising lap time. © 2011 AACC American Automatic Control Council.
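A much-simplified receding-horizon sketch (a kinematic single-track model at constant speed on a straight track, rather than the paper's oversteering vehicle model), showing the basic structure: optimize a bounded steering sequence over the horizon, apply the first input, and repeat at the next time step.

```python
import numpy as np
from scipy.optimize import minimize

def receding_horizon_steer(state, horizon=10, dt=0.1, v=20.0,
                           delta_max=0.4, wheelbase=2.6):
    """One receding-horizon step: choose bounded steering inputs maximizing
    progress along the track (the x axis) with a lateral-error penalty."""
    def rollout_cost(deltas):
        x, y, psi = state
        for d in deltas:  # kinematic single-track model at constant speed
            x += v * np.cos(psi) * dt
            y += v * np.sin(psi) * dt
            psi += v / wheelbase * np.tan(d) * dt
        return -x + 5.0 * y ** 2  # maximize progress, stay near the centerline

    res = minimize(rollout_cost, np.zeros(horizon),
                   bounds=[(-delta_max, delta_max)] * horizon)
    return res.x[0]  # apply only the first steering input

print(receding_horizon_steer(state=(0.0, 1.0, 0.2)))
```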