70 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra
Abstract:
This paper presents several new families of cumulant-based linear equations with respect to the inverse filter coefficients for deconvolution (equalisation) and identification of nonminimum phase systems. Based on noncausal autoregressive (AR) modeling of the output signals and three theorems, these equations are derived for the cases of 2nd-, 3rd-, and 4th-order cumulants, respectively, and can be expressed in identical or similar forms. The algorithms constructed from these equations are simpler in form, but can offer more accurate results than existing methods. Since the inverse filter coefficients are simply the solution of a set of linear equations, their uniqueness can normally be guaranteed. Simulations are presented for the cases of skewed series, unskewed continuous series and unskewed discrete series. The results of these simulations confirm the feasibility and efficiency of the algorithms.
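As a hedged illustration of the abstract's central point (the inverse filter coefficients being simply the solution of a set of linear equations), the final step reduces to a single linear solve. The matrix and right-hand side below are toy placeholders, not the paper's cumulant-based constructions:

```python
import numpy as np

def solve_inverse_filter(C, d):
    """Solve C w = d for the inverse filter coefficients w.

    C : (n, n) coefficient matrix, which the paper builds from estimated
        cumulants (here a hypothetical stand-in)
    d : (n,) right-hand side vector
    A unique solution exists whenever C is nonsingular, which is the sense
    in which uniqueness "can normally be guaranteed".
    """
    return np.linalg.solve(C, d)

# Toy example with a known answer: C = 2I, so w = d / 2.
C = 2.0 * np.eye(3)
d = np.array([2.0, 4.0, 6.0])
w = solve_inverse_filter(C, d)
print(w)  # [1. 2. 3.]
```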
Abstract:
This paper addresses the problem of tracking line segments in on-line handwriting obtained through a digitizer tablet. The approach is based on Kalman filtering to model linear portions of on-line handwriting, particularly handwritten numerals, and to detect abrupt changes in writing direction underlying a model change. It uses a Kalman filter framework constrained by a normalized line equation, where quadratic terms are linearized through a first-order Taylor expansion. The modeling is then carried out under the assumption that the state is deterministic and time-invariant, while the detection relies on a double thresholding mechanism that tests for a violation of this assumption. The first threshold is based on the kinetics of the trace; the second takes into account the jump in angle between the previously observed direction of the trace and its current direction. The proposed method enables real-time processing. To illustrate the methodology, some results obtained from handwritten numerals are presented.
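A minimal sketch, under the abstract's stated assumptions (deterministic, time-invariant state; no process noise), of a Kalman-style measurement update for a line model. The parameterization a*x + b*y = 1 and all numbers are illustrative choices, not the paper's exact formulation; the innovation magnitude is the kind of quantity a double-thresholding detector would monitor:

```python
import numpy as np

def line_update(s, P, point, r=1e-2):
    """One Kalman measurement update for the line a*x + b*y = 1.

    s : state (a, b); P : state covariance; point : observed pen sample;
    r : measurement noise variance (illustrative value).
    With zero process noise this reduces to recursive least squares.
    """
    x, y = point
    H = np.array([[x, y]])                 # measurement row (linear in s)
    S = (H @ P @ H.T).item() + r           # innovation variance
    K = (P @ H.T) / S                      # Kalman gain
    innov = 1.0 - (H @ s).item()           # residual of the line equation
    s = s + K[:, 0] * innov
    P = P - K @ H @ P
    return s, P, innov

s, P = np.zeros(2), 1e3 * np.eye(2)        # diffuse prior on (a, b)
for pt in [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5), (0.25, 0.75)]:  # on x + y = 1
    s, P, innov = line_update(s, P, pt)
print(np.round(s, 3))  # ≈ [1. 1.]
```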
Abstract:
Using the integral manifold approach, a composite control—the sum of a fast control and a slow control—is derived for a particular class of non-linear singularly perturbed systems. The fast control is designed completely at the outset, thus ensuring the stability of the fast transients of the system and, furthermore, the existence of the integral manifold. A new method is then presented which simplifies the derivation of a slow control such that the singularly perturbed system meets a preselected design objective to within some specified order of accuracy. Though this approach is, by its very nature, ad hoc, the underlying procedure is easily extended to more general classes of singularly perturbed systems by way of three examples.
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
Abstract:
An algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimization and Parameter Estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimization procedure. A version of the algorithm with a linear-quadratic model-based problem, implemented in the C++ programming language, is developed and applied to illustrative simulation examples. An analysis of the optimality and convergence properties of the algorithm is also presented.
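The linear-quadratic model-based subproblem at the heart of such an iteration can be solved by a standard backward Riccati recursion. The sketch below shows only that generic LQ step on a toy double-integrator model, not the full DISOPE algorithm with its parameter-estimation loop:

```python
import numpy as np

def lq_riccati(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon discrete LQ problem
    min sum x'Qx + u'Ru  s.t.  x_{k+1} = A x_k + B u_k.
    Returns time-ordered feedback gains K_k (u_k = -K_k x_k) and the
    final cost-to-go matrix P."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P

# Toy double-integrator model (illustrative, not from the paper).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
gains, P = lq_riccati(A, B, Q, R, 50)
# With a long horizon, the first gain is near the steady-state LQR gain,
# so the closed loop A - B K is stable.
```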
Abstract:
The effect of multiple haptic distractors on target selection performance was examined in terms of times to select the target and the associated cursor movement patterns. Two experiments examined: a) the effect of multiple haptic distractors around a single target and b) the effect of inter-item spacing in a linear selection task. It was found that certain target-distractor arrangements hindered performance and that this could be associated with specific, explanatory cursor patterns. In particular, the presence of distractors along the task axis in front of the target was detrimental to performance, and there was evidence to suggest that this could sometimes be associated with consequent cursor oscillation between distractors adjacent to a desired target. A further experiment examined the effect of target-distractor spacing in two orientations on a user's ability to select a target when caught in the gravity well of a distractor. Times for movements in the vertical direction were found to be faster than those in the horizontal direction. In addition, although times for the vertical direction appeared equivalent across five target-distractor distances, times for the horizontal direction exhibited peaks at certain distances. The implications of these results for the design and implementation of haptically enhanced interfaces using the force feedback mouse are discussed.
Abstract:
The problem of calculating the probability of error in a DS/SSMA system has been extensively studied for more than two decades. When random sequences are employed, some conditioning must be done before the application of the central limit theorem is attempted, leading to a Gaussian distribution. The authors seek to characterise the multiple access interference as a random walk with a random number of steps, for random and deterministic sequences. Using results from random-walk theory, they model the interference as a K-distributed random variable and use it to calculate the probability of error in the form of a series, for a DS/SSMA system with a coherent correlation receiver and BPSK modulation under Gaussian noise. The asymptotic properties of the proposed distribution agree with other analyses. This is, to the best of the authors' knowledge, the first attempt to propose a non-Gaussian distribution for the interference. The modelling can be extended to consider multipath fading and general modulation.
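One common way to draw K-distributed samples is the compound representation (a gamma-distributed local power modulating Rayleigh speckle). This sketch is an assumption-laden illustration of the distribution itself, not the authors' random-walk derivation; the shape parameter and normalization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def k_distributed_samples(nu, n, rng):
    """Draw K-distributed amplitudes via the compound representation:
    a gamma-distributed texture (mean 1) modulating Rayleigh speckle
    (unit mean-square), so E[A^2] = 1 by construction."""
    texture = rng.gamma(shape=nu, scale=1.0 / nu, size=n)
    speckle = rng.rayleigh(scale=np.sqrt(0.5), size=n)
    return np.sqrt(texture) * speckle

samples = k_distributed_samples(nu=2.0, n=200_000, rng=rng)
print(np.mean(samples**2))  # ≈ 1 (unit average power by construction)
```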
Abstract:
The idea of incorporating multiple models of linear rheology into a superensemble, to forge a consensus forecast from the individual model predictions, is investigated. The relative importance of the individual models in the so-called multimodel superensemble (MMSE) was inferred by evaluating their performance on a set of experimental training data, via nonlinear regression. The predictive ability of the MMSE model was tested by comparing its predictions on test data that were similar (in-sample) and dissimilar (out-of-sample) to the training data used in the calibration. For the in-sample forecasts, we found that the MMSE model easily outperformed the best constituent model. The presence of good individual models greatly enhanced the MMSE forecast, while the presence of some bad models in the superensemble also improved the MMSE forecast modestly. While the performance of the MMSE model on the out-of-sample test data was not as spectacular, it demonstrated the robustness of this approach.
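A hedged sketch of the superensemble idea on synthetic data, using ordinary linear least squares for the weights (the paper uses nonlinear regression); the two constituent "models" and their noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(size=100)                     # synthetic training target
model_a = truth + 0.1 * rng.normal(size=100)     # a good constituent model
model_b = 0.5 * truth + rng.normal(size=100)     # a poor constituent model

# Superensemble weights: regress the observations on the model predictions.
X = np.column_stack([model_a, model_b])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
forecast = X @ w

# On the training data, the consensus forecast cannot do worse than the
# best single model, since each model alone is in the feasible set.
err_mmse = np.mean((forecast - truth) ** 2)
err_best = min(np.mean((m - truth) ** 2) for m in (model_a, model_b))
print(err_mmse <= err_best)  # True
```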
Abstract:
Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional for killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by this random variable for a large class of Lévy processes, such as the absolute continuity of its distribution and the smoothness, boundedness or complete monotonicity of its density. This type of result is then used to derive similar properties for the law of the maxima and first passage times of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires different devices from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al. together with some extensions of Wiener-Hopf techniques to killed Lévy processes. The factorizations developed here also allow for further applications, which we only indicate here.
Abstract:
Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility.
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., "Newtonian cooling") is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres; this nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
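The local linear relaxation described above, Q' ≈ -α T' for departures from the climatological mean, can be diagnosed from model output by regressing heating-rate anomalies on temperature anomalies. A minimal single-point sketch with synthetic data and an illustrative damping rate:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true = 0.15                    # damping rate in day^-1 (illustrative)
T_anom = rng.normal(scale=5.0, size=500)                          # K
Q_anom = -alpha_true * T_anom + rng.normal(scale=0.1, size=500)   # K/day

# One-parameter least-squares fit of Q' = -alpha * T' (no intercept,
# since both series are anomalies about the climatological mean).
alpha_hat = -np.sum(Q_anom * T_anom) / np.sum(T_anom**2)
print(round(alpha_hat, 2))  # ≈ 0.15
```

In the study itself this fit would be repeated at each height, latitude, and season, since a single global damping rate is exactly what the abstract argues against.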
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This can damage some of the key properties of space-time codes and lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. Firstly, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. We then propose a new design method to yield delay-tolerant LDCs that reach the optimal Jensen's upper bound on ergodic capacity as well as minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum, and is further improved by simulated-annealing-type optimization to increase the likelihood of reaching the global optimum. The method allows for a flexible number of nodes, receive antennas, and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
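The simulated-annealing refinement mentioned above can be sketched generically: occasionally accept uphill moves with a temperature-controlled probability so the search can escape local optima. The cost function below is a toy quadratic stand-in, not the actual code-design objective:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated-annealing loop (the optimization strategy layered
    on top of a local search; the real cost would be the code-design
    objective, e.g. average pairwise error probability)."""
    rng = random.Random(0)             # fixed seed for reproducibility
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        # Always accept improvements; accept uphill moves with
        # probability exp(-(cy - c) / t), which shrinks as t cools.
        if cy < c or rng.random() < math.exp((c - cy) / max(t, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy usage: minimize a 1-D quadratic with a local perturbation neighborhood.
best_x, best_c = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    x0=0.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
)
```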
Abstract:
A continuous tropospheric and stratospheric vertically resolved ozone time series, from 1850 to 2099, has been generated to be used as forcing in global climate models that do not include interactive chemistry. A multiple linear regression analysis of SAGE I+II satellite observations and polar ozonesonde measurements is used for the stratospheric zonal mean dataset during the well-observed period from 1979 to 2009. In addition to terms describing the mean annual cycle, the regression includes terms representing equivalent effective stratospheric chlorine (EESC) and the 11-yr solar cycle variability. The EESC regression fit coefficients, together with pre-1979 EESC values, are used to extrapolate the stratospheric ozone time series backward to 1850. While a similar procedure could be used to extrapolate into the future, coupled chemistry climate model (CCM) simulations indicate that future stratospheric ozone abundances are likely to be significantly affected by climate change, and capturing such effects through a regression model approach is not feasible. Therefore, the stratospheric ozone dataset is extended into the future (merged in 2009) with multimodel mean projections from 13 CCMs that performed a simulation until 2099 under the SRES (Special Report on Emission Scenarios) A1B greenhouse gas scenario and the A1 adjusted halogen scenario in the second round of the Chemistry-Climate Model Validation (CCMVal-2) Activity. The stratospheric zonal mean ozone time series is merged with a three-dimensional tropospheric dataset extracted from simulations of the past by two CCMs (CAM3.5 and GISS-PUCCINI) and of the future by one CCM (CAM3.5). The future tropospheric ozone time series continues the historical CAM3.5 simulation until 2099 following the four different Representative Concentration Pathways (RCPs).
Generally good agreement is found between the historical segment of the ozone database and satellite observations, although it should be noted that total column ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m−2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce a RF of −0.08 W m−2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of −0.05 W m−2, but which is within the stated range of −0.15 to +0.05 W m−2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics. This shift is due to increases in polar stratospheric ozone, but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century.
Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.4 and 0.1 W m−2 by 2100. The ozone dataset described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in netCDF Climate and Forecast (CF) Metadata Convention at the PCMDI website (http://cmip-pcmdi.llnl.gov/).
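A hedged sketch of the regression form used for the stratospheric dataset: annual-cycle harmonics plus EESC and solar-cycle terms, fitted by least squares. The EESC and solar series below are synthetic placeholders (not the observational records), and the coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_months = 372                        # monthly samples, 1979-2009
t = np.arange(n_months)
eesc = 0.5 + 0.002 * t                # placeholder EESC proxy (rising)
solar = np.sin(2 * np.pi * t / 132)   # placeholder 11-yr cycle (132 months)

# Design matrix: mean, annual-cycle harmonics, EESC term, solar term.
X = np.column_stack([
    np.ones(n_months),
    np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12),
    eesc,
    solar,
])
true_beta = np.array([5.0, 0.3, 0.1, -1.2, 0.05])   # synthetic "truth"
ozone = X @ true_beta + 0.01 * rng.normal(size=n_months)

beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)
# beta[3] is the EESC sensitivity that, with pre-1979 EESC values,
# would drive the backward extrapolation to 1850.
print(np.round(beta[3], 1))  # ≈ -1.2
```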
Abstract:
Low variability of crop production from year to year is desirable for many reasons, including reduced income risk and stability of supplies. Therefore, it is important to understand the nature of yield variability, whether it is changing through time, and how it varies between crops and regions. Previous studies have shown that national crop yield variability has changed in the past, with the direction and magnitude dependent on crop type and location. Whilst such studies acknowledge the importance of climate variability in determining yield variability, it has been assumed that its magnitude and its effect on crop production have not changed through time and, hence, that changes to yield variability have been due to non-climatic factors. We address this assumption by jointly examining yield and climate variability for three major crops (rice, wheat and maize) over the past 50 years. National yield time series and growing season temperature and precipitation were de-trended and related using multiple linear regression. Yield variability changed significantly in half of the crop–country combinations examined. For several crop–country combinations, changes in yield variability were related to changes in climate variability.
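A minimal sketch of the de-trend-then-regress step described above, on synthetic data (the yield-climate sensitivities are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(50)                 # a 50-year national time series

def detrend(x):
    """Remove a linear trend fitted by ordinary least squares."""
    coef = np.polyfit(years, x, 1)
    return x - np.polyval(coef, years)

# Synthetic growing-season climate and yield series (illustrative numbers).
temp = 14 + 0.02 * years + rng.normal(scale=0.5, size=50)     # deg C
precip = 600 + rng.normal(scale=40.0, size=50)                # mm
yld = (2 + 0.05 * years                                       # technology trend
       - 0.3 * detrend(temp) + 0.002 * detrend(precip)        # climate signal
       + rng.normal(scale=0.05, size=50))                     # other variability

# Relate de-trended yield to de-trended climate by multiple linear regression.
X = np.column_stack([np.ones(50), detrend(temp), detrend(precip)])
beta, *_ = np.linalg.lstsq(X, detrend(yld), rcond=None)
print(np.round(beta[1], 1))  # temperature sensitivity, ≈ -0.3
```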
Abstract:
Discrete Fourier transform spread OFDM (DFTS-OFDM) based single-carrier frequency division multiple access (SC-FDMA) has been widely adopted due to the lower peak-to-average power ratio (PAPR) of its transmit signals compared with OFDM. However, offset modulation, which has lower PAPR than general modulation, cannot be directly applied to existing SC-FDMA. Moreover, when pulse-shaping filters are employed to further reduce the envelope fluctuation of SC-FDMA transmit signals, the spectral efficiency degrades. To overcome these limitations of conventional SC-FDMA, this paper investigates, for the first time, cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) based SC-FDMA transmission with adjustable user bandwidth and space-time coding. Firstly, we propose CP-OQAM-OFDM transmission with unequally spaced subbands. We then apply it to SC-FDMA transmission and propose an SC-FDMA scheme with the following features: a) the transmit signal of each user is offset-modulated single-carrier with frequency-domain pulse-shaping; b) the bandwidth of each user is adjustable; c) the spectral efficiency does not decrease with increasing roll-off factors. To combat both inter-symbol interference and multiple access interference in frequency-selective fading channels, a joint linear minimum mean square error frequency-domain equalization using a priori information, with low complexity, is developed. Subsequently, we construct space-time codes for the proposed SC-FDMA. Simulation results confirm that the proposed CP-OQAM-OFDM scheme is effective yet of low complexity.
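A hedged sketch of the core of such a receiver: a one-tap MMSE frequency-domain equalizer for a cyclic-prefixed single-carrier block, without the a priori information or OQAM structure of the paper's scheme. Channel taps, block length, and SNR are toy values:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64
x = rng.choice([-1.0, 1.0], size=N)       # BPSK symbols in one block
h = np.array([1.0, 0.5, 0.25])            # toy frequency-selective channel
H = np.fft.fft(h, N)                      # channel frequency response

# With a cyclic prefix, the channel acts as per-subcarrier multiplication.
snr = 100.0
Xf = np.fft.fft(x)
noise = np.sqrt(N / (2 * snr)) * (rng.normal(size=N) + 1j * rng.normal(size=N))
Yf = H * Xf + noise

# One-tap MMSE weights: conj(H) / (|H|^2 + 1/SNR), applied per subcarrier,
# then transformed back to the time domain for symbol detection.
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
x_hat = np.real(np.fft.ifft(W * Yf))
print(np.mean(np.sign(x_hat) == x))  # fraction detected correctly, ≈ 1.0 here
```

The MMSE regularization term 1/SNR is what distinguishes this from zero-forcing: it avoids amplifying noise at subcarriers where |H| is small.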