931 results for Discrete polynomial transforms
Abstract:
The anisotropic norm of a linear discrete time invariant system measures the sensitivity of the system output to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of the input disturbances is not exactly known. This paper develops a method for the numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
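As a rough illustration of the two norms that bracket the anisotropic norm, the sketch below computes the H-2 norm (via a discrete Lyapunov equation for the controllability Gramian) and the H-infinity norm (via a frequency sweep over the unit circle) for a toy first-order system. The system matrices are invented for illustration and are not taken from the paper; this is not the paper's anisotropic-norm algorithm, only the two bracketing norms.

```python
# Illustrative first-order discrete-time system G(z) = C (zI - A)^{-1} B + D.
# The matrices are made-up toy values, not from the paper.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

# H-2 norm via the controllability Gramian P solving P = A P A^T + B B^T
P = solve_discrete_lyapunov(A, B @ B.T)
h2 = np.sqrt(np.trace(C @ P @ C.T + D @ D.T))

# H-infinity norm approximated as the peak largest singular value of the
# frequency response over a dense grid on the unit circle
freqs = np.linspace(0.0, np.pi, 10001)
hinf = max(
    np.linalg.norm(C @ np.linalg.inv(np.exp(1j * w) * np.eye(1) - A) @ B + D, 2)
    for w in freqs
)
print(round(h2, 4), round(hinf, 4))
```

For this system G(z) = 1/(z - 0.5), so the peak gain (attained at z = 1) is 2, while the H-2 norm is 2/sqrt(3); the anisotropic norm would interpolate between such values as the mean anisotropy bound varies.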
Abstract:
The step size determines the accuracy of a discrete element simulation. The position and velocity updates use a pre-calculated table, so step-size control cannot rely on the usual integration formulas. A step-size control scheme for use with the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable-time-step method automatically chooses a suitable time step for each particle at each step according to the conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy (variable versus fixed step size) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
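The big-step/two-small-steps comparison described above can be sketched as follows. The simple damped-spring force law, the update rule, and the tolerance are illustrative assumptions, not the paper's table-driven scheme; the point is only the step-doubling error estimate driving the step-size choice.

```python
# Sketch: one big step vs. two half steps; their difference estimates the
# local error and drives the step-size choice for the next step.

def accel(x, v):
    # Toy contact-like force: linear spring plus viscous damping (assumed)
    return -10.0 * x - 0.5 * v

def step(x, v, dt):
    # One explicit (semi-implicit Euler) position/velocity update
    v = v + accel(x, v) * dt
    x = x + v * dt
    return x, v

def adaptive_step(x, v, dt, tol=1e-5):
    xb, vb = step(x, v, dt)                  # one big step
    xs, vs = step(*step(x, v, dt / 2), dt / 2)  # two half steps
    err = abs(xb - xs)                       # step-doubling error estimate
    if err > tol:
        dt *= 0.5        # too inaccurate: shrink the step for next time
    elif err < tol / 10:
        dt *= 2.0        # comfortably accurate: grow the step
    return xs, vs, dt    # accept the more accurate two-half-step result

x, v, dt = 1.0, 0.0, 0.05
for _ in range(200):
    x, v, dt = adaptive_step(x, v, dt)
```

A production scheme would typically reject a step whose error exceeds the tolerance and retry it; the sketch simply adapts the step size for the following step, which is enough to show the mechanism.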
Abstract:
This paper proposes an alternative geometric framework for analysing the inter-relationship between domestic saving, productivity and income determination in discrete time. The framework provides a means of understanding how low saving economies like the United States sustained high growth rates in the 1990s whereas high saving Japan did not. It also illustrates how the causality between saving and economic activity runs both ways and that discrete changes in national output and income depend on both current and previous accumulation behaviour. The open economy analogue reveals how international capital movements can create external account imbalances that enhance income growth for both borrower and lender economies. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
We give conditions on f, involving pairs of discrete lower and discrete upper solutions, which lead to the existence of at least three solutions of the discrete two-point boundary value problem y_{k+1} - 2y_k + y_{k-1} + f(k, y_k, v_k) = 0 for k = 1, ..., n-1, with y_0 = 0 = y_n, where f is continuous and v_k = y_k - y_{k-1} for k = 1, ..., n. In the special case f(k, t, p) = f(t) ≥ 0, we give growth conditions on f and apply our general result to show the existence of three positive solutions. We give an example showing that this latter result is sharp. Our results extend those of Avery and Peterson and are in the spirit of our results for the continuous analogue. (C) 2002 Elsevier Science Ltd. All rights reserved.
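To make the discrete boundary value problem concrete, the sketch below solves y_{k+1} - 2y_k + y_{k-1} + f(k, y_k, v_k) = 0 with y_0 = 0 = y_n by Newton iteration on the interior unknowns. The particular choice f = 1 (constant) is an illustrative assumption, chosen because the exact solution y_k = k(n-k)/2 is known; the paper treats general continuous f.

```python
# Newton solve of the discrete two-point BVP
#   y_{k+1} - 2 y_k + y_{k-1} + f(k, y_k, v_k) = 0,  y_0 = 0 = y_n
import numpy as np

def f(k, y, v):
    return 1.0  # illustrative constant nonlinearity (assumption)

def residual(y_full):
    # Residual of the difference equation at the interior points k = 1..n-1
    n = len(y_full) - 1
    r = np.empty(n - 1)
    for k in range(1, n):
        v = y_full[k] - y_full[k - 1]
        r[k - 1] = y_full[k + 1] - 2 * y_full[k] + y_full[k - 1] + f(k, y_full[k], v)
    return r

def solve(n, iters=20, eps=1e-8):
    y = np.zeros(n + 1)  # boundary conditions y_0 = y_n = 0 built in
    for _ in range(iters):
        r = residual(y)
        # Finite-difference Jacobian w.r.t. the interior unknowns y_1..y_{n-1}
        J = np.empty((n - 1, n - 1))
        for j in range(1, n):
            yp = y.copy()
            yp[j] += eps
            J[:, j - 1] = (residual(yp) - r) / eps
        y[1:n] -= np.linalg.solve(J, r)
        if np.max(np.abs(residual(y))) < 1e-12:
            break
    return y

y = solve(8)  # converges to y_k = k(n-k)/2, a positive solution
```

For f ≥ 0 the computed solution is nonnegative, consistent with the positive-solution results the abstract describes, though of course a single numerical run illustrates rather than proves multiplicity.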
Abstract:
We present an efficient and robust method for calculating state-to-state reaction probabilities utilising the Lanczos algorithm for a real symmetric Hamiltonian. The method recasts the time-independent Artificial Boundary Inhomogeneity technique recently introduced by Jang and Light (J. Chem. Phys. 102 (1995) 3262) into a tridiagonal (Lanczos) representation. The calculation proceeds at the cost of a single Lanczos propagation for each boundary inhomogeneity function and yields all state-to-state probabilities (elastic, inelastic and reactive) over an arbitrary energy range. The method is applied to the collinear H + H-2 reaction and the results demonstrate it is accurate and efficient in comparison with previous calculations. (C) 2002 Elsevier Science B.V. All rights reserved.
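The Lanczos recursion underlying the method can be sketched as follows: for a real symmetric H and a start vector (here standing in for a boundary inhomogeneity function), a three-term recurrence builds an orthonormal basis in which H is tridiagonal. The random 100x100 matrix is an illustrative stand-in for a real Hamiltonian.

```python
# Lanczos tridiagonalisation of a real symmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
H = (M + M.T) / 2  # real symmetric "Hamiltonian" (illustrative)

def lanczos(H, q0, m):
    n = len(q0)
    Q = np.zeros((n, m))        # orthonormal Lanczos vectors
    alpha = np.zeros(m)         # diagonal of the tridiagonal T
    beta = np.zeros(m - 1)      # off-diagonal of T
    q = q0 / np.linalg.norm(q0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = H @ q - b * q_prev              # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalisation
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return Q, alpha, beta

Q, alpha, beta = lanczos(H, rng.standard_normal(100), 40)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
# Extreme eigenvalues of the small tridiagonal T converge rapidly to those of H.
```

In the paper's setting the propagation is run once per boundary inhomogeneity function, and all state-to-state quantities over an energy range are then extracted from the tridiagonal representation; the sketch shows only the tridiagonalisation step itself.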
Abstract:
A discrete protocol for the teleportation of superpositions of coherent states of optical-cavity fields is presented. Displacement and parity operators are used unconventionally in a Bell-like measurement of the field states.
Abstract:
We investigate difference equations which arise as discrete approximations to two-point boundary value problems for systems of second-order ordinary differential equations. We formulate conditions under which all solutions to the discrete problem satisfy certain a priori bounds which are independent of the step size. As a result, the nonexistence of spurious solutions is guaranteed. Some existence and convergence theorems for solutions to the discrete problem are also presented. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Crushing and grinding are the most energy-intensive parts of the mineral recovery process. A major part of rock size reduction occurs in tumbling mills. Empirical models for the power draw of tumbling mills do not consider the effect of lifters. Discrete element modelling was used to investigate the effect of lifter condition on the power draw of a tumbling mill. Results obtained with the PFC3D code show that lifter condition has a significant influence on the power draw and on the mode of energy consumption in the mill. Relatively high lifters consume less power than low lifters under otherwise identical conditions. The fraction of the power consumed as friction increases as the height of the lifters decreases, so less power is used for the high-intensity comminution caused by impacts. The fraction of the power used to overcome frictional resistance is determined by the material's coefficient of friction. Based on the modelled results, it appears that the effective coefficient of friction for the in situ mill is close to 0.1. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
The PFC3D (particle flow code) that models the movement and interaction of particles by the DEM techniques was employed to simulate the particle movement and to calculate the velocity and energy distribution of collision in two types of impact crusher: the Canica vertical shaft crusher and the BJD horizontal shaft swing hammer mill. The distribution of collision energies was then converted into a product size distribution for a particular ore type using JKMRC impact breakage test data. Experimental data of the Canica VSI crusher treating quarry and the BJD hammer mill treating coal were used to verify the DEM simulation results. Upon the DEM procedures being validated, a detailed simulation study was conducted to investigate the effects of the machine design and operational conditions on velocity and energy distributions of collision inside the milling chamber and on the particle breakage behaviour. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
The power required to operate large mills is typically 5-10 MW. Hence, optimisation of power consumption will have a significant impact on overall economic performance and environmental impact. Power draw modelling results using the discrete element code PFC3D have been compared with results derived from the widely used empirical model of Morrell. This is achieved by calculating the power draw for a range of operating conditions at constant mill size and fill factor using the two modelling approaches. The discrete element modelling results show that, apart from density, selection of the appropriate material damping ratio is critical for the accuracy of the modelled mill power draw. The relative insensitivity of the power draw to the material stiffness allows selection of moderate stiffness values, which results in acceptable computation time. The results obtained confirm that modelling the power draw for a vertical slice of the mill, of thickness 20% of the mill length, is a reliable substitute for modelling the full mill. The power draw predictions from PFC3D show good agreement with those obtained using the empirical model. Due to its inherent flexibility, power draw modelling using PFC3D appears to be a viable and attractive alternative to empirical models where the necessary code and computer power are available.
Abstract:
Difference equations which may arise as discrete approximations to two-point boundary value problems for systems of second-order, ordinary differential equations are investigated and conditions are formulated under which solutions to the discrete problem are unique. Some existence, uniqueness implies existence, and convergence theorems for solutions to the discrete problem are also presented.
Abstract:
The outer-sphere redox behaviour of a series of [LnCoIII-NC-FeII(CN)5]− complexes (Ln = n-membered pentadentate aza-macrocycle) has been studied as a function of pH and oxidising agent. All the dinuclear complexes show a double protonation process at pH ≈ 2 that produces a shift in their UV/Vis spectra. Oxidation of the different non-protonated and diprotonated complexes has been carried out with peroxodisulfate, and of the non-protonated complexes also with trisoxalatocobaltate(III). The results are in agreement with predictions from Marcus theory. Oxidation with [Fe(phen)3]3+ and [IrCl6]2− is too fast to be measured, although for the latter the transient observation of the process has been achieved at pH 0. The kinetics of the outer-sphere redox process with the S2O82− and [Co(ox)3]3− oxidants have been studied as a function of pH, temperature, and pressure. As a whole, the values found for the activation volumes, entropies, and enthalpies fall in the following ranges for the diprotonated and non-protonated dinuclear complexes, respectively: ΔV‡ from 11 to 13 and 15 to 20 cm3 mol−1; ΔS‡ from 110 to 30 and −60 to −90 J K−1 mol−1; ΔH‡ from 115 to 80 and 50 to 65 kJ mol−1. The thermal activation parameters are clearly dominated by the electrostriction occurring on outer-sphere precursor formation, while the trends found for the values of the volume of activation indicate an important degree of tuning due to the charge distribution during the electron transfer process. The special arrangement of the amine ligands in the isomer trans-[L14CoIII-NC-FeII(CN)5]− accounts for important differences in the solvent-assisted hydrogen bonding occurring within the outer-sphere redox process, as has been established in redox reactions of similar compounds. ((C) Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2003).
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to “pathological meaningless situations”, since traders can build sequences of portfolios whose risk level tends to −infinity and whose expected return tends to +infinity, i.e., (risk = −infinity, return = +infinity). Such a sequence of strategies may be called a “good deal”. This paper focuses on the risk measures VaR and CVaR and analyses this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided too. We point out that a critical property is the absence of short sales. In such a case we first construct a “shadow riskless asset” (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
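The leverage mechanism behind a good deal can be illustrated numerically. The sketch below is not the paper's construction: the payoff distribution and rates are invented, and the "SRA" is replaced by a toy asset whose worst payoff exceeds its financing cost. Scaling the borrowed position by lambda then drives expected return upward without bound while VaR tends to −infinity.

```python
# Toy illustration of a "good deal": leveraging a position whose worst
# payoff beats the financing cost. All numbers are invented assumptions.
import numpy as np

rng = np.random.default_rng(1)
payoff = 1.10 + 0.02 * rng.standard_normal(100_000)  # asset gross payoff
payoff = np.clip(payoff, 1.05, None)  # worst case still above financing cost
r = 0.01                              # riskless borrowing rate (assumed)

def var(pnl, alpha=0.95):
    # Value at Risk at level alpha: negative of the (1 - alpha)-quantile of P&L
    return -np.quantile(pnl, 1 - alpha)

for lam in (1, 10, 100):
    pnl = lam * (payoff - (1 + r))    # borrow lam, buy lam units of the asset
    print(lam, pnl.mean(), var(pnl))  # mean grows; VaR becomes more negative
```

Because the position's P&L is bounded below by a positive number, its VaR is already negative, and both the mean and the (negative) VaR scale linearly in lambda, reproducing the (risk → −infinity, return → +infinity) pattern the abstract describes.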
Abstract:
A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. This flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and at low hardware cost. These significantly high performance levels were demonstrated by implementing several different configurations of the proposed structure in both FPGA and 90 nm ASIC technologies. In addition, this experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which in terms of Data Throughput per Unit of Area (DTUA) is at least 1.5 times more efficient than the most prominent related designs.
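For reference, the two kernel sizes such a unified architecture covers are the H.264/AVC 4x4 forward integer "core" transform and the 2x2 Hadamard transform applied to chroma DC coefficients. The sketch below shows the arithmetic only (Y = Cf X Cf^T, with the quantiser scaling stage omitted); the hardware architecture itself is of course specific to the paper.

```python
# H.264/AVC 4x4 forward integer core transform and 2x2 Hadamard transform
# (quantisation/scaling stage omitted).
import numpy as np

# Forward core transform matrix Cf from the H.264/AVC standard
Cf = np.array([[1, 1, 1, 1],
               [2, 1, -1, -2],
               [1, -1, -1, 1],
               [1, -2, 2, -1]])

# 2x2 Hadamard matrix used for the chroma DC transform
H2 = np.array([[1, 1],
               [1, -1]])

def core4x4(X):
    # Integer-only arithmetic: realisable with adds and shifts in hardware
    return Cf @ X @ Cf.T

def hadamard2x2(X):
    return H2 @ X @ H2.T

X = np.arange(16).reshape(4, 4)
Y = core4x4(X)
# Y[0, 0] is the DC term: the sum of all 16 input samples
```

Because both kernels are built from the same small add/subtract patterns, a unified datapath can share hardware between them, which is the design point the abstract's "unified transform coding" refers to.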