851 results for Discrete-time singular systems
Abstract:
Using the method of quantum trajectories, we show that a known pure state can be optimally monitored through time when subject to a sequence of discrete measurements. By modifying the way that we extract information from the measurement apparatus, we can minimize the average algorithmic information of the measurement record without changing the unconditional evolution of the measured system. We define an optimal measurement scheme as one that has the lowest allowed average algorithmic information. We also show how it is possible to extract information about system operator averages from the measurement records and their probabilities. The optimal measurement scheme, in the limit of weak coupling, determines the statistics of the variance of the measured variable directly. We discuss the relevance of such measurements for recent experiments in quantum optics.
Abstract:
The aim of the present study was to evaluate the genetic correlations among real-time ultrasound carcass, BW, and scrotal circumference (SC) traits in Nelore cattle. Carcass traits, measured by real-time ultrasound of the live animal, were recorded from 2002 to 2004 on 10 farms across 6 Brazilian states on 2,590 males and females ranging in age from 450 to 599 d. Ultrasound records of LM area (LMA) and backfat thickness (BF) were obtained from cross-sectional images between the 12th and 13th ribs, and rump fat thickness (RF) was measured between the hook and pin bones over the junction between gluteus medius and biceps femoris muscles. Also, BW (n = 22,778) and SC (n = 5,695) were recorded on animals born between 1998 and 2003. The BW traits were 120, 210, 365, 450, and 550-d standardized BW (W120, W210, W365, W450, and W550), plus BW (WS) and hip height (HH) on the ultrasound scanning date. The SC traits were 365-, 450-, and 550-d standardized SC (SC365, SC450, and SC550). For the BW and SC traits, the database used was from the Nelore Breeding Program-Nelore Brazil. The genetic parameters were estimated with multivariate animal models and REML. Estimated genetic correlations between LMA and other traits were 0.06 (BF), -0.04 (RF), 0.05 (HH), 0.58 (WS), 0.53 (W120), 0.62 (W210), 0.67 (W365), 0.64 (W450 and W550), 0.28 (SC365), 0.24 (SC450), and 0.00 (SC550). Estimated genetic correlations between BF and other traits were 0.74 (RF), -0.32 (HH), 0.19 (WS), -0.03 (W120), -0.10 (W210), 0.04 (W365), 0.01 (W450), 0.06 (W550), 0.17 (SC365 and SC450), and -0.19 (SC550). Estimated genetic correlations between RF and other traits were -0.41 (HH), -0.09 (WS), -0.13 (W120), -0.09 (W210), -0.01 (W365), 0.02 (W450), 0.03 (W550), 0.05 (SC365), 0.11 (SC450), and -0.18 (SC550). These estimates indicate that selection for carcass traits measured by real-time ultrasound should not cause antagonism in the genetic improvement of SC and BW traits. Also, selection to increase HH might decrease subcutaneous fat as a correlated response. Therefore, to obtain animals suited to specific tropical production systems, carcass, BW, and SC traits should be considered in selection programs.
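For reference, a genetic correlation is the additive genetic covariance between two traits divided by the product of their genetic standard deviations, all obtained from the REML (co)variance components. A minimal sketch in Python; the variance components below are hypothetical placeholders chosen only so that the result reproduces the reported 0.67 between LMA and W365, not the study's actual estimates.

```python
# Hypothetical (co)variance components, for illustration only:
# r_g = cov_g / sqrt(var_g_A * var_g_B).
import math

var_g_lma = 4.2        # additive genetic variance of LM area (hypothetical)
var_g_w365 = 310.0     # additive genetic variance of 365-d BW (hypothetical)
cov_g_lma_w365 = 24.2  # additive genetic covariance (hypothetical)

r_g = cov_g_lma_w365 / math.sqrt(var_g_lma * var_g_w365)
print(f"genetic correlation (LMA, W365): {r_g:.2f}")  # ~0.67
```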
Abstract:
Objective. To evaluate the influence of shaft design on the shaping ability of 3 rotary nickel-titanium (NiTi) systems. Study design. Sixty curved mesial canals of mandibular molars were used. Specimens were scanned by spiral tomography before and after canal preparation using ProTaper, ProFile, and ProSystem GT rotary instruments. One-millimeter-thick slices were scanned from the apical end point to the pulp chamber. The cross-sectional images from the slices taken before and after canal preparation at the apical, coronal, and midroot levels were compared. Results. The mean working time was 137.22 +/- 5.15 s. Mean transportation, mean centering ratio, and percentage of area increase were 0.022 +/- 0.131 mm, 0.21 +/- 0.11, and 76.90 +/- 42.27%, respectively, with no statistically significant differences (P > .05). Conclusions. All instruments were able to shape curved mesial canals in mandibular molars to size 30 without significant errors. The differences in shaft designs seemed not to affect their shaping capabilities.
Abstract:
The catalytic properties of enzymes are usually evaluated by measuring and analyzing reaction rates. However, analyzing the complete time course can be advantageous because it contains additional information about the properties of the enzyme. Moreover, for systems that are not at steady state, the analysis of time courses is the preferred method. One of the major barriers to the wide application of time courses is that it may be computationally more difficult to extract information from these experiments. Here the basic approach to analyzing time courses is described, together with some examples of the essential computer code to implement these analyses. A general method that can be applied to both steady state and non-steady-state systems is recommended. (C) 2001 Academic Press.
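The article's own code is not reproduced here, but the general idea can be sketched in Python under an assumed irreversible Michaelis-Menten model with synthetic data: integrate the rate equation numerically and fit the kinetic constants to the full progress curve by least squares, which works whether or not the system is at steady state.

```python
# Minimal sketch of progress-curve (time-course) fitting; the model and the
# synthetic data are illustrative assumptions, not the published code.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def progress_curve(t, Vmax, Km, S0=1.0):
    """Substrate vs. time for the scheme dS/dt = -Vmax*S/(Km+S)."""
    sol = solve_ivp(lambda _, S: -Vmax * S / (Km + S), (0.0, t[-1]), [S0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

# Synthetic "experimental" record: a noisy progress curve.
t = np.linspace(0.0, 50.0, 40)
rng = np.random.default_rng(0)
data = progress_curve(t, 0.05, 0.3) + rng.normal(0.0, 0.005, t.size)

# Fit Vmax and Km to the whole time course by nonlinear least squares.
(Vmax_fit, Km_fit), _ = curve_fit(progress_curve, t, data, p0=[0.1, 0.5],
                                  bounds=(1e-9, np.inf))
print(f"Vmax = {Vmax_fit:.3f}, Km = {Km_fit:.3f}")
```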
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, the step size cannot be controlled with the usual integration-formula error estimates. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the calculation result from one big step and that from two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the conditions. Simulations using the fixed time step method are compared with those using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
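The big-step/two-half-steps comparison described above is essentially step-doubling error control. A minimal sketch in Python, assuming an illustrative damped-spring force and a simple explicit update standing in for the paper's table-driven calculation:

```python
def step(x, v, h, force):
    """One semi-implicit Euler update (stand-in for the table-driven update)."""
    v = v + h * force(x, v)
    x = x + h * v
    return x, v

def adaptive_step(x, v, h, force, tol=1e-6, h_max=1e-2):
    """Pick the step size from one big step vs. two half steps."""
    while True:
        x_big, v_big = step(x, v, h, force)
        x_mid, v_mid = step(x, v, h / 2, force)
        x_two, v_two = step(x_mid, v_mid, h / 2, force)
        err = max(abs(x_big - x_two), abs(v_big - v_two))
        if err <= tol or h < 1e-12:
            # Accept the more accurate two-half-step result; let h grow a bit.
            return x_two, v_two, h, min(1.5 * h, h_max)
        h /= 2  # reject: halve the step and test again

force = lambda x, v: -100.0 * x - 0.1 * v  # illustrative damped spring
x, v, h, t = 1.0, 0.0, 1e-3, 0.0
while t < 0.1:
    x, v, h_used, h = adaptive_step(x, v, h, force)
    t += h_used
print(f"x(t~0.1) = {x:.4f}, final step size = {h:.2e}")
```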
Abstract:
Background. Although digital and videotaped images are known to be comparable for the evaluation of left ventricular function, their relative accuracy for assessment of more complex anatomy is unclear. We sought to compare reading time, storage costs, and concordance of video and digital interpretations across multiple observers and sites. Methods. One hundred one patients with valvular (90 mitral, 48 aortic, 80 tricuspid) disease were selected prospectively, and studies were stored according to video and standardized digital protocols. The same reviewer interpreted video and digital images independently and at different times with the use of a standard report form to evaluate 40 items (eg, severity of stenosis or regurgitation, leaflet thickening, and calcification) as normal or mildly, moderately, or severely abnormal. Concordance between modalities was expressed as kappa. Major discordance (difference of >1 level of severity) was ascribed to the modality that gave the lesser severity. CD-ROM was used to store digital data (20:1 lossy compression), and super-VHS videotape was used to store video data. The reading time and storage costs for each modality were compared. Results. Measured parameters were highly concordant (ejection fraction was 52% +/- 13% by both). Major discordance was rare, and lesser values were reported with digital rather than video interpretation in the categories of aortic and mitral valve thickening (1% to 2%) and severity of mitral regurgitation (2%). Digital reading time was 6.8 +/- 2.4 minutes, 38% shorter than with video (11.0 +/- 3.0, range 8 to 22 minutes, P < .001). Compressed digital studies had an average size of 60 +/- 14 megabytes (range 26 to 96 megabytes). Storage cost for video was A$0.62 per patient (18 studies per tape, total cost A$11.20), compared with A$0.31 per patient for digital storage (8 studies per CD-ROM, total cost A$2.50). Conclusion. Digital and video interpretation were highly concordant; in the few cases of major discordance, the digital scores were lower, perhaps reflecting undersampling. Use of additional views and longer clips may be indicated to minimize discordance with video in patients with complex problems. Digital interpretation offers a significant reduction in reading times and the cost of archiving.
Abstract:
We compare the performance of two different low-storage filter diagonalisation (LSFD) strategies in the calculation of complex resonance energies of the HO2 radical. The first is carried out within a complex-symmetric Lanczos subspace representation [H. Zhang, S.C. Smith, Phys. Chem. Chem. Phys. 3 (2001) 2281]. The second involves harmonic inversion of a real autocorrelation function obtained via a damped Chebychev recursion [V.A. Mandelshtam, H.S. Taylor, J. Chem. Phys. 107 (1997) 6756]. We find that while the Chebychev approach has the advantage of utilizing real algebra in the time-consuming process of generating the vector recursion, the Lanczos method (using complex vectors) requires fewer iterations, especially for the low-energy part of the spectrum. The overall efficiency in calculating resonances for these two methods is comparable for this challenging system. (C) 2001 Elsevier Science B.V. All rights reserved.
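To make the real-algebra point concrete, here is a minimal sketch of a damped Chebychev recursion of the Mandelshtam-Taylor type, applied to a small random symmetric model Hamiltonian as an illustrative stand-in for HO2; the whole recursion stays in real arithmetic and accumulates the autocorrelation sequence that harmonic inversion would then process.

```python
# Minimal sketch: damped Chebychev recursion v_{n+1} = D(2 H D v_n - D v_{n-1})
# for a Hamiltonian scaled so its spectrum lies inside (-1, 1). The model
# Hamiltonian and damping profile are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = rng.standard_normal((N, N))
H = (A + A.T) / 2.0
H /= 1.01 * np.abs(np.linalg.eigvalsh(H)).max()  # scale spectrum into (-1, 1)

D = np.exp(-np.linspace(0.0, 2.0, N) ** 2)  # illustrative damping operator

v0 = rng.standard_normal(N)
v0 /= np.linalg.norm(v0)
v_prev = v0.copy()
v_curr = D * (H @ v0)  # first damped Chebychev step

corr = [v0 @ v_prev, v0 @ v_curr]  # c_n = <v0|v_n>, all real numbers
for _ in range(500):
    v_next = D * (2.0 * (H @ v_curr) - D * v_prev)
    corr.append(v0 @ v_next)
    v_prev, v_curr = v_curr, v_next

print(len(corr), "real autocorrelation points ready for harmonic inversion")
```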
Abstract:
A new method is presented to determine an accurate eigendecomposition of difficult low temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case of the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. It is demonstrated that quadruple precision (16-byte) arithmetic is required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
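To make the precision point concrete, here is a minimal sketch using mpmath at roughly quadruple precision (about 34 significant digits, comparable to 16-byte floats) to diagonalise a toy 4x4 rate matrix whose widely spread rate constants mimic the stiffness of a low-temperature master equation; the matrix is an illustrative assumption, not the ethane system, and a generic high-precision eigensolver stands in for the generalised Nesbet method.

```python
from mpmath import mp

mp.dps = 34  # ~ quadruple (16-byte) precision

# Illustrative rate constants k[i][j] for the transition j -> i.
k = [['0',    '1e-2', '1e-6', '0'],
     ['1e-1', '0',    '1e-3', '1e-8'],
     ['1e-5', '1e-2', '0',    '1e-4'],
     ['0',    '1e-7', '1e-3', '0']]

M = mp.matrix(4)
for i in range(4):
    for j in range(4):
        if i != j:
            M[i, j] = mp.mpf(k[i][j])
for j in range(4):  # columns sum to zero so that probability is conserved
    M[j, j] = -sum(M[i, j] for i in range(4) if i != j)

E, ER = mp.eig(M)  # full spectral resolution in high precision
for e in sorted(E, key=lambda z: abs(z)):
    print(mp.nstr(e, 20))
```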
Abstract:
Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. A singular perturbation problem results for such models in the presence of a large Thiele modulus, for which classical numerical methods can present difficulties. For the convection-diffusion-like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of the initial-value type that can be readily solved. In the presence of a constant diffusivity in slab geometry, the convection-like terms are absent, and the combination of a fitted mesh finite difference method with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus. (C) 2001 Elsevier Science Ltd. All rights reserved.
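For the constant-diffusivity slab case, a minimal sketch of one common fitted-mesh finite difference scheme, a piecewise-uniform Shishkin mesh refined in both boundary layers, applied to the steady model problem -eps u'' + u = 1 with u(0) = u(1) = 0. The model problem is an illustrative stand-in for a large-Thiele-modulus particle balance (eps ~ 1/Thiele^2); the paper's predictor-corrector time stepping is omitted.

```python
import numpy as np

def shishkin_mesh(N, eps):
    """Piecewise-uniform mesh with N intervals, refined in both layers."""
    tau = min(0.25, 2.0 * np.sqrt(eps) * np.log(N))
    left = np.linspace(0.0, tau, N // 4 + 1)
    mid = np.linspace(tau, 1.0 - tau, N // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, N // 4 + 1)
    return np.concatenate([left, mid[1:], right[1:]])

eps, N = 1e-6, 64
x = shishkin_mesh(N, eps)
n = len(x)
A = np.zeros((n, n))
b = np.ones(n)
A[0, 0] = A[-1, -1] = 1.0  # Dirichlet boundary conditions
b[0] = b[-1] = 0.0
for i in range(1, n - 1):
    hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
    # Central differences for -eps u'' on the nonuniform mesh, plus u.
    A[i, i - 1] = -eps * 2.0 / (hl * (hl + hr))
    A[i, i + 1] = -eps * 2.0 / (hr * (hl + hr))
    A[i, i] = eps * 2.0 / (hl * hr) + 1.0
u = np.linalg.solve(A, b)
print("max u:", u.max())  # approaches 1 away from the boundary layers
```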
Abstract:
Research on the stability of flavours during high temperature extrusion cooking is reviewed. The important factors that affect flavour and aroma retention during the process of extrusion are illustrated. A substantial number of flavour volatiles incorporated prior to extrusion are normally lost during expansion because of steam distillation. Therefore, a general practice has been to introduce a flavour mix after the extrusion process. This extra operation requires a binding agent (normally oil), and may also result in a non-uniform distribution of the flavour and low oxidative stability of the flavours exposed on the surface. Accordingly, the importance of encapsulated flavours, particularly the beta-cyclodextrin-flavour complex, is highlighted in this paper.
Abstract:
A data warehouse is a data repository that collects and maintains a large amount of data from multiple distributed, autonomous, and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views to materialize. The objective is to select an appropriate set of views that minimizes the total query response time, subject to the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem differs fundamentally from the view selection problem under a disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated. Two efficient heuristic algorithms for the problem are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to some well-solved optimization problems. As a result, an approximate solution of the known optimization problem gives a feasible solution of the original problem. (C) 2001 Elsevier Science B.V. All rights reserved.
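The paper's specific algorithms are not reproduced here, but a minimal sketch of a generic benefit-per-unit-maintenance greedy heuristic for this class of problem, with hypothetical views and costs, may help fix ideas:

```python
# Minimal sketch: greedily materialise the view with the best ratio of
# query-time saving to maintenance cost while staying within the bound.
# View names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class View:
    name: str
    query_saving: float      # reduction in total query response time
    maintenance_cost: float  # time needed to keep the view up to date

def select_views(candidates, maintenance_bound):
    chosen, used = [], 0.0
    ranked = sorted(candidates,
                    key=lambda v: v.query_saving / v.maintenance_cost,
                    reverse=True)
    for v in ranked:
        if used + v.maintenance_cost <= maintenance_bound:
            chosen.append(v)
            used += v.maintenance_cost
    return chosen, used

views = [View("sales_by_region", 120.0, 30.0),
         View("sales_by_month", 80.0, 10.0),
         View("returns_by_store", 15.0, 25.0)]
chosen, used = select_views(views, maintenance_bound=45.0)
print([v.name for v in chosen], "maintenance used:", used)
```

A real implementation must also account for views that can be maintained or answered from other materialized views, which makes both the saving and the cost of a candidate depend on what has already been chosen.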
Abstract:
Applied econometricians often fail to impose economic regularity constraints in the exact form economic theory prescribes. We show how the Singular Value Decomposition (SVD) Theorem and Markov Chain Monte Carlo (MCMC) methods can be used to rigorously impose time- and firm-varying equality and inequality constraints. To illustrate the technique we estimate a system of translog input demand functions subject to all the constraints implied by economic theory, including observation-varying symmetry and concavity constraints. Results are presented in the form of characteristics of the estimated posterior distributions of functions of the parameters. Copyright (C) 2001 John Wiley & Sons, Ltd.
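A minimal sketch of how an SVD can impose linear equality restrictions exactly: write beta = beta_p + N @ gamma, where N spans the null space of the restriction matrix R and beta_p is a particular solution of R beta = r; any gamma (for example, a draw inside an MCMC sampler) then yields a beta satisfying the restrictions to machine precision. The restriction matrix below is hypothetical, and the inequality (concavity) restrictions handled by the paper's MCMC step are omitted.

```python
import numpy as np

R = np.array([[1.0, 1.0, 1.0, 0.0],    # e.g. coefficients sum to 1
              [0.0, 1.0, -1.0, 0.0]])  # e.g. a symmetry restriction
r = np.array([1.0, 0.0])

U, s, Vt = np.linalg.svd(R)
rank = np.sum(s > 1e-12)
N = Vt[rank:].T                                # orthonormal basis of null(R)
beta_p = np.linalg.lstsq(R, r, rcond=None)[0]  # particular solution

rng = np.random.default_rng(1)
gamma = rng.standard_normal(N.shape[1])  # free parameters (e.g. an MCMC draw)
beta = beta_p + N @ gamma
print("R @ beta - r:", R @ beta - r)     # ~0: restrictions hold exactly
```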
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and both the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
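A minimal sketch of a driven block-slider automaton in the Burridge-Knopoff spirit (closer to the Olami-Feder-Christensen cellular automaton than to the paper's exact models), showing the drive/slip/redistribute cycle that produces a broad distribution of event sizes; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks, alpha = 200, 0.4  # alpha < 0.5: dissipative redistribution
stress = rng.uniform(0.0, 1.0, n_blocks)
threshold = 1.0

def one_event(stress):
    """Load uniformly to the next failure, relax the cascade, return its energy."""
    stress += threshold - stress.max()  # uniform tectonic loading
    energy = 0.0
    unstable = np.flatnonzero(stress >= threshold)
    while unstable.size:
        for i in unstable:
            energy += stress[i]
            stress[i - 1] += alpha * stress[i]  # wrap-around: periodic chain
            stress[(i + 1) % n_blocks] += alpha * stress[i]
            stress[i] = 0.0
        unstable = np.flatnonzero(stress >= threshold)
    return energy

events = [one_event(stress) for _ in range(2000)]
print(f"events: {len(events)}, largest energy release: {max(events):.2f}")
```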