68 results for Metric


Relevance: 10.00%

Abstract:

A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop, numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric, and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error and, in general, it was difficult to discern clear trends in the data. For the Reynolds-averaged Navier-Stokes (RANS) methods, the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.
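
The abstract refers to an error metric defined alongside an uncertainty analysis of the experimental data, but does not reproduce it. Purely as a hedged illustration of an uncertainty-weighted CFD-experiment comparison of that general kind (the function name, the mean-absolute form, and the interpolation step below are assumptions, not the workshop's definition):

```python
import numpy as np

def uncertainty_weighted_error(x_exp, u_exp, sigma_exp, x_cfd, u_cfd):
    """Mean discrepancy between a CFD profile and experimental data at the
    measurement stations, weighted by the experimental uncertainty there.
    Illustrative sketch only; not the workshop's actual error metric."""
    u_cfd_on_exp = np.interp(x_exp, x_cfd, u_cfd)   # sample CFD on the exp. grid
    return float(np.mean(np.abs(u_cfd_on_exp - u_exp) / sigma_exp))
```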

Relevance: 10.00%

Abstract:

This technical note studies global asymptotic state synchronization in networks of identical systems. Conditions on the coupling strength required for the synchronization of nodes having a cyclic feedback structure are deduced using incremental dissipativity theory. The method takes advantage of the incremental passivity properties of the constituent subsystems of the network nodes to reformulate the synchronization problem as one of achieving incremental passivity by coupling. The method can be used in the framework of contraction theory to constructively build a contracting metric for the incremental system. The result is illustrated for a network of biochemical oscillators. © 2011 IEEE.
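
For reference, the textbook form of incremental passivity that this kind of argument builds on (a hedged sketch, not necessarily the note's exact formulation): a system \dot{x} = f(x, u), y = h(x) is incrementally passive if there exists a nonnegative storage function S(x_1, x_2) such that, along any two trajectories,

```latex
\frac{d}{dt} S(x_1, x_2) \;\le\; (u_1 - u_2)^{\top} (y_1 - y_2).
```

Under diffusive coupling of the form u_i = -k \sum_j L_{ij} y_j, with L the graph Laplacian, a sufficiently strong coupling gain k can render the incremental (difference) system passive, which is the mechanism by which synchronization is concluded.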

Relevance: 10.00%

Abstract:

A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop, numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric, and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error and, in general, it was difficult to discern clear trends in the data. For the Reynolds-averaged Navier-Stokes (RANS) methods, the choice of turbulence model appeared to be the largest factor in solution accuracy. Scale-resolving methods, such as large-eddy simulation (LES), hybrid RANS/LES, and direct numerical simulation, produced error levels similar to RANS methods but provided superior predictions of normal stresses. Copyright © 2012 by Daniella E. Raveh and Michael Iovnovich.

Relevance: 10.00%

Abstract:

Powering electronics without depending on batteries is an open research field. Mechanical vibrations prove to be a reliable energy source, but low-frequency broadband vibrations cannot be harvested effectively using linear oscillators. This article discusses an alternative for harvesting such vibrations: energy harvesters with two stable configurations. The challenges related to nonlinear dynamics are briefly discussed. Different existing designs of bistable energy harvesters are presented and classified according to their feasibility for miniaturization. A general dynamic model for those designs is described. Finally, quantitative measures for evaluating the effectiveness of energy harvesters are discussed extensively, resulting in the proposition of a new dimensionless metric suited for broadband analysis.
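
The abstract does not spell out the general dynamic model; a commonly used lumped-parameter form for a bistable (Duffing-type) piezoelectric harvester, given here only as a hedged sketch with generic coefficients, is

```latex
m\ddot{x} + c\dot{x} - k_1 x + k_3 x^3 + \theta v = -m\,\ddot{y}_b(t), \qquad
C_p \dot{v} + \frac{v}{R} = \theta \dot{x},
```

where x is the relative displacement, v the voltage across the load R, \theta the electromechanical coupling, C_p the piezoelectric capacitance, and \ddot{y}_b the base acceleration. The negative linear stiffness -k_1 x combined with the cubic term k_3 x^3 gives the two stable equilibria at x = \pm\sqrt{k_1/k_3}, which is what distinguishes this class from linear oscillators.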

Relevance: 10.00%

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It relates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind or vibrations, or by an unintentional push or touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints; the intrinsic and extrinsic properties of the cameras are then recovered from this calculation. Test results are presented, along with recommendations for further improvement.
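
The abstract only names the fundamental-matrix step; as a minimal sketch of that step — matching two frames, estimating the fundamental matrix, and recovering relative extrinsics — the following assumes a known intrinsic matrix K and uses generic OpenCV calls (the ORB/RANSAC choices and all names are assumptions, not the paper's algorithm):

```python
import cv2
import numpy as np

def relative_pose_from_frames(img1, img2, K):
    """Sketch: epipolar geometry from two frames viewing the same scene.
    Only the F-matrix step mentioned in the abstract, not the full method."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)  # epipolar constraints
    E = K.T @ F @ K                                  # essential matrix, known intrinsics
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # relative rotation / translation
    return F, R, t
```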

Relevance: 10.00%

Abstract:

Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
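
As a hedged illustration of the final triangulation step (converting matched 2D pixel tracks from the calibrated stereo pair into 3D metric coordinates), assuming the projection parameters have already been obtained from the calibration and pose-estimation steps (all names below are illustrative):

```python
import cv2
import numpy as np

def triangulate_tracks(K1, R1, t1, K2, R2, t2, pts1, pts2):
    """Triangulate matched 2D tracks (Nx2 arrays) from two calibrated views
    into Nx3 points. Sketch only; 2D tracking and matching are assumed done."""
    P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])          # 3x4 projection matrices
    P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (X_h[:3] / X_h[3]).T                          # dehomogenize to metric 3D
```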

Relevance: 10.00%

Abstract:

Language models (LMs) are often constructed by building multiple individual component models that are combined using context-independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context-dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower-order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion toward corpus size. A range of schemes to combine weight information obtained from training data and test-data hypotheses is also proposed to improve robustness during context-dependent LM adaptation. In addition, a minimum Bayes risk (MBR) based discriminative training scheme is proposed, as is an efficient weighted finite state transducer (WFST) decoding algorithm for context-dependent interpolation. The proposed techniques were evaluated on a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
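
The central quantity the paper varies is the interpolation weight as a function of the preceding word context. A minimal sketch of context-dependent linear interpolation (the single-word context class, the back-off to global weights, and all names here are assumptions, not the paper's estimation or adaptation schemes):

```python
import math

def interp_prob(word, history, component_probs, weights_by_context, global_weights):
    """P(word | history) = sum_i lambda_i(c(history)) * P_i(word | history).
    history: sequence of previous words.
    component_probs: list of callables P_i(word, history).
    weights_by_context: maps a context key (here, the last history word) to a
    weight vector; falls back to context-independent global_weights."""
    lam = weights_by_context.get(tuple(history[-1:]), global_weights)
    return sum(l * p(word, history) for l, p in zip(lam, component_probs))

def perplexity(words, histories, component_probs, weights_by_context, global_weights):
    """Standard perplexity of the interpolated LM over a test sequence."""
    logp = sum(math.log(interp_prob(w, h, component_probs,
                                    weights_by_context, global_weights))
               for w, h in zip(words, histories))
    return math.exp(-logp / len(words))
```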

Relevance: 10.00%

Abstract:

In this paper we consider the problem of state estimation over a communication network. Using estimation quality as a metric, two communication schemes are studied and compared. In scheme one, each sensor node communicates its measurement data to the remote estimator, while in scheme two, each sensor node communicates its local state estimate to the remote estimator. We show that with a perfect communication link, if the sensor has unlimited computation capability the two schemes produce the same estimate at the estimator, and if the sensor has limited computation capability scheme one is always better than scheme two. On the other hand, when data packet drops occur over the communication link, if the sensor has unlimited computation capability scheme two always outperforms scheme one, and if the sensor has limited computation capability there in general exists a critical packet arrival rate above which scheme one outperforms scheme two. Simulations are provided to demonstrate the two schemes under various circumstances. © South China University of Technology and Academy of Mathematics and Systems Science, CAS and Springer-Verlag Berlin Heidelberg 2010.
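
For scheme one (raw measurements sent over the lossy link), the underlying recursion is the standard Kalman filter with intermittent observations, written here as a hedged reference with generic notation, where γ_k = 1 if the packet arrives at time k and γ_k = 0 if it is dropped:

```latex
\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1}, \qquad
P_{k|k-1} = A P_{k-1|k-1} A^{\top} + Q, \\
K_k = P_{k|k-1} C^{\top}\bigl(C P_{k|k-1} C^{\top} + R\bigr)^{-1}, \\
\hat{x}_{k|k} = \hat{x}_{k|k-1} + \gamma_k K_k \bigl(y_k - C\hat{x}_{k|k-1}\bigr), \qquad
P_{k|k} = P_{k|k-1} - \gamma_k K_k C P_{k|k-1}.
```

Scheme two instead runs this filter locally at the sensor (where every measurement is available) and transmits the local estimate \hat{x}_{k|k} over the lossy link rather than y_k.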

Relevance: 10.00%

Abstract:

The physical meaning of loudness and methods of determining it are reviewed. Loudness is a psychoacoustic metric that closely corresponds to the perceived intensity of a sound stimulus. It can be determined by graphical procedures, numerical methods, or commercial software. These methods typically require the 1/3 octave band spectrum of the sound of interest as input. The sounds considered in this paper are a 1 kHz tone and pink noise. The loudness of these sounds was calculated in eight ways using different combinations of input data and calculation methods. All the methods considered are based on Zwicker loudness. It was determined that, of the combinations considered, only the commercial software dBSonic and the loudness calculation procedure detailed in DIN 45631, using 1/3 octave band levels filtered according to ANSI S1.11-1986, gave the correct values of loudness for a 1 kHz tone. Comparing the results between the sources also demonstrated the difference between sound pressure level and loudness. It was apparent that the calculation and filtering methods must be considered together, as a given calculation will produce different results for different 1/3 octave band input. In the literature reviewed, no reference provided a guide to the selection of the type of filtering that should be used in conjunction with the loudness computation method.
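
All of the calculation methods compared take 1/3 octave band levels as input. A hedged sketch of forming band levels from a narrowband power spectrum is shown below (base-2 band ratios; the Zwicker/DIN 45631 loudness computation itself is not reimplemented, and a standards-compliant ANSI S1.11 filter bank uses defined filter shapes rather than these rectangular bins):

```python
import numpy as np

def third_octave_levels(freqs, psd, f_ref=1000.0, bands=range(-17, 14)):
    """Sum a narrowband power spectrum (freqs in Hz, psd as power per bin)
    into 1/3 octave band levels in dB. Band centers f_ref * 2**(n/3),
    band edges at fc * 2**(+/- 1/6). Illustrative sketch only."""
    levels = {}
    for n in bands:
        fc = f_ref * 2.0 ** (n / 3.0)
        lo, hi = fc * 2.0 ** (-1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)
        band_power = psd[(freqs >= lo) & (freqs < hi)].sum()
        levels[round(fc, 1)] = 10.0 * np.log10(max(band_power, 1e-12))
    return levels   # dB values; the absolute offset depends on the psd scaling
```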

Relevance: 10.00%

Abstract:

The change in the acoustic characteristics of personal computers used as console gaming and home entertainment systems when the Graphics Processing Unit (GPU) is changed is presented. The tests are carried out using identical configurations of the software and system hardware. The prime components of the hardware used in the project are the central processing unit, motherboard, hard disc drive, memory, power supply, optical drive, and an additional cooling system. The results from the measurements taken for each GPU tested are analyzed and compared. Fan speeds are measured using a photo tachometer and reflective tape adhered to one particular fan blade. Loudness, a psychoacoustic metric developed by Zwicker and Fastl, is used to quantify how loud each sound is perceived to be compared with a standard sound. The acoustic experiment reveals that the inherent noise generation increases with the complexity of the cooling solution.

Relevance: 10.00%

Abstract:

This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms may be implemented in the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
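
A common way to give a matrix factorization an explicitly ordinal likelihood (a hedged sketch of the general construction, not necessarily the paper's exact hierarchical priors) is a cumulative-threshold model on the low-rank latent score:

```latex
f_{ij} = \mathbf{u}_i^{\top} \mathbf{v}_j, \qquad
P(r_{ij} = k \mid f_{ij}) =
  \Phi\!\left(\frac{b_k - f_{ij}}{\sigma}\right) -
  \Phi\!\left(\frac{b_{k-1} - f_{ij}}{\sigma}\right),
\qquad -\infty = b_0 < b_1 < \dots < b_K = +\infty,
```

with priors on the user and item factors \mathbf{u}_i, \mathbf{v}_j and on the thresholds b_k. Gibbs sampling and variational Bayes are then two ways of approximating the resulting posterior, and RMSE on held-out ratings is the evaluation metric quoted above.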

Relevance: 10.00%

Abstract:

Marginal utility theory prescribes the relationship between the objective property of the magnitude of rewards and their subjective value. Despite its pervasive influence, however, there is remarkably little direct empirical evidence for such a theory of value, let alone of its neurobiological basis. We show that human preferences in an intertemporal choice task are best described by a model that integrates marginally diminishing utility with temporal discounting. Using functional magnetic resonance imaging, we show that activity in the dorsal striatum encodes both the marginal utility of rewards, over and above that which can be described by their magnitude alone, and the discounting associated with increasing time. In addition, our data show that dorsal striatum may be involved in integrating subjective valuation systems inherent to time and magnitude, thereby providing an overall metric of value used to guide choice behavior. Furthermore, during choice, we show that anterior cingulate activity correlates with the degree of difficulty associated with dissonance between value and time. Our data support an integrative architecture for decision making, revealing the neural representation of distinct subcomponents of value that may contribute to impulsivity and decisiveness.
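
One standard way to write the model class described — marginally diminishing utility of reward magnitude combined with temporal discounting — is the following (a hedged sketch; the exact functional forms fitted in the study may differ):

```latex
V(M, D) = U(M)\, F(D), \qquad
U(M) = \frac{1 - e^{-\rho M}}{\rho}, \qquad
F(D) = \frac{1}{1 + K D},
```

where M is the reward magnitude, D the delay, \rho governs how quickly marginal utility diminishes, and K is the hyperbolic discount rate; choice is then modeled as favoring the option with the larger subjective value V.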

Relevance: 10.00%

Abstract:

Estimating the financial value of pain informs issues as diverse as the market price of analgesics, the cost-effectiveness of clinical treatments, compensation for injury, and the response to public hazards. Such valuations are assumed to reflect a stable trade-off between relief of discomfort and money. Here, using an auction-based health-market experiment, we show that the price people pay for relief of pain is strongly determined by the local context of the market, that is, by recent intensities of pain or immediately disposable income (but not overall wealth). The absence of a stable valuation metric suggests that the dynamic behavior of health markets is not predictable from the static behavior of individuals. We conclude that the results follow the dynamics of habit-formation models of economic theory, and thus, this study provides the first scientific basis for this type of preference modeling.

Relevance: 10.00%

Abstract:

Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. © 1992-2012 IEEE.
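
As a hedged sketch of the frame-selection idea (choosing ROIs only from good-quality frames), using a generic sharpness proxy in place of the paper's learning-based quality metric, and omitting the registration, wavelet-fusion, and contrast-enhancement stages:

```python
import cv2
import numpy as np

def select_informative_frames(frames, keep_fraction=0.5):
    """Rank ROI frames by a simple sharpness score and keep the best ones for
    later registration and fusion. Illustrative only; the paper's own quality
    metric is learned and handles full- and no-reference cases."""
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        return cv2.Laplacian(gray, cv2.CV_64F).var()   # turbulence blur -> low variance
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]                   # best first
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:n_keep]]
```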

Relevance: 10.00%

Abstract:

We consider the problem of positive observer design for positive systems defined on solid cones in Banach spaces. The design is based on the Hilbert metric and convergence properties are analyzed in the light of the Birkhoff theorem. Two main applications are discussed: positive observers for systems defined in the positive orthant, and positive observers on the cone of positive semi-definite matrices with a view on quantum systems. © 2011 IEEE.
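
For reference, the Hilbert (projective) metric on the interior of a solid cone K and the Birkhoff contraction property that this type of convergence analysis rests on (standard statements, quoted as a hedged aide-mémoire rather than the paper's exact formulation):

```latex
M(x/y) = \inf\{\lambda > 0 : \lambda y - x \in K\}, \qquad
m(x/y) = \sup\{\mu > 0 : x - \mu y \in K\}, \qquad
d_H(x, y) = \log\frac{M(x/y)}{m(x/y)}.
```

Birkhoff's theorem states that a linear map A with A(K \setminus \{0\}) \subset \operatorname{int}\, K and finite projective diameter \Delta(A) is a strict contraction in d_H with ratio \tanh(\Delta(A)/4), which is the property used to conclude convergence of the observer error in the projective sense.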