965 results for Parameter


Relevance: 10.00%

Abstract:

While researchers strive to improve automatic face recognition performance, the relationship between image resolution and face recognition performance has not received much attention. This relationship is examined systematically, and a framework is developed in which results from super-resolution techniques can be compared. Three super-resolution techniques are compared using the Eigenface and Elastic Bunch Graph Matching face recognition engines, and the parameter ranges over which these techniques provide better recognition performance than interpolated images are determined.

Abstract:

In this study, magnetohydrodynamic natural convection boundary layer flow of an electrically conducting, viscous, incompressible fluid along a heated vertical flat plate with uniform heat and mass flux in the presence of a strong cross magnetic field is investigated. For smooth integration, the boundary layer equations are transformed into a convenient dimensionless form using both the stream function formulation and the free variable formulation. The non-similar parabolic partial differential equations are integrated numerically against the local Hartmann parameter ξ for Pr ≪ 1, which is appropriate for liquid metals. Further, asymptotic solutions are obtained near the leading edge using a regular perturbation method for small values of ξ; solutions for ξ ≫ 1 are obtained by employing the matched asymptotic technique. The results for the small-, large- and all-ξ regimes are examined in terms of the shear stress, τw, rate of heat transfer, qw, and rate of mass transfer, mw, for the important physical parameters. Attention is given to the influence of the Schmidt number, Sc, buoyancy ratio parameter, N, and local Hartmann parameter, ξ, on the velocity, temperature and concentration distributions; it is noted that the velocity and temperature of the fluid attain their asymptotic profiles for Sc ≥ 10.0.

Abstract:

Electronic services are a leitmotif in ‘hot’ topics like Software as a Service, Service Oriented Architecture (SOA), Service Oriented Computing, Cloud Computing, application markets and smart devices. We propose to consider these in what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way the ‘Web’ describes a plethora of technologies that eventuate to connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). A SES will be dynamic, with areas of structured and unstructured information within which service providers and ‘lay’ human consumers interact; until now the two are disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or ‘hard-wired’ to smart device application markets and web applications. In a SES, services are accessible, comparable and exchangeable to human consumers, closing the gap to the providers. This requires a new SD with which humans can discover services transparently and effectively without special knowledge or training. We propose two modes of discovery: directed search following an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement the modes of discovery using concepts to map a lay consumer’s service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task on the information attached to services, such as descriptions, reviews, documentation and web sites - the Service Information Shadow.
The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space including categorization, calling it the Semantic Service Discovery model. We evaluate our model with a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services. We compare our model against state-of-the-art information retrieval systems and clustering algorithms. By means of an extensive series of empirical evaluations, we establish optimal parameter settings for the Semantic Space model. The evaluations demonstrate the model’s effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search) and the meaningful, automatic categorization of service-related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
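The directed-search mode described above - ranking services by how well their information shadow matches a lay query - can be sketched with a simple bag-of-words cosine similarity. This is a minimal stand-in for the full Semantic Space model (which uses a richer geometric representation); the service names and descriptions are hypothetical:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector as a term -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical service information shadows (descriptions only).
services = {
    "MapRoute":   "route planning maps driving directions traffic",
    "PayFast":    "online payment checkout credit card billing",
    "WeatherNow": "weather forecast temperature rain conditions",
}

def discover(query, services):
    """Directed search: rank services by similarity to a lay query."""
    qv = vectorize(query)
    return sorted(services, key=lambda s: cosine(qv, vectorize(services[s])),
                  reverse=True)

print(discover("pay my bill by credit card", services)[0])  # -> PayFast
```

A real semantic space would also match "bill" to "billing" via its concept representation; the plain term-overlap here only matches exact tokens, which is exactly the gap the paper's model addresses.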

Abstract:

Numerical simulations for mixed convection of a micropolar fluid in an open-ended arc-shape cavity have been carried out in this study. Computation is performed using the Alternating Direction Implicit (ADI) method together with the Successive Over-Relaxation (SOR) technique for the solution of the governing partial differential equations. The flow phenomenon is examined for a range of values of the Rayleigh number, 10² ≤ Ra ≤ 10⁶, Prandtl number, 7 ≤ Pr ≤ 50, and Reynolds number, 10 ≤ Re ≤ 100. The study is mainly focused on how the micropolar fluid parameters affect the fluid properties in the flow domain. It was found that despite the reduction of flow in the core region, the heat transfer rate increases, whereas the skin friction and microrotation decrease with the increase in the vortex viscosity parameter, Δ.
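The SOR technique mentioned above is a relaxed Gauss-Seidel sweep. A minimal sketch on the 2D Laplace equation (the grid size, boundary values and relaxation factor ω are illustrative, not the paper's problem):

```python
# Successive Over-Relaxation (SOR) on the 2D Laplace equation.
# Illustrative grid and omega; not the paper's governing equations.

def sor_laplace(n=20, omega=1.7, tol=1e-8, max_iter=10000):
    # Unit square grid; top wall held at 1.0, the other walls at 0.0.
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        u[0][j] = 1.0
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel five-point update, relaxed by omega.
                new = (1 - omega) * u[i][j] + omega * 0.25 * (
                    u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
                max_delta = max(max_delta, abs(new - u[i][j]))
                u[i][j] = new
        if max_delta < tol:
            break
    return u

u = sor_laplace()
print(round(u[10][10], 4))  # interior value, bounded by the wall values
```

With 1 < ω < 2 the iteration converges substantially faster than plain Gauss-Seidel (ω = 1); in a full simulation the same sweep would be applied to the vorticity, microrotation and energy equations.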

Abstract:

Systems, methods and articles for determining anomalous user activity are disclosed. Data representing a transaction activity corresponding to a plurality of user transactions can be received and user transactions can be grouped according to types of user transactions. The transaction activity can be determined to be anomalous in relation to the grouped user transactions based on a predetermined parameter.
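The group-then-compare idea can be sketched as follows. The transaction records, the z-score test and the threshold are all illustrative stand-ins for whatever predetermined parameter a real system would use:

```python
import statistics

# Hypothetical transactions: (user, type, amount).
transactions = [
    ("u1", "transfer", 100), ("u2", "transfer", 110), ("u3", "transfer", 95),
    ("u4", "transfer", 105), ("u5", "transfer", 5000),
    ("u1", "withdrawal", 40), ("u2", "withdrawal", 45),
]

def anomalous(transactions, threshold=3.0):
    """Group by transaction type; flag amounts whose z-score within the
    group exceeds a predetermined threshold parameter."""
    groups = {}
    for user, kind, amount in transactions:
        groups.setdefault(kind, []).append((user, amount))
    flagged = []
    for kind, rows in groups.items():
        amounts = [a for _, a in rows]
        if len(amounts) < 2:
            continue  # cannot estimate spread from a single transaction
        mu = statistics.mean(amounts)
        sigma = statistics.stdev(amounts)
        for user, a in rows:
            if sigma > 0 and abs(a - mu) / sigma > threshold:
                flagged.append((user, kind, a))
    return flagged

print(anomalous(transactions, threshold=1.5))  # the 5000 transfer stands out
```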

Abstract:

Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that extensively exists in crash data. Disregarding the possible within-group correlations can lead to models giving unreliable and biased estimates of unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and correctly yield parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using the Singapore crash records. This study confirms the importance of accounting for within-group correlations and demonstrates the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
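The characteristic effect of such hierarchical models - sparse groups are pulled toward the overall mean while data-rich groups keep their own estimates - can be illustrated with a toy partial-pooling estimator. This is an empirical stand-in for the full Bayesian machinery, and the site data and prior-strength parameter are invented:

```python
import statistics

# Hypothetical crash counts per year, grouped by traffic site.
sites = {
    "site_A": [3, 4, 5, 4],
    "site_B": [10, 12, 11],
    "site_C": [2],          # sparse group: should be pulled toward the mean
}

def partial_pool(groups, kappa=4.0):
    """Toy partial-pooling estimate: each group mean is shrunk toward the
    grand mean, with shrinkage controlled by a prior-strength parameter
    kappa (a stand-in for the hierarchical model's variance components)."""
    all_obs = [x for xs in groups.values() for x in xs]
    grand = statistics.mean(all_obs)
    est = {}
    for g, xs in groups.items():
        n = len(xs)
        w = n / (n + kappa)  # more data in a group -> less shrinkage
        est[g] = w * statistics.mean(xs) + (1 - w) * grand
    return est

est = partial_pool(sites)
print({g: round(v, 2) for g, v in est.items()})
```

Note how `site_C`, with a single observation of 2, is estimated well above 2: exactly the borrowing of strength across groups that a no-pooling generalized linear model cannot provide.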

Abstract:

In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments, or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, it requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16 and 32 pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
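The core of sub-route matching can be sketched in a few lines: instead of matching single frames, slide a whole query sequence along the reference route and keep the offset with the lowest accumulated frame difference. The tiny 4-pixel "images" below are illustrative, and this simplified version assumes equal travel speed on both traverses:

```python
# Sketch of sequence-based matching with tiny image signatures.

def frame_diff(a, b):
    """Sum of absolute differences between two tiny images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def localize(query_seq, reference, window):
    """Slide the query sub-route along the reference route and return the
    offset whose coherent sequence of frame matches has the lowest cost."""
    best_offset, best_cost = None, float("inf")
    for off in range(len(reference) - window + 1):
        cost = sum(frame_diff(query_seq[i], reference[off + i])
                   for i in range(window))
        if cost < best_cost:
            best_offset, best_cost = off, cost
    return best_offset

# Reference route of 4-pixel "images"; query is a noisy copy of frames 3..5.
reference = [[i, i + 1, i + 2, i + 3] for i in range(10)]
query = [[3, 4, 5, 7], [4, 6, 6, 7], [5, 6, 7, 8]]
print(localize(query, reference, window=3))  # -> 3
```

Even though each individual frame match is ambiguous at this resolution, the requirement that matches form a coherent sequence resolves the location - which is why such tiny images suffice.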

Abstract:

The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts in the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach overcomes the lack of knowledge of post-change parameters, and is shown to have performance similar to the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
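For reference, the CUSUM baseline mentioned above accumulates log-likelihood-ratio increments and declares a change when the running sum crosses a threshold; its dependence on the known post-change parameter is explicit in the increment. A minimal sketch for a unit-variance Gaussian mean shift (data and threshold are synthetic):

```python
# Classic one-sided CUSUM for a known post-change mean - the baseline the
# paper's unknown-parameter method is compared against.

def cusum(samples, pre_mean, post_mean, threshold):
    """Return the index at which a change is declared, or None.
    Uses the log-likelihood-ratio increment for unit-variance Gaussians."""
    s = 0.0
    for k, x in enumerate(samples):
        # LLR increment for N(post_mean, 1) vs N(pre_mean, 1).
        inc = (post_mean - pre_mean) * (x - (pre_mean + post_mean) / 2.0)
        s = max(0.0, s + inc)  # reset at zero: one-sided test
        if s > threshold:
            return k
    return None

# Mean shifts from 0 to 2 at index 50 (noise-free for clarity).
data = [0.0] * 50 + [2.0] * 50
print(cusum(data, pre_mean=0.0, post_mean=2.0, threshold=5.0))  # -> 52
```

The proposed relative-entropy approach replaces the known `post_mean` with an online estimate, which is what removes CUSUM's requirement of knowing the post-change parameter values.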

Abstract:

This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we attempt to optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations.
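The compactness metric can be sketched directly from the definition of Rényi quadratic entropy with a Gaussian kernel: a well-calibrated scan produces a tight point cloud (high pairwise kernel sum, low entropy), a miscalibrated one a smeared cloud. The bandwidth and point sets below are illustrative; the paper's fast approximation is not reproduced here:

```python
import math

def renyi_quadratic_entropy(points, sigma=1.0):
    """H2 = -log( (1/N^2) * sum_ij G(xi - xj) ), with G a Gaussian kernel
    of variance 2*sigma^2; lower entropy means a more compact point cloud."""
    n = len(points)
    total = 0.0
    for p in points:
        for q in points:
            d2 = sum((a - b) ** 2 for a, b in zip(p, q))
            total += math.exp(-d2 / (4.0 * sigma ** 2))
    return -math.log(total / (n * n))

# A good calibration yields tight points; a bad one smears the same surface.
tight = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]
smeared = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(renyi_quadratic_entropy(tight) < renyi_quadratic_entropy(smeared))  # True
```

The naive double loop is O(N²), which is why the paper's approximate evaluation matters for clouds with millions of points.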

Abstract:

We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.

Abstract:

Vertical vegetation is vegetation growing on, or adjacent to, the unused sunlit exterior surfaces of buildings in cities. Vertical vegetation can improve the energy efficiency of the building on which it is installed mainly by insulating, shading and transpiring moisture from foliage and substrate. Several design parameters may affect the extent of the vertical vegetation's improvement of energy performance. Examples are choice of vegetation, growing medium geometry, north/south aspect and others. The purpose of this study is to quantitatively map out the contribution of several parameters to energy savings in a subtropical setting. The method is thermal simulation based on EnergyPlus configured to reflect the special characteristics of vertical vegetation. Thermal simulation results show that yearly cooling energy savings can reach 25% with realistic design choices in subtropical environments. Heating energy savings are negligible. The most important parameter is the aspect of walls covered by vegetation. Vertical vegetation covering walls facing north (south for the northern hemisphere) will result in the highest energy savings. In making plant selections, the most significant parameter is Leaf Area Index (LAI). Plants with larger LAI, preferably LAI > 4, contribute to greater savings, whereas vertical vegetation with LAI < 2 can actually consume energy. The choice of growing medium and its thickness influence both heating and cooling energy consumption. Changing the growing medium thickness from 6 cm to 8 cm causes a dramatic increase in energy savings, from 2% to 18%. For cooling, it is best to use a growing material with high water retention, due to the importance of evapotranspiration for cooling. Similarly, for increased savings in cooling energy, sufficient irrigation is required. Insufficient irrigation results in the vertical vegetation requiring more energy to cool the building.
To conclude, the choice of design parameters for vertical vegetation is crucial in making sure that it contributes to energy savings rather than energy consumption. Optimal design decisions can create a dramatic sustainability enhancement for the built environment in subtropical climates.

Abstract:

Flexible tubular structures fabricated from solution electrospun fibers are finding increasing use in tissue engineering applications. However, it is difficult to control the deposition of fibers due to the chaotic nature of the solution electrospinning jet. By using non-conductive polymer melts instead of polymer solutions, the path and collection of the fiber become predictable. In this work we demonstrate the melt electrospinning of polycaprolactone in a direct writing mode onto a rotating cylinder. This allows the design and fabrication of tubes using 20 μm diameter fibers with controllable micropatterns and mechanical properties. A key design parameter is the fiber winding angle, which allows control over scaffold pore morphology (e.g. size, shape, number and porosity). Furthermore, a finite element model, validated against mechanical testing results of melt electrospun tubes, is established as a predictive design tool, showing that a lesser winding angle provides improved mechanical response to uniaxial tension and compression. In addition, we show that melt electrospun tubes support the growth of three different cell types in vitro and are therefore promising scaffolds for tissue engineering applications.
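On a rotating collector, the winding angle follows from the ratio of circumferential to axial speed. The relation below is the common filament-winding geometry, stated here as an assumption rather than taken from the paper, and the mandrel dimensions are hypothetical:

```python
import math

def winding_angle(mandrel_rpm, mandrel_diameter_mm, translation_mm_s):
    """Winding angle (degrees, measured from the tube axis) implied by a
    rotating collector: tan(angle) = circumferential speed / axial speed.
    A common geometric approximation, not the paper's own formula."""
    circumferential = math.pi * mandrel_diameter_mm * mandrel_rpm / 60.0
    return math.degrees(math.atan2(circumferential, translation_mm_s))

# Equal circumferential and axial speeds give a 45 degree winding angle.
d = 10.0             # mm mandrel diameter (hypothetical)
rpm = 60.0           # one revolution per second
axial = math.pi * d  # mm/s chosen to equal the surface speed
print(round(winding_angle(rpm, d, axial), 1))  # -> 45.0
```

Slowing the axial translation relative to the mandrel surface speed raises the angle toward 90° (hoop-like winding); speeding it up lowers the angle, the regime the paper associates with improved response to tension and compression.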

Abstract:

This paper presents an approach for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera’s optical center and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. Previous methods for auto-calibration of cameras based on pure rotations fail to work in these two degenerate cases. In addition, our approach includes a modified RANdom SAmple Consensus (RANSAC) algorithm, as well as improved integration of the radial distortion coefficient in the computation of inter-image homographies. We show that these modifications increase the overall efficiency, reliability and accuracy of the homography computation and calibration procedure on both synthetic and real image sequences.
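RANSAC's hypothesize-and-verify loop is the same whatever model is being fit; the sketch below uses 2D line fitting in place of homography estimation to keep the minimal sample small (points, tolerance and iteration count are illustrative, and the paper's modifications to the loop are not reproduced):

```python
import random

def ransac_line(points, iters=200, inlier_tol=0.1, seed=0):
    """Fit y = m*x + c by repeatedly sampling a minimal 2-point model and
    keeping the model with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, no unique line
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + c)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
(m, c), inliers = ransac_line(pts)
print(round(m, 2), round(c, 2), len(inliers))  # -> 2.0 1.0 10
```

For homographies the minimal sample is four point correspondences instead of two points, and the residual is reprojection error; integrating radial distortion into that residual is one of the paper's contributions.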

Abstract:

Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.
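The shape of the approach can be sketched with a plain Metropolis sampler: draw candidate sampling times from a distribution proportional to design efficiency, then report a central interval of the draws as the sampling window. The bell-shaped efficiency curve below (peaked at a hypothetical optimal time t* = 2.0 h) is purely illustrative, standing in for the efficiency of a real population pharmacokinetic design:

```python
import math
import random

def efficiency(t):
    """Hypothetical design-efficiency curve over sampling time t (hours),
    peaked at the optimal design point t* = 2.0."""
    return math.exp(-((t - 2.0) ** 2) / 0.5) if 0.0 <= t <= 8.0 else 0.0

def metropolis(n=5000, step=0.5, seed=1):
    """Metropolis sampler targeting the efficiency curve."""
    rng = random.Random(seed)
    t, samples = 2.0, []
    for _ in range(n):
        cand = t + rng.uniform(-step, step)
        # Accept with probability min(1, efficiency(cand) / efficiency(t)).
        if rng.random() < efficiency(cand) / efficiency(t):
            t = cand
        samples.append(t)
    return samples

samples = sorted(metropolis())
lo, hi = samples[len(samples) // 20], samples[-len(samples) // 20]
print(round(lo, 1), round(hi, 1))  # central ~90% window around t* = 2.0
```

The resulting window is widest where efficiency falls off slowly, which is what makes MCMC-derived windows "time-sensitive": they concede more flexibility exactly where the design can afford it.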

Abstract:

Acoustic emission (AE) is the phenomenon where stress waves are generated due to the rapid release of energy within a material caused by sources such as crack initiation or growth. The AE technique involves recording the stress waves by means of sensors and subsequent analysis of the recorded signals to gather information about the nature of the source. Though the AE technique is one of the popular non-destructive evaluation (NDE) techniques for structural health monitoring of mechanical, aerospace and civil structures, several challenges still exist in its successful application. The presence of spurious noise signals can mask genuine damage-related AE signals; hence a major challenge identified is finding ways to discriminate signals from different sources. Analysis of parameters of recorded AE signals, comparison of amplitudes of AE wave modes and investigation of the uniqueness of recorded AE signals have been mentioned as possible criteria for source differentiation. This paper reviews common approaches currently in use for source discrimination, particularly focusing on structural health monitoring of civil engineering structural components such as beams, and further investigates the applications of some of these methods by analyzing AE data from laboratory tests.
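The signal-parameter analysis mentioned as a discrimination criterion typically reduces each recorded hit to a few features extracted by threshold crossing. A minimal sketch (the synthetic burst, sampling interval and threshold are illustrative):

```python
# Classic AE hit parametrisation: peak amplitude, duration, counts and
# rise time extracted from a waveform by threshold crossing.

def ae_parameters(signal, dt, threshold):
    """Return standard AE hit parameters for the portion of the signal
    exceeding the detection threshold, or None if nothing crosses it."""
    crossings = [i for i, v in enumerate(signal) if abs(v) > threshold]
    if not crossings:
        return None
    first, last = crossings[0], crossings[-1]
    peak_i = max(range(len(signal)), key=lambda i: abs(signal[i]))
    # Counts: number of rising threshold crossings.
    counts = sum(1 for i in range(1, len(signal))
                 if signal[i - 1] <= threshold < signal[i])
    return {
        "amplitude": abs(signal[peak_i]),
        "duration": (last - first) * dt,          # first to last crossing
        "counts": counts,
        "rise_time": (peak_i - first) * dt,       # first crossing to peak
    }

# A toy burst: ramp up, peak, decay.
burst = [0.0, 0.2, 0.6, 1.0, 0.7, 0.4, 0.2, 0.05, 0.0]
print(ae_parameters(burst, dt=1e-6, threshold=0.1))
```

Source discrimination then proceeds by comparing such feature vectors across hits, e.g. noise bursts tend to show different duration-to-counts and rise-time signatures than crack-growth events.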