967 results for Feature Model


Relevance: 30.00%

Abstract:

In model-based vision, there are a huge number of possible ways to match model features to image features. In addition to model shape constraints, there are important match-independent constraints that can efficiently reduce the search without the combinatorics of matching. I demonstrate two specific modules in the context of a complete recognition system, Reggie. The first is a region-based grouping mechanism to find groups of image features that are likely to come from a single object. The second is an interpretive matching scheme to make explicit hypotheses about occlusion and instabilities in the image features.

Relevance: 30.00%

Abstract:

Numerous psychophysical experiments have shown an important role for attentional modulations in vision. Behaviorally, allocation of attention can improve performance in object detection and recognition tasks. At the neural level, attention increases firing rates of neurons in visual cortex whose preferred stimulus is currently attended to. However, it is not yet known how these two phenomena are linked, i.e., how the visual system could be "tuned" in a task-dependent fashion to improve task performance. To answer this question, we performed simulations with the HMAX model of object recognition in cortex [45]. We modulated firing rates of model neurons in accordance with experimental results about effects of feature-based attention on single neurons and measured changes in the model's performance in a variety of object recognition tasks. It turned out that recognition performance could only be improved under very limited circumstances and that attentional influences on the process of object recognition per se tend to display a lack of specificity or raise false alarm rates. These observations lead us to postulate a new role for the observed attention-related neural response modulations.
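The attentional response modulation described above can be sketched as a simple multiplicative gain on model-unit firing rates; the function name and gain value here are illustrative, not taken from the paper:

```python
import numpy as np

def feature_attention(responses, attended, gain=0.3):
    # Multiplicative feature-based attention: units whose preferred feature
    # matches the attended one have their firing rate scaled up, all other
    # units scaled down. The gain value is illustrative only.
    factor = np.where(attended, 1.0 + gain, 1.0 - gain)
    return np.asarray(responses, dtype=float) * factor
```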

Relevance: 30.00%

Abstract:

Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
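As an illustration of how bounded image error propagates to a predicted feature location, here is a minimal sketch under a purely affine (weak-perspective) camera, where a model point transfers to the image via barycentric coordinates of the three matched points. With box-bounded point error the linear program's extremes lie at vertices, so they can be read off in closed form. The setup and names are illustrative, not the paper's algorithm:

```python
import numpy as np

def barycentric(m, m1, m2, m3):
    # Affine (barycentric) coordinates of model point m in the model
    # triangle (m1, m2, m3); under an affine camera these coordinates
    # transfer directly to the matched image points.
    A = np.column_stack([m1 - m3, m2 - m3])
    a, b = np.linalg.solve(A, m - m3)
    return np.array([a, b, 1.0 - a - b])

def uncertainty_box(alpha, q, eps):
    # q: 3x2 matched image points, each with bounded error |dx|,|dy| <= eps.
    # The predicted point is p = sum_i alpha_i * (q_i + d_i); for a box
    # constraint the LP optimum sits at a vertex, so each axis extreme is
    # the nominal prediction +/- eps * sum(|alpha_i|).
    nominal = alpha @ q
    half = eps * np.abs(alpha).sum()
    return nominal - half, nominal + half
```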

Relevance: 30.00%

Abstract:

The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
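A minimal sketch of the feature-scale ingredients mentioned above, assuming Preston's equation for blanket removal and a first-order exponential decay of step height; the functions and parameter choices are illustrative, not the paper's calibrated dishing model:

```python
import math

def blanket_rate(kp, pressure, velocity):
    # Preston's equation: material removal rate = Kp * P * V.
    return kp * pressure * velocity

def step_height(h0, t, tau):
    # First-order planarization approximation: the step height between up
    # and down features decays exponentially with a time constant tau;
    # in a calibrated model tau would follow from pad stiffness and
    # pattern density, both of which are only assumed here.
    return h0 * math.exp(-t / tau)
```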


Relevance: 30.00%

Abstract:

For the tracking of extrema associated with weather systems to be applied to a broad range of fields it is necessary to remove a background field that represents the slowly varying, large spatial scales. The sensitivity of the tracking analysis to the form of background field removed is explored for the Northern Hemisphere winter storm tracks for three contrasting fields from an integration of the U.K. Met Office's (UKMO) Hadley Centre Climate Model (HadAM3). Several methods are explored for the removal of a background field, from the simple subtraction of the climatology to the more sophisticated removal of the planetary scales. Two temporal filters are also considered, in the form of a 2-6-day Lanczos filter and a 20-day high-pass Fourier filter. The analysis indicates that the simple subtraction of the climatology tends to change the nature of the systems to the extent that there is a redistribution of the systems relative to the climatological background, resulting in very similar statistical distributions for both positive and negative anomalies. The optimal planetary wave filter removes total wavenumbers less than or equal to a number in the range 5-7, resulting in distributions more easily related to particular types of weather system. For the temporal filters, the 2-6-day bandpass filter is found to have a detrimental impact on the individual weather systems, resulting in the storm tracks having a weak waveguide type of behavior. The 20-day high-pass temporal filter is less aggressive than the 2-6-day filter and produces results falling between those of the climatological and 2-6-day filters.
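A one-dimensional zonal analogue of the planetary-wave background removal can be sketched with a Fourier filter that zeroes the low wavenumbers (the actual method filters total spherical-harmonic wavenumber; this keeps only the idea):

```python
import numpy as np

def remove_planetary_waves(field, n_cut=5):
    # Remove the slowly varying background by zeroing all zonal
    # wavenumbers 0..n_cut of a periodic 1-D field, then transforming back.
    coeffs = np.fft.rfft(field)
    coeffs[: n_cut + 1] = 0.0
    return np.fft.irfft(coeffs, n=field.size)
```

Applied to a field containing wavenumbers 3 and 10, the filter with `n_cut=5` removes the planetary wave-3 component and leaves the synoptic wave-10 component intact.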

Relevance: 30.00%

Abstract:

The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation, to determine the amount of smoothing objectively, and adaptive smoothing, to vary the smoothing locally, are also considered. Also introduced are techniques for combining seasonal statistical distributions to produce longer-term statistical distributions. Although all calculations are performed globally, only the results for the Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous studies, both for model-based studies and observational studies.
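As a sketch of kernel estimation on the sphere, the following uses a compactly supported quadratic kernel in great-circle distance; the kernel form, normalization, and names are illustrative, not the paper's estimators:

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    # Great-circle distance (radians) between points on the unit sphere.
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

def kernel_density(lat, lon, pts_lat, pts_lon, h):
    # Density estimate at (lat, lon) from feature points, using a
    # compactly supported quadratic kernel in great-circle distance:
    # points farther than the bandwidth h contribute nothing, so the
    # kernel is cheap to evaluate over a global set of tracks.
    d = great_circle(lat, lon, pts_lat, pts_lon)
    w = np.where(d < h, 1.0 - (d / h) ** 2, 0.0)
    return w.sum() / pts_lat.size
```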

Relevance: 30.00%

Abstract:

Techniques used in a previous study of the objective identification and tracking of meteorological features in model data are extended to the unit sphere. An alternative feature detection scheme is described based on cubic interpolation for the sphere and local maximization. The extension of the tracking technique, used in the previous study, to the unit sphere is described. An example of the application of these techniques to a global relative vorticity field from a model integration is presented and discussed.
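The local-maximization step can be sketched on a planar grid as follows; the paper's scheme works on the sphere with cubic interpolation, so this 8-neighbour test is only the flat analogue:

```python
import numpy as np

def local_maxima(field):
    # Flag grid points that strictly exceed all eight neighbours.
    # Padding with -inf makes boundary comparisons well defined.
    f = np.pad(field, 1, mode="constant", constant_values=-np.inf)
    core = f[1:-1, 1:-1]
    mask = np.ones_like(core, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            mask &= core > f[1 + di: f.shape[0] - 1 + di,
                             1 + dj: f.shape[1] - 1 + dj]
    return np.argwhere(mask)
```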

Relevance: 30.00%

Abstract:

Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere, with significant differences in both track density and mean intensity. The systems that correspond between the various reanalyses are considerably reduced and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15.
Also shown are results based on using the same techniques to diagnose the tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
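The track-correspondence idea, including the effect of "relaxing the matching criteria", can be sketched as a greedy pairing by mean separation; the planar distance, thresholds, and track representation here are illustrative simplifications:

```python
import numpy as np

def mean_separation(track_a, track_b):
    # Tracks as dicts {time: (lat, lon)}; mean separation over the times
    # the two tracks share (planar degrees for brevity -- a real
    # implementation would use great-circle distance).
    common = sorted(set(track_a) & set(track_b))
    if not common:
        return np.inf
    d = [np.hypot(track_a[t][0] - track_b[t][0],
                  track_a[t][1] - track_b[t][1]) for t in common]
    return float(np.mean(d))

def match_tracks(tracks_a, tracks_b, threshold=2.0, min_overlap=4):
    # Greedy correspondence: pair each track in A with the closest unused
    # track in B whose mean separation is below the threshold. Loosening
    # `threshold` or `min_overlap` plays the role of relaxing the
    # matching criteria in the comparison above.
    matches, used = [], set()
    for i, ta in enumerate(tracks_a):
        best, best_d = None, threshold
        for j, tb in enumerate(tracks_b):
            if j in used or len(set(ta) & set(tb)) < min_overlap:
                continue
            d = mean_separation(ta, tb)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```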

Relevance: 30.00%

Abstract:

Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (Tmax) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature–mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature–mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
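A minimal sketch of a threshold temperature-mortality model of the kind described, fit by least squares over candidate thresholds; the hockey-stick form and names are illustrative, not the paper's city-specific models:

```python
import numpy as np

def fit_threshold_model(tmax, deaths, candidates):
    # Piecewise-linear ("hockey stick") relationship:
    #   excess deaths = slope * max(0, Tmax - threshold).
    # The threshold is chosen by least squares over candidate values.
    best = None
    for t0 in candidates:
        x = np.maximum(0.0, tmax - t0)
        denom = (x * x).sum()
        slope = (x * deaths).sum() / denom if denom > 0 else 0.0
        sse = ((deaths - slope * x) ** 2).sum()
        if best is None or sse < best[2]:
            best = (t0, slope, sse)
    return best[0], best[1]  # (threshold, slope)
```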

Relevance: 30.00%

Abstract:

FT-IR data of six terminally blocked tripeptides containing Acp (epsilon-aminocaproic acid) reveal that all of them form supramolecular beta-sheets in the solid state. Single crystal X-ray diffraction studies of two peptides not only support these data but also disclose the fact that the supramolecular beta-sheet formation is initiated via dimer formation. Scanning electron microscopic images of all peptides exhibit amyloid-like fibrils that show green birefringence after binding with Congo red, a characteristic feature of the amyloid fibrils that cause many neurodegenerative diseases. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

The study of motor unit action potential (MUAP) activity from electromyographic signals is an important stage in neurological investigations that aim to understand the state of the neuromuscular system. In this context, the identification and clustering of MUAPs that exhibit common characteristics, and the assessment of which data features are most relevant for the definition of such cluster structure, are central issues. In this paper, we propose the application of an unsupervised Feature Relevance Determination (FRD) method to the analysis of experimental MUAPs obtained from healthy human subjects. In contrast to approaches that require a priori knowledge about the data, this FRD method is embedded in a constrained mixture model, known as Generative Topographic Mapping, which simultaneously performs clustering and visualization of MUAPs. The experimental results of the analysis of a data set consisting of MUAPs measured from the surface of the First Dorsal Interosseous, a hand muscle, indicate that the MUAP features corresponding to the hyperpolarization period in the physiological process of generation of muscle fibre action potentials are consistently estimated as the most relevant and, therefore, as those that should be paid preferential attention for the interpretation of the MUAP groupings.

Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. 
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
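The orthogonal-decomposition building block can be illustrated with classical Gram-Schmidt on the columns of a regression matrix; the paper's extended algorithm operates at the level of rule-base subspaces rather than single columns, so this is only the conventional starting point:

```python
import numpy as np

def gram_schmidt(X):
    # Classical (unnormalized) Gram-Schmidt orthogonalization of the
    # columns of a regression matrix X, producing X = Q @ R with Q having
    # mutually orthogonal columns and R unit upper triangular.
    n, m = X.shape
    Q = np.zeros((n, m))
    R = np.eye(m)
    for k in range(m):
        v = X[:, k].astype(float)
        for j in range(k):
            R[j, k] = Q[:, j] @ X[:, k] / (Q[:, j] @ Q[:, j])
            v = v - R[j, k] * Q[:, j]
        Q[:, k] = v
    return Q, R
```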

Relevance: 30.00%

Abstract:

Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method. (c) 2005 Elsevier B.V. All rights reserved.
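The contrast between a Bayes least-squares (marginalized) contour-location estimate and snapping to the strongest feature can be sketched along a single search line; the Gaussian prior and all names here are illustrative, not the MLR model itself:

```python
import numpy as np

def bls_location(positions, contrasts, prior_mean, prior_sd):
    # Posterior-mean (Bayes least-squares) estimate of the contour
    # location from candidate edge positions along a search line: each
    # candidate is weighted by its contrast and by a Gaussian prior
    # centred on the predicted contour, instead of snapping to the
    # single strongest feature.
    w = contrasts * np.exp(-0.5 * ((positions - prior_mean) / prior_sd) ** 2)
    return (w * positions).sum() / w.sum()
```

With two equally strong candidates, the estimate falls between them rather than arbitrarily picking one, which is the robustness property the tracking experiments probe.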

Relevance: 30.00%

Abstract:

Changes to the Northern Hemisphere winter (December, January and February) extratropical storm tracks and cyclones in a warming climate are investigated. Two idealised climate change experiments with HiGEM1.1, a doubled CO2 and a quadrupled CO2 experiment, are compared against a present day control run. An objective feature tracking method is used and a focus given to regional changes. The climatology of extratropical storm tracks from the control run is shown to be in good agreement with ERA-40, while the frequency distribution of cyclone intensity also compares well. In both simulations the mean climate changes are generally consistent with the simulations of the IPCC AR4 models, with a strongly enhanced surface warming at the winter pole and the reduced lower tropospheric warming over the North Atlantic Ocean associated with the slowdown of the Meridional Overturning Circulation (MOC). The circulation changes in the North Atlantic are different between the two idealised simulations with different CO2 forcings. In the North Atlantic the storm tracks are influenced by the slowdown of the MOC, the enhanced surface polar warming, and the enhanced upper tropical troposphere warming, giving a northeastward shift of the storm tracks in the 2XCO2 experiment, but no shift in the 4XCO2 experiment. Over the Pacific, in the 2XCO2 experiment, changes in the mean climate are associated with local temperature changes, while in the 4XCO2 experiment the changes in the Pacific are impacted by the weakened tropical circulation. The storm track changes are consistent with the shifts in the zonal wind. Total cyclone numbers are found to decrease over the Northern Hemisphere with increasing CO2 forcing. Changes in cyclone intensity are examined using 850 hPa vorticity, mean sea level pressure, and 850 hPa winds. The intensity of the Northern Hemisphere cyclones is found to decrease relative to the control.