862 results for Shadow and Highlight Invariant Algorithm.
Abstract:
This work develops two approaches based on fuzzy set theory to solve a class of fuzzy mathematical optimization problems with uncertainties in both the objective function and the constraint set. The first approach adapts an iterative method that obtains cut levels and then maximizes the membership function of the fuzzy decision using a bound search method. The second is a metaheuristic approach that adapts a standard genetic algorithm to operate on fuzzy numbers. Both approaches use a decision criterion, called the satisfaction level, that selects the best solution in the uncertain environment. Selected examples from the literature are presented to compare the methods and validate their efficiency, with emphasis on a fuzzy optimization problem arising in import-export companies in the south of Spain. © 2012 Brazilian Operations Research Society.
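The first approach can be illustrated with a minimal sketch of the classic Bellman-Zadeh max-min decision: a bisection ("bound search") on the satisfaction level alpha, checking at each step whether the alpha-cuts of a fuzzy goal and a fuzzy constraint still intersect. The triangular membership functions and numbers below are illustrative assumptions, not the paper's data.

```python
# Bisection on the satisfaction level alpha (Bellman-Zadeh max-min decision).
# Triangular fuzzy sets and values are illustrative, not from the paper.

def tri_alpha_cut(a, b, c, alpha):
    """alpha-cut [lo, hi] of a triangular fuzzy set (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def max_satisfaction(goal, constraint, tol=1e-6):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        alpha = 0.5 * (lo + hi)
        g_lo, g_hi = tri_alpha_cut(*goal, alpha)
        c_lo, c_hi = tri_alpha_cut(*constraint, alpha)
        # alpha is attainable iff the two alpha-cut intervals overlap
        if max(g_lo, c_lo) <= min(g_hi, c_hi):
            lo = alpha          # feasible: try a higher satisfaction level
        else:
            hi = alpha
    return lo

# Example: fuzzy goal "about 40" vs. fuzzy constraint "about 35" -> ~0.75
print(max_satisfaction(goal=(30, 40, 50), constraint=(25, 35, 45)))
```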
Abstract:
Using an approach based on the Casimir operators of the de Sitter group, conformally invariant equations for a fundamental spin-2 field are obtained and their consistency is discussed. It is shown that the field equation is both conformally and gauge invariant only when the spin-2 field is interpreted as a 1-form assuming values in the Lie algebra of the translation group, rather than as a symmetric second-rank tensor. © 2013 Pleiades Publishing, Ltd.
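For orientation, the two representations contrasted above can be written schematically as follows; the index conventions are assumptions for illustration and are not taken from the paper.

```latex
% Symmetric second-rank tensor picture (the usual metric-perturbation choice):
\[ \phi_{\mu\nu} = \phi_{\nu\mu} \]
% 1-form assuming values in the Lie algebra of the translation group,
% with translation generators P_a (the interpretation favored above):
\[ \phi = \phi^{a}{}_{\mu}\, P_{a} \otimes dx^{\mu} \]
```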
Abstract:
The aim of this study was to compare the efficacy of a direct clinical evaluation method with an indirect digital photographic method in assessing the quality of dental restorations. Seven parameters (color, occlusal marginal adaptation, anatomical form, roughness, occlusal marginal staining, luster, and secondary caries) were assessed in 89 Class I and Class II restorations from 36 adults using the modified US Public Health Service/Ryge criteria. Standardized photographs of the same restorations were digitally processed with Adobe Photoshop and separated into four groups: Group A, the original photographs displayed at 100%, without modifications (IMG100); Group B, images enlarged to 150% (IMG150); Group C, photographs displayed at 100% with digital modifications (levels adjustment, shadow and highlight correction, color balance, and unsharp mask) (mIMG100); and Group D, photographs enlarged to 150% with the same adjustments as Group C (mIMG150). Photographs were assessed on a calibrated screen (MacBook) by two calibrated clinicians, and the results were statistically analyzed using Wilcoxon tests (SPSS 11.5) at a 95% confidence level. Results: The photographic method produced higher reliability levels than the direct clinical method for all parameters. The evaluation of digital images is more consistent with clinical assessment when restorations present a moderate defect (Bravo) and less consistent when restorations are clinically classified as satisfactory (Alpha) or show severe defects (Charlie). Conclusion: The digital photographic method is a useful tool for assessing the quality of dental restorations, providing information that goes unnoticed with the visual-tactile clinical examination method. Additionally, when restorations are analyzed using the modified Ryge criteria, the digital photographic method reveals significantly more defects than are clinically observed with the naked eye. Photography by itself, without enlargement or correction, provides more information than clinical examination and can lead to unnecessary overtreatment.
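The Group C/D adjustments can be approximated programmatically. The sketch below uses Pillow as an open-source stand-in for the Photoshop operations named above (levels, shadow/highlight, color balance, unsharp mask); the specific parameter values are illustrative assumptions, not the study's settings.

```python
# Hedged approximation of the study's photo adjustments using Pillow.
from PIL import Image, ImageFilter, ImageOps

def enhance_restoration_photo(path_in, path_out, scale=1.0):
    img = Image.open(path_in).convert("RGB")
    # Levels-like adjustment: stretch the histogram, clipping 1% at each end
    img = ImageOps.autocontrast(img, cutoff=1)
    # Rough shadow/highlight recovery via a mild gamma lift on all channels
    img = img.point(lambda v: int(255 * (v / 255) ** 0.9))
    # Unsharp mask, analogous to Photoshop's filter of the same name
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
    if scale != 1.0:  # e.g. 1.5 for the 150% display groups
        img = img.resize((int(img.width * scale), int(img.height * scale)))
    img.save(path_out)

# enhance_restoration_photo("restoration.jpg", "restoration_mod.jpg", scale=1.5)
```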
Abstract:
The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study used clinical cases, originally calculated with pencil beam convolution (PBC), in which large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution/superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The algorithms were compared under static field irradiations at 6 MV and 15 MV, using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm performed better overall than AAA when compared to VMC++, but AAA remains an attractive option for routine clinical use due to its short computation times. Dose differences between the algorithms and VMC++ for the median dose to the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung cases than in the breast cases with respect to the accuracy of the dose calculation, as observed in the dose-volume histograms obtained from the Monte Carlo simulations.
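The comparison metric quoted above (percent difference in median PTV dose against the Monte Carlo reference) is straightforward to compute from 3D dose grids and a PTV mask. The sketch below is a minimal illustration with synthetic arrays; names, shapes, and values are assumptions, not the paper's data.

```python
# Percent difference in median PTV dose, algorithm vs. Monte Carlo reference.
import numpy as np

def median_ptv_dose_diff(dose_algo, dose_mc, ptv_mask):
    """Percent difference of the median dose inside the PTV mask."""
    d_algo = np.median(dose_algo[ptv_mask])
    d_mc = np.median(dose_mc[ptv_mask])
    return 100.0 * (d_algo - d_mc) / d_mc

# Toy example on a synthetic 3D dose grid
rng = np.random.default_rng(0)
shape = (40, 40, 40)
mask = np.zeros(shape, dtype=bool)
mask[15:25, 15:25, 15:25] = True              # cubic "PTV"
dose_mc = 60.0 + rng.normal(0.0, 0.5, shape)  # reference dose (Gy)
dose_aaa = dose_mc * 1.004                    # pretend algorithm ~0.4% high
print(f"{median_ptv_dose_diff(dose_aaa, dose_mc, mask):+.2f} %")
```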
Abstract:
In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation, which is used at the rendering side to generate synthetic views for free-viewpoint video. Both types of data (view and depth) are encoded using two H.264/AVC encoders. In this scenario, we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes was conducted for both the view and depth sequences in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in the quality of the rendered synthetic views, as measured with the Video Quality Metric.
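A hedged sketch of the idea behind such a speed-up: reuse the mode and motion vector already chosen for the co-located view (texture) macroblock as the first candidate when encoding the depth map, and skip the full search when that candidate is already cheap. The cost function, threshold, and data structures below are illustrative assumptions, not the paper's algorithm.

```python
# Fast depth-map mode decision reusing co-located view (texture) decisions.
def encode_depth_mb(mb_xy, view_decisions, rd_cost, full_search, skip_thresh):
    """Return (mode, mv, cost) for one depth macroblock."""
    cand_mode, cand_mv = view_decisions[mb_xy]    # inherited from the view
    cand_cost = rd_cost(mb_xy, cand_mode, cand_mv)
    if cand_cost <= skip_thresh:
        return cand_mode, cand_mv, cand_cost      # skip the expensive search
    return full_search(mb_xy)                     # fallback: exhaustive search

# Toy usage with stand-in callables
view_decisions = {(0, 0): ("INTER_16x16", (1, -2))}
rd_cost = lambda xy, mode, mv: 42.0               # pretend RD cost
full_search = lambda xy: ("INTER_8x8", (0, 0), 40.0)
print(encode_depth_mb((0, 0), view_decisions, rd_cost, full_search, skip_thresh=50.0))
```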
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture-of-experts (ME) networks for multiclass classification. We identify some situations where applying the EM algorithm to train MLP networks may be of limited value and discuss ways of handling the difficulties. For ME networks, it has been reported in the literature that networks trained by the EM algorithm, with the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often perform poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood increases monotonically when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks; its performance is demonstrated to be superior to that of the IRLS algorithm on some simulated and real data sets.
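The stabilization reported above can be made concrete with a minimal sketch: an IRLS (Newton) update for logistic regression damped by a learning rate eta < 1, so each M-step-style iteration takes only a partial Newton step. The model and data are illustrative; the paper applies the damped update inside the EM loop of a mixture-of-experts network.

```python
# Damped IRLS for binary logistic regression (learning rate eta < 1).
import numpy as np

def irls_logistic(X, y, eta=0.5, iters=50):
    """Fit logistic regression by damped IRLS; returns the weight vector."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))               # predicted probabilities
        R = p * (1.0 - p)                              # IRLS weights (diagonal)
        H = X.T @ (R[:, None] * X) + 1e-8 * np.eye(d)  # regularized Hessian
        g = X.T @ (y - p)                              # log-likelihood gradient
        w = w + eta * np.linalg.solve(H, g)            # eta < 1: partial step
    return w

# Toy data: 2D problem with a bias column
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_w = np.array([-0.5, 2.0, -1.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)
print(irls_logistic(X, y, eta=0.8))
```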
Abstract:
In this paper we present an algorithm that combines a low-level morphological operation with a model-based Global Circular Shortest Path scheme to segment the right ventricle. Traditional morphological operations are employed to obtain the region of interest and adjust it to generate a mask. The image cropped by the mask is then partitioned into a few overlapping regions, and the Global Circular Shortest Path algorithm is applied to extract the contour from each partition. The final step reassembles the partitions to create the whole contour. The technique proved reliable and robust, as illustrated by the very good agreement between the extracted contours and the expert's manual delineations.
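The circular shortest path step admits a simple dynamic-programming sketch: unwrap the masked image into polar coordinates around the ROI centre, assign a cost to each (angle, radius) sample, and find the cheapest radius profile whose first and last columns coincide, which closes the contour. This is a hedged illustration of the principle, not the paper's Global CSP scheme; the cost choice and parameters are assumptions.

```python
# Circular shortest path by DP over a polar (angle x radius) cost grid.
import numpy as np

def circular_shortest_path(cost, max_jump=1):
    """cost: (n_angles, n_radii) array; returns (radius index per angle, total)."""
    n_t, n_r = cost.shape
    best_path, best_total = None, np.inf
    for start in range(n_r):                    # enforce path[0] == path[-1]
        dp = np.full((n_t, n_r), np.inf)
        back = np.zeros((n_t, n_r), dtype=int)
        dp[0, start] = cost[0, start]
        for t in range(1, n_t):
            for r in range(n_r):
                lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
                prev = lo + int(np.argmin(dp[t - 1, lo:hi]))
                dp[t, r] = cost[t, r] + dp[t - 1, prev]
                back[t, r] = prev
        if dp[-1, start] < best_total:          # contour closes at `start`
            best_total = dp[-1, start]
            path = [start]
            for t in range(n_t - 1, 0, -1):
                path.append(back[t, path[-1]])
            best_path = path[::-1]
    return np.array(best_path), best_total

# Toy cost: a wavy low-cost ring in a 64-angle x 20-radius grid
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
cost = np.ones((64, 20))
cost[np.arange(64), (10 + np.round(2 * np.sin(t))).astype(int)] = 0.0
path, total = circular_shortest_path(cost)
print(path[:8], total)
```

Restarting the DP from every candidate radius makes this O(n_r^2 · n_t · max_jump), which is acceptable for a sketch but is where a production Global CSP scheme would optimize.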
Abstract:
Spatial generalization skills in school children aged 8-16 were studied with regard to unfamiliar objects that had previously been learned in a cross-modal priming and learning paradigm. We observed a developmental dissociation, with younger children recognizing objects only from previously learnt perspectives, whereas older children generalized acquired object knowledge to new viewpoints as well. Haptic and, to a lesser extent, visual priming improved spatial generalization in all but the youngest children. The data support the idea of dissociable, view-dependent and view-invariant object representations with different developmental trajectories that are subject to modulatory effects of priming. Late-developing areas in the parietal or prefrontal cortex may account for the delayed onset of view-invariant object recognition. © 2006 Elsevier B.V. All rights reserved.
Abstract:
It has been suggested that the deleterious effect of contrast reversal on visual recognition is unique to faces, not objects. Here we show, from priming, supervised category learning, and generalization, that recognition of non-face objects is not generally invariant to contrast reversal or, likewise, to changes in the direction of illumination. However, when recognition varies with rendering conditions, invariance may be restored, and the effects of continuous learning reduced, by providing prior object knowledge through active sensation. Our findings suggest that the degree of contrast invariance achieved reflects functional characteristics of object representations learned in a task-dependent fashion.