953 results for Euclidean distance model


Relevance: 90.00%

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising visual semantics to regions of interest in images with textual keywords. The primary image modality and the collateral textual modality are exploited in a mutually co-referencing and complementary fashion. Collateral content- and context-based knowledge is used to bias the mapping from low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised that combines statistical methods, such as Gaussian distribution modelling and Euclidean distance, with a collateral content- and context-driven inference mechanism. We also introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework: two different high-level image feature vector models are developed from the CCL labelling results, for image data clustering and retrieval, respectively. A subset of the Corel image collection was used to evaluate the proposed method. The experimental results to date indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
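
As a minimal, hypothetical sketch of the distance-based mapping step described above (the function names, the biasing rule, and all data shapes are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def label_regions(regions, prototypes, cooccurrence, image_keywords):
    """Assign each image region the nearest visual-vocabulary concept,
    biased by collateral context.

    regions: (R, D) low-level region feature vectors
    prototypes: (K, D) concept prototype vectors
    cooccurrence: (K, K) visual-keyword co-occurrence matrix
    image_keywords: list of concept indices suggested by the collateral text
    """
    # Euclidean distance from every region to every concept prototype
    dists = np.linalg.norm(regions[:, None, :] - prototypes[None, :, :], axis=2)
    # Context bias: concepts that co-occur with this image's keywords get a boost
    context = cooccurrence[:, image_keywords].sum(axis=1)
    context = context / (context.sum() + 1e-9)
    scores = -dists + context[None, :]  # higher is better
    return scores.argmax(axis=1)        # one concept label per region
```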

Relevance: 90.00%

Abstract:

An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms based mainly on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
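
A minimal sketch of one emergent metric mentioned above, the average Euclidean distance from the soma to the dendritic terminations (the coordinates are made up; real morphologies would come from L-Neuron or ArborVitae output):

```python
import numpy as np

def mean_termination_distance(soma, terminations):
    """soma: (3,) soma position; terminations: (T, 3) terminal tip positions (µm)."""
    return np.linalg.norm(terminations - soma, axis=1).mean()

soma = np.zeros(3)
tips = np.array([[120.0, 30.0, -15.0], [90.0, -60.0, 40.0]])  # illustrative tips
print(mean_termination_distance(soma, tips))
```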

Relevance: 90.00%

Abstract:

Image registration is a fundamental step that greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, and other tasks in which the final solution blends pixel information from more than one image. It is highly desirable to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. The affine scale-invariant feature transform (ASIFT) is used to acquire feature points and correspondences on the input images. A homography matrix is then estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. To identify a registration region with a better spatial distribution of feature points, the Euclidean distance between the feature points is applied (named the S criterion). Finally, the parameters of the homography matrix are optimized by the Levenberg–Marquardt (LM) algorithm using selected feature points from the chosen registration region. In the experimental section, Chang'E-2 satellite remote sensing imagery is used to evaluate the performance of the proposed method. The results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving a lower root mean square error (RMSE) and a better distribution of feature points.
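
A hedged sketch of the S criterion as described: candidate registration regions are scored by the spatial spread of their matched feature points, computed from pairwise Euclidean distances. The exact scoring rule in the paper may differ; the mean pairwise distance is used here as an illustrative spread measure:

```python
import numpy as np
from scipy.spatial.distance import pdist

def spread_score(points):
    """points: (N, 2) pixel coordinates of matched feature points in a region."""
    return pdist(points).mean() if len(points) > 1 else 0.0  # Euclidean pairs

def best_region(candidate_regions):
    """candidate_regions: list of (N_i, 2) arrays; returns the best-spread one."""
    return max(candidate_regions, key=spread_score)
```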

Relevance: 90.00%

Abstract:

Graduate program in Agronomy (Energy in Agriculture) - FCA

Relevance: 90.00%

Abstract:

Graduate program in Agronomy (Genetics and Plant Breeding) - FCAV

Relevance: 90.00%

Abstract:

Objective: Raman spectroscopy has been employed to discriminate between malignant (basal cell carcinoma [BCC] and melanoma [MEL]) and normal (N) skin tissues in vitro, aimed at developing a method for cancer diagnosis. Background Data: Raman spectroscopy is an analytical tool that could be used to diagnose skin cancer rapidly and noninvasively. Methods: Skin biopsy fragments of ~2 mm² from excisional surgeries were scanned through a Raman spectrometer (830 nm excitation wavelength, 50 to 200 mW of power, and 20 s exposure time) coupled to a fiber-optic Raman probe. Principal component analysis (PCA) and Euclidean distance were employed to develop a discrimination model to classify samples according to histopathology. In this model, we used a set of 145 spectra from N (30 spectra), BCC (96 spectra), and MEL (19 spectra) skin tissues. Results: We demonstrated that principal components (PCs) 1 to 4 accounted for 95.4% of all spectral variation. These PCs were spectrally correlated to the biochemicals present in tissues, such as proteins, lipids, and melanin. The scores of PC2 and PC3 revealed statistically significant differences among N, BCC, and MEL (ANOVA, p < 0.05) and were used in the discrimination model. A total of 28 out of 30 spectra were correctly diagnosed as N, 93 out of 96 as BCC, and 13 out of 19 as MEL, with an overall accuracy of 92.4%. Conclusions: This discrimination model based on PCA and Euclidean distance could differentiate N from malignant (BCC and MEL) tissues with high sensitivity and specificity.
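
The discrimination scheme lends itself to a short sketch: project the spectra onto principal components, then assign each spectrum to the class whose mean PC2-PC3 score vector is nearest in Euclidean distance. The component choice follows the abstract; the data and helper names are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_centroids(spectra, labels, n_pcs=4):
    """spectra: (n, wavelengths) array; labels: (n,) array of 'N', 'BCC', 'MEL'."""
    pca = PCA(n_components=n_pcs).fit(spectra)
    scores = pca.transform(spectra)[:, 1:3]  # PC2 and PC3, as in the study
    centroids = {c: scores[labels == c].mean(axis=0) for c in np.unique(labels)}
    return pca, centroids

def classify(pca, centroids, spectrum):
    """Return the class whose centroid is nearest in Euclidean distance."""
    s = pca.transform(spectrum[None, :])[0, 1:3]
    return min(centroids, key=lambda c: np.linalg.norm(s - centroids[c]))
```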

Relevance: 90.00%

Abstract:

The present data set was used as a training set for a Habitat Suitability Model. It contains occurrence records (presence-only) of living Lophelia pertusa reefs on the Irish continental margin, assembled from databases, cruise reports and publications. A total of 4423 records were inspected and quality-assessed to ensure that they (1) represented confirmed living L. pertusa reefs (excluding 2900 records of dead coral and isolated coral colonies); (2) were derived from sampling equipment that allows accurate (<200 m) geo-referencing (excluding 620 records derived mainly from trawling and dredging activities); and (3) were not duplicated. A total of 245 occurrences were retained for the analysis. Coral observations are highly clustered in regions targeted by research expeditions, which might lead to falsely inflated model evaluation measures (Veloz, 2009). Therefore, we coarsened the distribution data by deleting all but one record within grid cells of 0.02° resolution (Davies & Guinotte 2011). The remaining 53 points were subject to a spatial cross-validation process: a random presence point was chosen, grouped with its 12 closest neighbouring presence points based on Euclidean distance, and withheld from model training. This process was repeated for all records, resulting in 53 replicates of spatially non-overlapping sets of test (n=13) and training (n=40) data. The final 53 occurrence records were used for model training.
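
A minimal sketch of the spatial cross-validation split described above: each presence point is grouped with its 12 nearest neighbours (Euclidean distance on the coordinates) and withheld as the test fold, leaving the rest for training. Details such as the coordinate units are assumptions:

```python
import numpy as np

def spatial_folds(coords, k=12):
    """coords: (N, 2) positions of presence points; yields (train_idx, test_idx)."""
    n = len(coords)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        test = np.argsort(d)[: k + 1]             # the point plus its k neighbours
        train = np.setdiff1d(np.arange(n), test)  # remaining points for training
        yield train, test                         # one replicate per presence point
```

With 53 points and k=12 this yields 53 replicates with test n=13 and training n=40, matching the data set description.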

Relevance: 90.00%

Abstract:

Decision making in the energy sector becomes complex in the face of dissimilar options and objectives. To manage this complexity, a wide range of decision-support methods for energy projects has been developed. Over the last decade, the energization of isolated rural communities has been a priority for many governments seeking to mitigate migration from the countryside to the city. Decision making for such projects must project their influence on economic, environmental and social costs. This work therefore defines an original model named Clean and Native Energy Generation (Generación Energética Autóctona Y Limpia, GEAYL), applied to an isolated rural community in Granma Province, Cuba. The model builds on two predecessors, PAMER and SEMA, and constitutes a multicriteria, multiobjective energy planning procedure for this context. Five objective functions are posed: F1, minimizing energy costs; F2, minimizing CO2 emissions; F3, minimizing NOx emissions; F4, minimizing SOx emissions (with coefficients obtained from the specialized literature); and F5, maximizing the social acceptability of the energy supply. Function F5 and the way its coefficients are obtained constitute the novelty of this work: they were determined by applying the Analytic Hierarchy Process (AHP) to data from a survey of end users of the energy and of experts. To determine the optimal energy supply, several methods were employed: the weighted sum, the weighted product, and the Manhattan (L1), Euclidean (L2) and L3 distances. Different weight vectors were applied to these metrics to represent the decision makers' different preference structures. It is concluded that taking the social acceptability of the energy into account as a model objective influences the energy supply of each alternative.
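
The distance-to-ideal ranking described above can be sketched briefly. The normalization and ideal-point choice below are illustrative assumptions; only the use of weighted L1, L2 and L3 distances follows the text:

```python
import numpy as np

def rank_alternatives(F, weights, p=2):
    """F: (A, 5) objective values per energy alternative, all oriented so that
    smaller is better (social acceptability entered with its sign flipped);
    weights: (5,) decision-maker weight vector; p: 1, 2 or 3 for L1/L2/L3."""
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)  # scale to [0, 1]
    ideal = Fn.min(axis=0)                                  # best value per objective
    d = (weights * np.abs(Fn - ideal) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(d)                                    # best alternative first
```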

Relevance: 90.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing complexity, some of these variants tend to degrade the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics was performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In this analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, was studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement was achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error was analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
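
A compact sketch of the point-to-point ICP loop with a pluggable correspondence metric, in the spirit of the analysis above: the Minkowski order p selects the Euclidean (p=2) or the cheaper Manhattan (p=1) distance in the nearest-neighbour search, while the rigid update uses the standard SVD (Kabsch) solution. This is a generic illustration, not the thesis' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30, p=2.0):
    """Rigidly align source to target; p chooses the correspondence metric."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src, p=p)      # nearest neighbours under metric L_p
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)  # cross-covariance of the pairing
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - cs) @ R.T + cm        # apply the rigid update
    return src
```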

Relevance: 90.00%

Abstract:

Organizational socialization theory and university student retention literature support the concept that social integration influences new recruits' level of satisfaction with the organization and their decision to remain. This three-phase study proposes and tests a Cultural Distance Model of student retention based on Tinto's (1975) Student Integration Model, Louis' (1980) Model of Newcomer Experience, and Kuh and Love's (2000) theory relating cultural distance to departure from the organization. The main proposition tested was that the greater the cultural distance, the greater the likelihood of early departure from the organization. Accordingly, it was inferred that new recruits entering the university culture experience some degree of social and psychological distance, and that the extent of this distance influences satisfaction with the institution and intent to remain in subsequent years. The model was tested through two freshman surveys designed to examine the effects of cultural distance on non-Hispanic students at a predominantly Hispanic, urban, public university. The first survey was administered eight weeks into the first Fall semester and the second at the end of the first year. Retention was determined through re-enrollment for the second Fall semester. Path analysis tested the viability of the hypothesis relating cultural distance to satisfaction and retention as suggested in the model, and logistic regression tested the model's predictive power. Correlations among variables were significant, accounting for 54% of the variance in students' decisions to return for the second year, with 96% prediction accuracy. Initial feelings of high cultural distance were related to increased dissatisfaction with social interactions and institutional choice at the end of the first year, and to students' intention not to re-enroll. The path analysis results supported the view that the construct of cultural distance incorporates both social and psychological distance, and comprises beliefs about the institution's fit with one's cultural expectations, individual comfort with that fit, and the consequent sense of "belonging" or identifying with the institution.
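
As a toy illustration of the final testing step (logistic regression predicting re-enrollment), with entirely hypothetical variables and data, not the study's survey items:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: perceived cultural distance, satisfaction score (both made up)
X = np.array([[0.8, 3.5], [2.6, 1.9], [0.4, 4.2], [3.1, 2.0]])
y = np.array([1, 0, 1, 0])  # 1 = re-enrolled for the second Fall semester

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1.5, 3.0]]))  # [P(not re-enrolled), P(re-enrolled)]
```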

Relevance: 90.00%

Abstract:

This study examines the effects of cultural distance on student retention at an urban, Hispanic-serving university. A Cultural Distance Model based on retention research in higher education and organizational socialization theory is posed, and the first half of the model is tested using path analysis, with results supporting most of the model's assumptions.

Relevance: 90.00%

Abstract:

Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. To estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using algorithms with the Euclidean distance as the dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS, a structure-tensor-based regularization term, to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. We then proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both the PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.
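
The local-subset idea behind AGNN/GOC can be contrasted with the Euclidean baseline it improves on. The sketch below shows that baseline, plain k-nearest-neighbour selection of training patches; the actual AGNN and GOC rules are geometry-aware and differ from this:

```python
import numpy as np

def local_subset(train_patches, query, k=50):
    """train_patches: (N, D) vectorized patches; query: (D,) input test patch.
    Returns the k training patches nearest in Euclidean distance, from which a
    local reconstruction model would then be computed."""
    d = np.linalg.norm(train_patches - query, axis=1)  # Euclidean dissimilarity
    return train_patches[np.argsort(d)[:k]]
```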

Relevance: 90.00%

Abstract:

Marine mammals exploit the efficiency of sound propagation in the marine environment for essential activities like communication and navigation. For this reason, passive acoustics has particularly high potential for marine mammal studies, especially those aimed at population management and conservation. Despite the rapid realization of this potential through a growing number of studies, much crucial information remains unknown or poorly understood. This research attempts to address two key knowledge gaps, using the well-studied bottlenose dolphin (Tursiops truncatus) as a model species and underwater acoustic recordings collected on four fixed autonomous sensors deployed at multiple locations in Sarasota Bay, Florida, between September 2012 and August 2013. Underwater noise can hinder dolphin communication, and the ability of these animals to overcome this obstacle was examined using recorded noise and dolphin whistles. I found that bottlenose dolphins are able to compensate for increased noise in their environment using a wide range of strategies, employed singly or in various combinations depending on the frequency content of the noise, the noise source, and the time of day. These strategies include modifying whistle frequency characteristics, increasing whistle duration, and increasing whistle redundancy. The recordings were also used to evaluate the performance of six recently developed passive acoustic abundance estimation methods, by comparing their results to the true abundance of animals obtained via a census conducted within the same area and time period. The methods were broadly divided into two categories: those involving direct counts of animals, and those involving counts of cues (signature whistles). The animal-based methods were traditional capture-recapture, spatially explicit capture-recapture (SECR), and an approach that blends the "snapshot" method with mark-recapture distance sampling, referred to here as SMRDS. The cue-based methods were conventional distance sampling (CDS), an acoustic modelling approach involving the passive sonar equation, and SECR. In the latter approach, detection probability was modelled as a function of sound transmission loss rather than the Euclidean distance typically used. Of these methods, SMRDS produced the most accurate estimate, but SECR demonstrated the greatest potential for broad applicability to other species and locations with minimal or no auxiliary data, such as the distance from the sound source to the detector(s), which is often difficult to obtain. This was especially true when the method was compared to traditional capture-recapture results, which greatly underestimated abundance despite attempts to account for major unmodelled heterogeneity. Furthermore, the incorporation of non-Euclidean distance significantly improved model accuracy. The acoustic modelling approach performed similarly to CDS, but both methods also strongly underestimated abundance. In particular, CDS proved inefficient: it requires at least three sensors for localization at a single point, accurate distances were difficult to obtain, and the sample size was greatly reduced by the failure to detect some whistles on all three recorders. As a result, this approach is not recommended for marine mammal abundance estimation when few recorders are available, or in high-sound-attenuation environments with relatively low sample sizes. It is hoped that these results will lead to more informed management decisions and, therefore, to more effective species conservation.
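
The substitution described for SECR, replacing Euclidean range with sound transmission loss in the detection function, can be sketched as follows. The spreading-plus-absorption form of TL and the half-normal detection function are textbook stand-ins, not the study's fitted model:

```python
import numpy as np

def transmission_loss(r_m, absorption_db_per_km=0.05):
    """Spherical spreading plus absorption: TL = 20*log10(r) + a*r (in dB)."""
    return 20.0 * np.log10(r_m) + absorption_db_per_km * r_m / 1000.0

def detection_prob(r_m, g0=0.9, sigma_db=40.0):
    """Half-normal detection function driven by TL instead of Euclidean range."""
    return g0 * np.exp(-transmission_loss(r_m) ** 2 / (2.0 * sigma_db ** 2))

print(detection_prob(np.array([100.0, 1000.0, 5000.0])))  # falls with range
```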

Relevance: 80.00%

Abstract:

Objectives: The aim of this work was to verify the differentiation between normal and pathological human carotid artery tissues by using fluorescence and reflectance spectroscopy in the 400- to 700-nm range, with spectral characterization by means of principal components analysis. Background Data: Atherosclerosis is the most common and serious pathology of the cardiovascular system. Principal components represent the main spectral characteristics that occur within the spectral data and could be used for tissue classification. Materials and Methods: Sixty postmortem carotid artery fragments (26 non-atherosclerotic and 34 atherosclerotic with non-calcified plaques) were studied. The excitation radiation consisted of a 488-nm argon laser. Two 600-µm-core optical fibers were used, one for excitation and one to collect the fluorescence radiation from the samples. The reflectance system was composed of a halogen lamp coupled to an excitation fiber positioned in one of the ports of an integrating sphere, which delivered 5 mW to the sample. The photo-reflectance signal was coupled to a 1/4-m spectrograph via an optical fiber. Euclidean distance was then used to classify each principal component score into one of two classes, normal or atherosclerotic tissue, for both fluorescence and reflectance. Results: Principal components analysis allowed classification of the samples with 81% sensitivity and 88% specificity for fluorescence, and 81% sensitivity and 91% specificity for reflectance. Conclusions: Our results showed that principal components analysis can be applied to differentiate between normal and atherosclerotic tissue with high sensitivity and specificity.
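
For reference, the reported sensitivity and specificity follow from a 2x2 confusion matrix (atherosclerotic taken as the positive class). The counts below are placeholders, not the study's raw data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=40, fn=10, tn=45, fp=5)  # hypothetical counts
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 80%, 90%
```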

Relevance: 80.00%

Abstract:

Studies on the feeding habits of aquatic organisms are a requirement for the management and sustainable use of marine ecosystems. The aim of the present research was to analyze the habits and trophic similarities of decapods, starfish and fish in order to propose trophic relationships between taxa, using Hennigian methods of phylogenetic systematics. This new grouping hypothesis, based on shared and exclusive food items and food types, corresponds to the broad taxonomic groups used in the analysis. Our results indicate that algae, Mollusca, Polychaeta, Crustacea, Echinodermata and Actinopterygii are the most exploited common resources among the species studied. Starfish were differentiated from the other organisms for being stenophagic, and were grouped for feeding on bivalve mollusks. A larger group of fish and crustaceans shared algae and, mainly, crustaceans as food items. A third group united all eight species of Actinopterygii; this largest subgroup of fish is typically carnivorous, feeding on Anthozoa and a great quantity of Crustacea. Synodus foetens occupies a special position among the fishes because of its unique feeding on nematodes. A Euclidean distance dendrogram obtained in a previous publication grouped S. foetens with the starfish; that result was based on a few non-exclusive shared similarities in feeding modes, as well as on shared absences of items, which are not an adequate grouping factor. Starfish are stenophagic, eating bivalves almost exclusively. Synodus foetens and Isopisthus parvipinnis have restricted food items, and are thus intermediate between the starfish, decapods and other fish, which are euryphagous. The trophic cladogram displays details of food items, whether or not shared by all species. The resulting trophic analysis is consistent with known historical relationships.
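
A minimal sketch of the kind of Euclidean-distance dendrogram the text contrasts with the cladistic analysis: species as rows, diet composition as columns, clustered hierarchically. The diet matrix is invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

species = ["S. foetens", "I. parvipinnis", "starfish sp.", "decapod sp."]
diet = np.array([            # columns: bivalves, algae, crustaceans, nematodes
    [0.0, 0.0, 0.2, 0.8],
    [0.0, 0.1, 0.8, 0.1],
    [0.9, 0.0, 0.1, 0.0],
    [0.1, 0.4, 0.4, 0.1],
])

Z = linkage(pdist(diet, metric="euclidean"), method="average")
dendrogram(Z, labels=species)  # drawing requires matplotlib to be installed
```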