813 results for Euclidean Distance
Abstract:
The most commonly used method for formally assessing grapheme-colour synaesthesia (i.e., experiencing colours in response to letter and/or number stimuli) involves selecting colours from a large colour palette on several occasions and measuring the consistency of the colours selected. However, the ability to diagnose synaesthesia using this method depends on several factors that have not been directly contrasted. These include the type of colour space used (e.g., RGB, HSV, CIELUV, CIELAB) and different measures of consistency (e.g., city-block and Euclidean distance in colour space). This study aims to find the most reliable way of diagnosing grapheme-colour synaesthesia by maximising sensitivity (i.e., the ability of a test to identify true synaesthetes) and specificity (i.e., the ability of a test to identify true non-synaesthetes). Applying ROC (Receiver Operating Characteristic) analysis to the binary classification of a large sample of self-declared synaesthetes and non-synaesthetes, we show that the consistency criterion (i.e., cut-off value) for diagnosing synaesthesia is considerably higher than the current standard in the field. We also show that methods based on the perceptual CIELUV and CIELAB colour models (rather than RGB and HSV colour representations) and Euclidean distances offer even greater sensitivity and specificity than most currently used measures. Together, these findings offer improved heuristics for the behavioural assessment of grapheme-colour synaesthesia.
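As an illustration of the consistency measure this abstract describes, the sketch below scores one grapheme from three repeated colour picks under either metric; the colour values and the idea of a fixed cutoff are hypothetical stand-ins, not the study's actual parameters.

```python
import numpy as np

def consistency_score(picks, metric="euclidean"):
    """Mean pairwise distance between repeated colour picks for one grapheme.

    picks: (n_trials, 3) array of colour coordinates (e.g. CIELAB L*, a*, b*).
    Lower scores mean more consistent picks; a score below some cutoff
    would count as synaesthetic for that grapheme.
    """
    n = len(picks)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = picks[i] - picks[j]
            if metric == "euclidean":
                dists.append(np.sqrt(np.sum(diff ** 2)))
            else:  # city-block (L1) distance
                dists.append(np.sum(np.abs(diff)))
    return np.mean(dists)

# Three picks for the letter "A" in CIELAB coordinates (made-up values).
picks_a = np.array([[54.0, 80.0, 67.0],
                    [52.5, 78.1, 70.2],
                    [55.3, 81.4, 66.0]])
print(consistency_score(picks_a, "euclidean"))
print(consistency_score(picks_a, "cityblock"))
```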
Abstract:
The present data set was used as a training set for a Habitat Suitability Model. It contains occurrence (presence-only) records of living Lophelia pertusa reefs on the Irish continental margin, assembled from databases, cruise reports and publications. A total of 4423 records were inspected and quality-assessed to ensure that they (1) represented confirmed living L. pertusa reefs (excluding 2900 records of dead coral and isolated coral colonies); (2) were derived from sampling equipment that allows for accurate (<200 m) geo-referencing (excluding 620 records derived mainly from trawling and dredging activities); and (3) were not duplicated. A total of 245 occurrences were retained for the analysis. Coral observations are highly clustered in regions targeted by research expeditions, which might lead to falsely inflated model evaluation measures (Veloz, 2009). Therefore, we coarsened the distribution data by deleting all but one record within grid cells of 0.02° resolution (Davies & Guinotte 2011). The remaining 53 points were subjected to a spatial cross-validation process: a random presence point was chosen, grouped with its 12 closest neighbouring presence points based on Euclidean distance, and withheld from model training. This process was repeated for all records, resulting in 53 replicates of spatially non-overlapping sets of test (n=13) and training (n=40) data. The final 53 occurrence records were used for model training.
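A minimal sketch of the spatial cross-validation step described above, assuming the 53 retained occurrences are given as coordinate pairs; each fold withholds a seed point plus its 12 Euclidean nearest neighbours (13 test points) and trains on the remaining 40. The random stand-in coordinates are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(size=(53, 2))   # stand-in lon/lat coordinates

tree = cKDTree(points)
folds = []
for i in range(len(points)):
    # Seed point plus its 12 nearest neighbours (13 test points in total),
    # withheld from training; the remaining 40 points train the model.
    _, idx = tree.query(points[i], k=13)
    test = set(idx)
    train = [j for j in range(len(points)) if j not in test]
    folds.append((sorted(test), train))

print(len(folds), "replicates;", len(folds[0][0]), "test /", len(folds[0][1]), "train")
```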
Abstract:
Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on the Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz–Mancini–Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, a linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% in discriminating AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
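For reference, a sketch of the information-theoretic quantities named above, computed from a discrete probability distribution (for MEG this would come from, e.g., a normalized power spectrum); the q parameters and the toy distribution are illustrative, not the study's settings.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis(p, q=2.0):
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi(p, q=2.0):
    return np.log(np.sum(p ** q)) / (1.0 - q)

def disequilibrium(p):
    # Squared Euclidean distance between p and the uniform distribution (ED).
    n = len(p)
    return np.sum((p - 1.0 / n) ** 2)

def lmc(p):
    # LMC statistical complexity: normalized Shannon entropy times disequilibrium.
    n = len(p)
    return (shannon(p) / np.log(n)) * disequilibrium(p)

# Toy distribution standing in for a normalized MEG power spectrum.
p = np.array([0.4, 0.3, 0.2, 0.1])
print(shannon(p), tsallis(p), renyi(p), disequilibrium(p), lmc(p))
```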
Abstract:
Due to the intensive use of mobile phones for different purposes, these devices usually contain confidential information which must not be accessed by anyone other than the owner of the device. Furthermore, new-generation phones commonly incorporate an accelerometer which may be used to capture the acceleration signals produced by the owner's gait. Gait identification based on acceleration signals is now being considered as a new biometric technique which allows the device to be blocked when another person is carrying it. Although distance-based approaches such as Euclidean distance or dynamic time warping have been applied to this identification problem, they show difficulties when dealing with gaits at different speeds. For this reason, this paper presents a method to extract an average template from instances of the gait at different velocities. The method has been tested on the gait signals of 34 subjects walking at different speeds (slow, normal and fast) and has been shown to improve on the performance of Euclidean distance and classical dynamic time warping.
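As a point of comparison, here is a minimal implementation of the classical dynamic time warping baseline the abstract mentions; the two toy signals are placeholders for accelerometer gait cycles recorded at different speeds.

```python
import numpy as np

def dtw(a, b):
    """Classical dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same gait-like waveform at two speeds: pointwise Euclidean distance
# would require equal lengths, while DTW aligns the sequences despite the
# tempo difference.
slow = np.sin(np.linspace(0, 2 * np.pi, 60))
fast = np.sin(np.linspace(0, 2 * np.pi, 40))
print(dtw(slow, fast))
```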
A multicriteria-multiobjective energy planning procedure for isolated rural communities
Abstract:
Decision making in energy planning is a complex task due to the multiple options to follow and objectives to meet. To manage this complexity, a wide variety of decision-support methods and tools have been developed. Over the last decade, the energization of isolated rural communities has been a priority for many governments aiming to mitigate rural-to-urban migration, and decision making for such projects must account for their financial, environmental and social costs. The purpose of this work is therefore to design an original energy planning model named Clean and Native Energy Generation (Generación Energética Autóctona Y Limpia, GEAYL), applied to an isolated rural community in Granma Province, Cuba. The model builds on two predecessors, PAMER and SEMA, and constitutes a multicriteria-multiobjective procedure to support energy planning in this context. Five objective functions are posed: F1, to minimize energy costs; F2, to minimize CO2 emissions; F3, to minimize NOx emissions; F4, to minimize SOx emissions (with coefficients obtained from the specialized literature); and F5, to maximize the Social Acceptability of Energy. F5, and the way its coefficients are obtained, constitutes the novelty of this work: they were determined by applying the Analytic Hierarchy Process (AHP) to data from a survey of energy end users and experts. To establish the optimal energy supply, several methods were applied: the weighted sum, the weighted product, the Manhattan distance L1, the Euclidean distance L2 and the L3 distance, with different weight vectors applied to these metrics to capture the decision makers' different preference structures. The work concludes that including the Social Acceptability of Energy as a model function influences the energy supply of each alternative.
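A sketch of the distance-based compromise step just described, assuming each alternative has been scored on the five objectives, normalized and oriented for minimization, and is compared to the ideal point; the alternative scores and the weight vector are illustrative, not the study's data.

```python
import numpy as np

def weighted_lp(scores, ideal, weights, p):
    """Weighted Lp distance from an alternative's objective vector to the ideal."""
    d = weights * np.abs(scores - ideal)
    return np.sum(d ** p) ** (1.0 / p)

# Rows: alternatives; columns: normalized F1..F5 (all oriented for minimization).
alts = np.array([[0.2, 0.4, 0.3, 0.5, 0.1],
                 [0.3, 0.2, 0.4, 0.3, 0.2],
                 [0.1, 0.5, 0.2, 0.4, 0.3]])
ideal = np.zeros(5)
weights = np.array([0.3, 0.2, 0.15, 0.15, 0.2])  # one possible preference vector

for p in (1, 2, 3):  # Manhattan L1, Euclidean L2, L3
    d = [weighted_lp(a, ideal, weights, p) for a in alts]
    print(f"L{p}: best alternative =", int(np.argmin(d)), d)
```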
Abstract:
We studied habitat selection and breeding success in marked populations of a protected seabird (family Alcidae), the marbled murrelet (Brachyramphus marmoratus), in a relatively intact and a heavily logged old-growth forest landscape in south-western Canada. Murrelets used old-growth fragments either proportionately to their size frequency distribution (intact) or they tended to nest in disproportionately smaller fragments (logged). Multiple regression modelling showed that murrelet distribution could be explained by proximity of nests to landscape features producing biotic and abiotic edge effects. Streams, steeper slopes and lower elevations were selected in both landscapes, probably due to good nesting habitat conditions and easier access to nest sites. In the logged landscape, the murrelets nested closer to recent clearcuts than would be expected. Proximity to the ocean was favoured in the intact area. The models of habitat selection had satisfactory discriminatory ability in both landscapes. Breeding success (probability of nest survival to the middle of the chick rearing period), inferred from nest attendance patterns by radio-tagged parents, was modelled in the logged landscape. Survivorship was greater in areas with recent clearcuts and lower in areas with much regrowth, i.e. it was positively correlated with recent habitat fragmentation. We conclude that marbled murrelets can successfully breed in old-growth forests fragmented by logging.
Abstract:
This research aims to establish whether it is possible to build spatial patterns over oil fields using DFA (Detrended Fluctuation Analysis) of the following well logs: sonic, density, porosity, resistivity and gamma ray. The analysis employed a set of 54 well logs from the Campos dos Namorados oil field, RJ, Brazil. To check for spatial correlation, the Mantel test was applied between the matrix of geographic distances and the matrix of differences between the DFA exponents of the well logs. The null hypothesis assumes the absence of spatial structure, i.e., no correlation between the matrix of Euclidean distances and the matrix of DFA differences. Our analysis indicates that the sonic (p=0.18) and density (p=0.26) logs show a tendency toward correlation, or weak correlation. A complementary analysis using contour plots also suggested that the sonic and density logs are the geophysical quantities most suitable for the construction of spatial structures, corroborating the results of the Mantel test.
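A sketch of a permutation Mantel test of the kind described, correlating a Euclidean geographic distance matrix with a matrix of DFA-exponent differences; the well coordinates and exponents below are random stand-ins for the 54 logs.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(dist_a, dist_b, n_perm=999, seed=0):
    """Permutation Mantel test between two condensed distance matrices."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(dist_a, dist_b)[0, 1]
    B = squareform(dist_b)
    n = B.shape[0]
    count = 0
    for _ in range(n_perm):
        # Permute rows and columns of one matrix jointly, then re-correlate.
        perm = rng.permutation(n)
        r = np.corrcoef(dist_a, squareform(B[np.ix_(perm, perm)]))[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Stand-ins for 54 well locations and their DFA exponents.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(54, 2))
alpha = rng.normal(size=54)
geo = pdist(coords)           # Euclidean geographic distances
dfa = pdist(alpha[:, None])   # |alpha_i - alpha_j|
print(mantel(geo, dfa))
```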
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.
Abstract:
Marine mammals exploit the efficiency of sound propagation in the marine environment for essential activities like communication and navigation. For this reason, passive acoustics has particularly high potential for marine mammal studies, especially those aimed at population management and conservation. Despite the rapid realization of this potential through a growing number of studies, much crucial information remains unknown or poorly understood. This research attempts to address two key knowledge gaps, using the well-studied bottlenose dolphin (Tursiops truncatus) as a model species, and underwater acoustic recordings collected on four fixed autonomous sensors deployed at multiple locations in Sarasota Bay, Florida, between September 2012 and August 2013. Underwater noise can hinder dolphin communication. The ability of these animals to overcome this obstacle was examined using recorded noise and dolphin whistles. I found that bottlenose dolphins are able to compensate for increased noise in their environment using a wide range of strategies employed in a singular fashion or in various combinations, depending on the frequency content of the noise, noise source, and time of day. These strategies include modifying whistle frequency characteristics, increasing whistle duration, and increasing whistle redundancy. Recordings were also used to evaluate the performance of six recently developed passive acoustic abundance estimation methods, by comparing their results to the true abundance of animals, obtained via a census conducted within the same area and time period. The methods employed were broadly divided into two categories – those involving direct counts of animals, and those involving counts of cues (signature whistles). The animal-based methods were traditional capture-recapture, spatially explicit capture-recapture (SECR), and an approach that blends the "snapshot" method and mark-recapture distance sampling, referred to here as SMRDS. The cue-based methods were conventional distance sampling (CDS), an acoustic modelling approach involving the use of the passive sonar equation, and SECR. In the latter approach, detection probability was modelled as a function of sound transmission loss, rather than the Euclidean distance typically used. Of these methods, while SMRDS produced the most accurate estimate, SECR demonstrated the greatest potential for broad applicability to other species and locations, with minimal to no auxiliary data, such as distance from sound source to detector(s), which is often difficult to obtain. This was especially true when this method was compared to traditional capture-recapture results, which greatly underestimated abundance, despite attempts to account for major unmodelled heterogeneity. Furthermore, the incorporation of non-Euclidean distance significantly improved model accuracy. The acoustic modelling approach performed similarly to CDS, but both methods also strongly underestimated abundance. In particular, CDS proved to be inefficient. This approach requires at least three sensors for localization at a single point. It was also difficult to obtain accurate distances, and the sample size was greatly reduced by the failure to detect some whistles on all three recorders. As a result, this approach is not recommended for marine mammal abundance estimation when few recorders are available, or in high sound attenuation environments with relatively low sample sizes.
It is hoped that these results lead to more informed management decisions, and therefore, more effective species conservation.
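A sketch of the modelling choice highlighted above: a half-normal detection function driven by transmission loss rather than Euclidean range. The spreading-loss formula, its coefficient, and the sigma parameter are simplifying assumptions for illustration, not the study's fitted model.

```python
import numpy as np

def transmission_loss(r_m, k=15.0):
    """Simplified geometric spreading loss in dB at range r (metres).

    k = 20 gives spherical spreading and k = 10 cylindrical; k = 15 is a
    common compromise for shallow coastal water (an assumption here).
    """
    return k * np.log10(np.maximum(r_m, 1.0))

def p_detect(tl_db, sigma=20.0):
    # Half-normal detection function on transmission loss instead of
    # Euclidean distance, in the spirit of the SECR variant described above.
    return np.exp(-(tl_db ** 2) / (2.0 * sigma ** 2))

for r in (10, 100, 1000, 5000):
    tl = transmission_loss(r)
    print(f"range {r:>5} m: TL = {tl:5.1f} dB, p(detect) = {p_detect(tl):.3f}")
```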
Abstract:
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of a special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transfer problem on the circle, with the transportation cost being the squared Euclidean distance. We find the exact solution for this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transfer problems on the circle.
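In symbols, the reduction described above lands on a Monge–Kantorovich problem on the circle; the notation below is generic and given only for orientation, with μ and ν standing for the relevant measures.

```latex
\min_{\gamma \in \Pi(\mu,\nu)} \int_{S^1 \times S^1} |x - y|^2 \, d\gamma(x,y)
```

Here Π(μ,ν) is the set of couplings (transport plans) with marginals μ and ν, and |x − y| is the Euclidean (chordal) distance between points of S¹ viewed as a subset of the plane.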
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The main objective of this study is to apply recently developed statistical-physics methods to time series analysis, particularly to electrical induction log profiles from oil wells, in order to study the petrophysical similarity of those wells in a spatial distribution. For this, we used the DFA method to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied cluster analysis using the non-hierarchical K-means method. Usually based on the Euclidean distance, K-means consists of dividing the N elements of a data matrix into k groups, so that the similarities among elements belonging to different groups are the smallest possible. In order to test whether a dataset generated by the K-means method, or randomly generated datasets, form spatial patterns, we created the parameter Ω (neighbourhood index). High values of Ω reveal more aggregated data, and low values of Ω indicate scattered data or data without spatial correlation. We concluded that the DFA data from the 54 wells are grouped and can be used to characterize spatial fields. Applying the contour-level technique confirmed the results obtained by K-means, showing that DFA is effective for spatial analysis.
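A sketch of the clustering step under stated assumptions: K-means (Euclidean by construction) on stand-in DFA exponents, followed by one plausible reading of a neighbourhood index, the mean fraction of each point's nearest neighbours sharing its cluster label. The abstract does not give Ω's formula, so that part is an assumption; the data are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
coords = rng.uniform(size=(54, 2))   # stand-in well locations
alpha = rng.normal(size=(54, 1))     # stand-in DFA exponents

# Euclidean K-means on the DFA exponents.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(alpha)

def omega(coords, labels, k=4):
    """Toy neighbourhood index: mean fraction of each point's k nearest
    neighbours that share its cluster label (one plausible reading of the
    abstract's Omega, which is not specified in detail)."""
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)   # first neighbour is the point itself
    same = labels[idx[:, 1:]] == labels[:, None]
    return same.mean()

print("Omega =", omega(coords, labels))
```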
Abstract:
In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon by moving straight towards the beacon or sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, the beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of attraction region and inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the length of the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing this, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$. We show that the inverse attraction region of a point in a simple polygon has linear complexity and the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.
Abstract:
2016
Abstract:
A robust visual tracking system requires an object appearance model that is able to handle occlusion, pose, and illumination variations in the video stream. This can be difficult to accomplish when the model is trained using only a single image. In this paper, we first propose a tracking approach based on affine subspaces (constructed from several images) which are able to accommodate the abovementioned variations. We use affine subspaces not only to represent the object, but also the candidate areas that the object may occupy. We furthermore propose a novel approach to measure affine subspace-to-subspace distance via the use of non-Euclidean geometry of Grassmann manifolds. The tracking problem is then considered as an inference task in a Markov Chain Monte Carlo framework via particle filtering. Quantitative evaluation on challenging video sequences indicates that the proposed approach obtains considerably better performance than several recent state-of-the-art methods such as Tracking-Learning-Detection and MILtrack.
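The non-Euclidean distance mentioned above can be illustrated with the standard geodesic distance on the Grassmann manifold, computed from principal angles; the sketch below compares linear subspaces and is only a simplified stand-in for the paper's affine-subspace construction.

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the column spans of A and B.

    Principal angles are recovered from the singular values of Qa^T Qb
    after orthonormalizing both bases; the geodesic distance is the
    2-norm of the angle vector.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 3))             # e.g. 3 basis vectors of 50-pixel patches
B = A + 0.1 * rng.normal(size=(50, 3))   # a nearby subspace
C = rng.normal(size=(50, 3))             # an unrelated subspace
print(grassmann_distance(A, B), grassmann_distance(A, C))
```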