14 results for Hamming distance
at Universidad Politécnica de Madrid
Abstract:
The aim of this research is to explore new graph-based implementation techniques for Neural Networks, with the goal of simplifying and optimizing their architectures and computational complexity. We focus on a well-known class of Neural Networks: Recursive Neural Networks, also known as Hopfield networks. The general problem of constructing the synaptic matrix associated with a Recursive Neural Network by imposing a given set of vectors as fixed points is far from completely solved: the number of prototype vectors (learning patterns) that can be stored using Hebb's law is rather limited, and the memory quickly reaches saturation if new prototypes are continuously acquired over time. Hebb's law therefore needs to be revised so that new prototypes can be stored at the expense of older ones. Some approaches to this problem have already been developed. We have developed a new approach to implementing a Recursive Neural Network that addresses these problems. The synaptic matrix is obtained by superposing the components of the prototype vectors over the vertices of a graph, which may also be interpreted as a coloring of that graph. When training is finished, the adjacency matrix of the resulting graph, or matrix of weights, exhibits certain properties for which such matrices will be called tetrahedral. The energy associated with any state of the network is represented by a point (a, b) in R². All energy points associated with state vectors at the same Hamming distance from the zero vector lie on the same energy line in R². The state-vector space can therefore be classified into n classes, corresponding to the n different possible distances from any state vector to the zero vector.
The (n x n) matrix of weights can be reduced to an n-vector of weights; in this way, both the computation time and the memory space required to store the weights are simplified and optimized. In the recall stage, a parameter vector is introduced and used to control the capacity of the network: we prove that the larger the component a_i, the smaller the number of fixed points lying on the energy line R_i. Once the capacity of the network has been controlled by this parameter, we introduce another parameter, defined as the relative weight-vector deviation, which serves to markedly reduce the number of spurious states. Throughout the text we develop a running example that corroborates the theoretical results. The algorithms are given in pseudocode and have also been implemented with the Mathematica 2.2 package; these implementations, together with the graphics, are shown in a supplementary volume.
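The saturation problem described above can be made concrete with a minimal sketch of the classical Hopfield construction (Hebbian weights, energy function, and recall by thresholded updates). This is the standard formulation the thesis sets out to improve, not its graph-based method:

```python
# Minimal Hopfield network with Hebbian learning (illustrative sketch only;
# the thesis replaces this construction with a graph-based one).

def hebb_weights(prototypes):
    """Hebb's rule: W = sum_p x_p x_p^T with a zero diagonal."""
    n = len(prototypes[0])
    w = [[0.0] * n for _ in range(n)]
    for x in prototypes:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += x[i] * x[j]
    return w

def energy(w, state):
    """Hopfield energy E = -1/2 * sum_ij w_ij s_i s_j."""
    n = len(state)
    return -0.5 * sum(w[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

def recall(w, state, steps=20):
    """Synchronous update s_i <- sign(sum_j w_ij s_j); capped at `steps`
    because synchronous dynamics can cycle in general."""
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state

prototypes = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = hebb_weights(prototypes)
# A one-bit-corrupted copy of the first prototype is pulled back to it:
print(recall(W, [1, -1, 1, -1, 1, 1]))  # -> [1, -1, 1, -1, 1, -1]
```

Saturation shows up when the number of prototypes approaches the classical ~0.14n capacity bound: recall then lands on spurious fixed points instead of stored patterns.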
Abstract:
The need for another surveillance system where radar cannot be used motivated the development of Multilateration (MLT) systems. However, many systems operate in the L-band (960-1215 MHz) and could interfere with one another. At airports, interference has been detected between the transmissions of MLT systems (1030 MHz and 1090 MHz) and Distance Measuring Equipment (DME) (960-1215 MHz).
Abstract:
Many existing engineering works model the statistical characteristics of the entities under study as normal distributions. These models are eventually used for decision making, which in practice requires defining the classification region corresponding to the desired confidence level. Surprisingly, however, a great number of computer vision works using multidimensional normal models leave unspecified, or fail to establish, correct confidence regions, owing to misconceptions about the features of Gaussian functions or to wrong analogies with the unidimensional case. The resulting regions incur deviations that can be unacceptable in high-dimensional models. Here we provide a comprehensive derivation of the optimal confidence regions for multivariate normal distributions of arbitrary dimensionality. To this end, we first derive the condition for region optimality of general continuous multidimensional distributions, and then apply it to the widespread case of the normal probability density function. The obtained results are used to analyze the confidence error incurred by previous works related to vision research, showing that the deviations caused by wrong regions may become unacceptable as dimensionality increases. To support the theoretical analysis, a quantitative example is given in the context of moving-object detection by means of background modeling.
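The central point, that the correct confidence region of a d-dimensional Gaussian is a Mahalanobis ellipsoid thresholded by a chi-square quantile, not a per-axis one-dimensional rule, can be sketched numerically. The closed-form chi-square CDF used below is valid for even degrees of freedom only (an illustration, not the paper's derivation):

```python
import math

def chi2_cdf_even(x, d):
    """CDF of the chi-square distribution with even d degrees of freedom:
    P(X <= x) = 1 - exp(-x/2) * sum_{k=0}^{d/2-1} (x/2)^k / k!  (closed form)."""
    assert d % 2 == 0, "closed form only holds for even d"
    s = sum((x / 2) ** k / math.factorial(k) for k in range(d // 2))
    return 1.0 - math.exp(-x / 2) * s

def chi2_quantile_even(p, d, tol=1e-10):
    """Invert the CDF by bisection: threshold on the squared Mahalanobis
    distance that encloses probability mass p."""
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if chi2_cdf_even(mid, d) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Correct 95% region: {x : (x-mu)^T Sigma^{-1} (x-mu) <= chi2_quantile(0.95, d)}.
# The naive per-axis analogy (1.96 sigma, i.e. r^2 <= 3.84) encloses far less
# than 95% of the mass as d grows:
for d in (2, 4, 10):
    t = chi2_quantile_even(0.95, d)
    naive = chi2_cdf_even(1.96 ** 2, d)  # mass inside the 1D-style radius
    print(d, round(t, 2), round(naive, 3))
```

For d = 2 the correct threshold is about 5.99, while the 1.96-sigma radius covers only about 85% of the mass; by d = 10 it covers under 5%, which is the kind of deviation the abstract warns about.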
Abstract:
High temperatures and relative humidity can compromise animal welfare at the farm level, but less is known about these conditions during long-distance transport of domestic animals to slaughter. Although upper temperature limits have been established for transporting pigs in Europe, few indices include relative or absolute humidity maxima or mention appropriate enthalpy ranges.
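For reference, the enthalpy of moist air can be computed from temperature and relative humidity with standard psychrometric relations. These formulas are textbook psychrometrics, not taken from the paper, and the Magnus coefficients below are one common choice:

```python
import math

def saturation_vapour_pressure(t_c):
    """Saturation vapour pressure in kPa via the Magnus formula
    (WMO-style coefficients; an assumption, other fits exist)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def enthalpy_moist_air(t_c, rh, pressure_kpa=101.325):
    """Specific enthalpy of moist air, kJ per kg of dry air:
    h = 1.006*T + W*(2501 + 1.86*T), with humidity ratio W."""
    p_v = rh / 100.0 * saturation_vapour_pressure(t_c)
    w = 0.622 * p_v / (pressure_kpa - p_v)  # kg water vapour / kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# At the same 30 C, enthalpy rises by more than half between 40% and 90% RH,
# which is why a temperature limit alone understates the heat load on pigs.
print(round(enthalpy_moist_air(30.0, 40.0), 1))
print(round(enthalpy_moist_air(30.0, 90.0), 1))
```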
Abstract:
This paper analyzes the correlation between the fluctuations of the electrical power generated by the ensemble of 70 DC/AC inverters of a 45.6 MW PV plant. The use of real electrical power time series from a large collection of photovoltaic inverters of the same plant is an important contribution in the context of models built upon simplified assumptions to overcome the absence of such data. This data set is divided into three different fluctuation categories with a clustering procedure that performs correctly with the clearness index and the wavelet variances. Afterwards, the time-dependent correlation between the electrical power time series of the inverters is estimated with the wavelet transform. The wavelet correlation depends on the distance between the inverters, the wavelet time scales and the daily fluctuation level. Correlation values for time scales below one minute are low, without dependence on the daily fluctuation level. For time scales above 20 minutes, positive high correlation values are obtained, and the decay rate with distance depends on the daily fluctuation level. At intermediate time scales the correlation depends strongly on the daily fluctuation level. The proposed methods have been implemented using free software. Source code is available as supplementary material.
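The timescale dependence of inter-inverter correlation can be illustrated with a much simpler decomposition than the authors' wavelet estimator: split each power series into a slow moving-average component and a fast residual, and correlate the components separately. This is a hedged stand-in for the wavelet analysis, run on synthetic data:

```python
import math, random

def moving_average(x, w):
    """Centered moving average with simple shrinking windows at the edges."""
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - w // 2): i + w // 2 + 1]
        out.append(sum(seg) / len(seg))
    return out

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def fluctuation_correlation(p1, p2, window):
    """Correlate the fast residuals left after removing the slow component."""
    f1 = [x - m for x, m in zip(p1, moving_average(p1, window))]
    f2 = [x - m for x, m in zip(p2, moving_average(p2, window))]
    return pearson(f1, f2)

random.seed(1)
slow = [math.sin(i / 100) for i in range(600)]      # shared slow irradiance trend
p1 = [s + 0.2 * random.gauss(0, 1) for s in slow]   # inverter 1: trend + own noise
p2 = [s + 0.2 * random.gauss(0, 1) for s in slow]   # inverter 2: same trend, other noise
w = 31
print(pearson(moving_average(p1, w), moving_average(p2, w)))  # slow parts: high
print(fluctuation_correlation(p1, p2, w))                     # fast parts: near zero
```

The synthetic result mirrors the paper's finding qualitatively: shared slow variations correlate strongly, while short-timescale fluctuations, being locally driven, barely correlate.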
Abstract:
In this work, we analyze the influence of the processing pressure and the substrate–target distance on the synthesis by reactive sputtering of c-axis oriented polycrystalline aluminum nitride thin films deposited on Si(100) wafers. The crystalline quality of AlN has been characterized by high-resolution X-ray diffraction (HR-XRD). The films exhibited a very high degree of c-axis orientation, especially when a low process pressure was used. Residual stress measurements, obtained indirectly from radius-of-curvature measurements of the wafer prior to and after deposition, are also provided. Two different techniques are used to determine the curvature: an optically levered laser beam and a method based on X-ray diffraction. There is a transition from compressive to tensile stress at a processing pressure around 2 mTorr, and the transition occurs at different pressures for thin films of different thickness. The degree of c-axis orientation was not affected by the target–substrate distance as it was varied between 30 and 70 mm.
Abstract:
Biomechanics of swimming
Abstract:
Sight distance is of major importance for road safety, both when designing new roads and when analysing the alignment of existing ones. It is essential that the available sight distance on a road is long enough for emergency stops or overtaking manoeuvres. It is also vital for engineers and researchers that the tools used for such analysis are both powerful and intuitive. Based on ArcGIS, the application presented here not only performs an exhaustive sight distance calculation but also allows an accurate analysis of the 3D alignment, using new tools, from a Digital Elevation Model and the vehicle trajectory. The software has been successfully used to analyse several two-lane rural roads in Spain. In addition, it produces thematic maps representing sight distance, in which supplementary information about crashes, traffic flow, speed or design consistency can be included, enabling traffic safety studies.
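The core computation behind such a tool, checking whether terrain blocks the straight line between a driver's eye and a target on the road, can be sketched on a 1-D terrain profile. This is a simplification of a full DEM-and-trajectory analysis, and the eye and target heights are typical design values, not taken from the abstract:

```python
def visible(profile, step, eye_h, target_h, i_target):
    """True if the target at index i_target is visible from index 0.
    profile: ground elevations sampled every `step` metres along the sight line."""
    x1 = i_target * step
    z0 = profile[0] + eye_h          # driver's eye elevation
    z1 = profile[i_target] + target_h
    for i in range(1, i_target):
        # elevation of the straight sight line above intermediate point i
        z_line = z0 + (z1 - z0) * (i * step) / x1
        if profile[i] > z_line:      # terrain blocks the line of sight
            return False
    return True

def available_sight_distance(profile, step, eye_h=1.1, target_h=0.5):
    """Largest continuously visible distance from the start of the profile
    (1.1 m eye and 0.5 m object heights are common design assumptions)."""
    d = 0.0
    for i in range(1, len(profile)):
        if visible(profile, step, eye_h, target_h, i):
            d = i * step
        else:
            break
    return d

# A 1 m crest at 50 m hides the road beyond it:
profile = [0.0] * 11
profile[5] = 1.0
print(available_sight_distance(profile, step=10.0))  # -> 50.0
```

A GIS implementation repeats this check along the vehicle trajectory for every station, against a full 3D elevation model rather than a single profile.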
Abstract:
Sight distance plays an important role in road traffic safety. Two types of Digital Elevation Models (DEMs) are utilized for the estimation of available sight distance in roads: Digital Terrain Models (DTMs) and Digital Surface Models (DSMs). DTMs, which represent the bare ground surface, are commonly used to determine available sight distance at the design stage. Additionally, the use of DSMs provides further information about elements by the roadsides, such as trees, buildings, walls or even traffic signals, which may reduce available sight distance. This document analyses the influence of three classes of DEMs on available sight distance estimation. For this purpose, diverse roads within the Region of Madrid (Spain) have been studied using software based on geographic information systems. The study demonstrates the influence of each DEM on the outcome, as well as the pros and cons of using each model.
Abstract:
The aim of this study was to compare the race characteristics of the start and turn segments of national and regional level swimmers. In the study, 100 and 200-m events were analysed during the finals session of the Open Comunidad de Madrid (Spain) tournament. The “individualized-distance” method with a two-dimensional direct linear transformation algorithm was used to perform race analyses. National level swimmers obtained faster velocities in all race segments and stroke comparisons, although significant inter-level differences in start velocity were obtained in only half (8 out of 16) of the analysed events. Higher-level swimmers also travelled for longer start and turn distances, but only in the race segments where the gain of speed was high. This was observed in the turn segments, in the backstroke and butterfly strokes and during the 200-m breaststroke event, but not in any of the freestyle events. Time improvements due to the appropriate extension of the underwater subsections appeared to be critical for the final race result and should be carefully evaluated by the “individualized-distance” method.
Abstract:
The present paper describes the preliminary stages of the development of a new, comprehensive model conceived to simulate the evacuation of transport airplanes in certification studies. Two previous steps were devoted to implementing an efficient procedure to define the whole geometry of the cabin, and to setting up an algorithm for assigning seats to available exits. Now, to clarify the role of the cabin arrangement in the evacuation process, the paper addresses the influence of several restrictions on the seat-to-exit assignment algorithm, maintaining a purely geometrical approach for consistency. Four situations are considered: first, an assignment method without limitations that searches for the minimum of the total distance run by all passengers along their escape paths; second, a protocol that restricts the number of evacuees through each exit according to updated FAR 25 capacity; third, a procedure which tends toward the best proportional sharing among exits but obliges each passenger to egress through the nearest fore or rear exit; and fourth, a scenario which includes both restrictions. The four assignment strategies are applied to turboprops, narrow-body jets and wide-body jets. Seat-to-exit distance and the number of evacuees per exit are the main output variables. The results show the influence of airplane size and the impact of asymmetries and of inappropriate matching between the size and the longitudinal location of exits.
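A capacity-constrained seat-to-exit assignment of the kind described can be sketched greedily. This is a hypothetical simplification, not the paper's algorithm: the cabin is reduced to longitudinal positions on a line, and the greedy pairing below approximates, rather than guarantees, the minimum total distance (an optimal assignment would need e.g. the Hungarian algorithm):

```python
import heapq

def assign_seats_to_exits(seats, exits, capacity):
    """Greedy sketch: repeatedly take the globally shortest remaining
    seat-exit pair whose exit still has capacity.
    seats, exits: longitudinal positions in metres (1-D cabin).
    Returns ({seat_index: exit_index}, total walked distance)."""
    heap = [(abs(s - e), i, j)
            for i, s in enumerate(seats) for j, e in enumerate(exits)]
    heapq.heapify(heap)
    left = list(capacity)            # remaining evacuees allowed per exit
    assignment = {}
    total = 0.0
    while heap and len(assignment) < len(seats):
        d, i, j = heapq.heappop(heap)
        if i not in assignment and left[j] > 0:
            assignment[i] = j
            left[j] -= 1
            total += d
    return assignment, total

# 8 seat rows at 1 m pitch, one exit at each cabin end, capacity 4 evacuees
# per exit (illustrative numbers, not FAR 25 values):
seats = [float(k) for k in range(8)]
exits = [0.0, 7.0]
assignment, total = assign_seats_to_exits(seats, exits, [4, 4])
print(assignment, total)  # front half -> fore exit, rear half -> aft exit
```

Tightening the capacities, or removing an exit, immediately shifts passengers to farther exits and raises the total distance, which is the effect the paper quantifies for real cabin layouts.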
Abstract:
The Institute of Tropical Medicine in Antwerp hereby presents the results of two pilot distance-learning training programmes, developed under the umbrella of the AFRICA BUILD project (FP7). The two courses focused on evidence-based medicine (EBM), with the aim of enhancing research and education via novel approaches and of identifying research needs emanating from the field. These pilot experiences, run in both English-speaking (Ghana) and French-speaking (Mali and Cameroon) partner institutions, produced targeted courses for strengthening research methodology and policy. The courses and related study materials are in the public domain and available through the AFRICA BUILD Portal (http://www.africabuild.eu/taxonomy/term/37); the training modules were delivered live via Dudal webcasts. This paper assesses the success and difficulties of transferring EBM skills with these two specific training programmes, offered through three different approaches: fully online facultative courses, fully online tutor-supported courses, or a blended approach with both online and face-to-face sessions. Key factors affecting the selection of participants, the accessibility of the courses, how the learning resources are offered, and how interactive online communities are formed are evaluated and discussed.
Abstract:
Because of the high number of crashes occurring on highways, it is necessary to intensify the search for new tools that help in understanding their causes. This research explores the use of a geographic information system (GIS) for an integrated analysis, taking into account two accident-related factors: design consistency (DC) (based on vehicle speed) and available sight distance (ASD) (based on visibility). Both factors require specific GIS software add-ins, which are explained. Digital terrain models (DTMs), vehicle paths, road centerlines, a speed prediction model, and crash data are integrated in the GIS. The usefulness of this approach has been assessed through a study of more than 500 crashes. From a regularly spaced grid, the terrain (bare ground) has been modeled through a triangulated irregular network (TIN). The length of the roads analyzed is greater than 100 km. Results have shown that DC and ASD could be related to crashes in approximately 4% of cases. In order to illustrate the potential of GIS, two crashes are fully analyzed: a car rollover after running off road on the right side and a rear-end collision of two moving vehicles. Although this procedure uses two software add-ins that are available only for ArcGIS, the study gives a practical demonstration of the suitability of GIS for conducting integrated studies of road safety.
Abstract:
The study of the temperature gradients in cold stores and containers is a critical issue in the food industry for the quality assurance of products during transport and for minimising losses. This work presents an analysis of the temperatures during the refrigerated transport of 4,320 kg of blueberries in a reefer (set-point temperature of -1 ºC) on a container ship from Montevideo (Uruguay) to Verona (Italy). The monitoring was performed by using semi-passive RFID loggers (TurboTag cards). The objective was to carry out multi-distributed supervision using low-cost, wireless and autonomous sensors for the characterisation of the distribution and spatial gradients of temperatures during long-distance transport. Data analysis shows spatial (phase space) and temporal sequencing diagrams and reveals a significant heterogeneity of temperature at different locations in the container, which highlights the ineffectiveness of a temperature control system based on a single sensor, as is usually done.
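The heterogeneity argument, that a single sensor cannot represent the whole container, can be illustrated by computing the spatial spread across logger positions at each time step. The readings below are invented for illustration, not the measured data:

```python
import statistics

def spatial_spread(readings):
    """readings: {sensor_id: [temperatures over time]}, all series aligned.
    Returns the per-timestep spread (max - min) across sensor locations."""
    series = list(readings.values())
    return [max(col) - min(col) for col in zip(*series)]

# Hypothetical temperatures (C) for three logger positions in the container:
readings = {
    "door_top":   [-0.5, 0.2, 0.8, 1.1],
    "centre":     [-1.0, -0.9, -1.0, -0.8],
    "floor_back": [-1.3, -1.2, -1.4, -1.1],
}
spread = spatial_spread(readings)
print(spread)                   # gap between warmest and coldest position
print(statistics.mean(spread))  # average spatial gradient over the trip
```

A controller reading only the "centre" logger would see a stable series near the set point while the door-top position drifts above 0 ºC, exactly the single-sensor blind spot the abstract describes.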