9 results for units package

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

The high integration density of current nanometer technologies allows the implementation of complex floating-point applications in a single FPGA. In this work the intrinsic complexity of floating-point operators is addressed, targeting configurable devices and making design decisions that provide the most suitable trade-offs between performance and standard compliance. A set of floating-point libraries composed of adder/subtractor, multiplier, divider, square root, exponential, logarithm and power function is presented. Each library has been designed taking into account special characteristics of current FPGAs, and with this purpose we have adapted the IEEE floating-point standard (software-oriented) to a custom FPGA-oriented format. Extended experimental results validate the design decisions made and prove the usefulness of reducing the format complexity.
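
As a rough illustration of what an FPGA-oriented format involves (the abstract does not specify the exact format of these libraries), the C++ sketch below parameterizes a floating-point format by its exponent and fraction widths plus a flag for dropping subnormal support, a typical simplification when the IEEE standard is adapted to FPGAs. All names and the chosen simplification are assumptions, not the paper's format.

```cpp
#include <iostream>

// Hypothetical descriptor of a custom, FPGA-oriented floating-point format.
// Field names and the simplification shown (dropping subnormals) are
// illustrative assumptions, not the format defined in the paper.
struct FpFormat {
    unsigned exponentBits;      // width of the exponent field
    unsigned fractionBits;      // width of the fraction (mantissa without hidden bit)
    bool     supportSubnormals; // FPGA-oriented formats often flush these to zero

    unsigned totalBits() const { return 1 + exponentBits + fractionBits; }
    int      bias()      const { return (1 << (exponentBits - 1)) - 1; }
};

int main() {
    FpFormat ieeeSingle{8, 23, true};   // IEEE 754 binary32 for reference
    FpFormat customFpga{8, 23, false};  // same widths, subnormals dropped

    std::cout << "binary32: " << ieeeSingle.totalBits() << " bits, bias "
              << ieeeSingle.bias() << "\n";
    std::cout << "custom  : " << customFpga.totalBits() << " bits, bias "
              << customFpga.bias() << "\n";
}
```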

Relevance: 20.00%

Abstract:

We describe a simple, public-domain HTML package for LP/CLP systems. The package makes it easy to generate HTML documents, including HTML forms, from LP/CLP systems. It also provides facilities for parsing the input provided by HTML forms, as well as for creating standalone form handlers. The purpose of this document is to serve as a user's manual as well as a short description of the capabilities of the package. The package was originally developed for SICStus Prolog and the UPM &-Prolog/CIAO systems, but has been adapted to a number of popular LP/CLP systems. The document is also a WWW/HTML primer, containing sufficient information for developing medium-complexity WWW applications in Prolog and other LP and CLP languages.

Relevance: 20.00%

Abstract:

A new method to analyze the influence of possible hysteresis cycles in devices employed for optical computing architectures is reported. A simple full adder structure is taken as the basis for this method. This structure is composed of single units, called optical programmable logic cells, previously reported by the authors. These cells employ, as basic devices, on-off and SEED-like components. Their hysteresis cycles have been modeled by numerical analysis. The influence of the different characteristic cycles is studied with respect to the possible errors obtained at the output. Two different approaches have been adopted. The first one shows the change in the arithmetic result at the output with respect to the different values and positions of the hysteresis cycle. The second one offers a similar result, but in a polar diagram where the overall behavior of the system is more easily analyzed.
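
The abstract does not give the numerical model of the cycles. As a minimal sketch only, the following C++ fragment models an on-off device with a hysteresis cycle as a Schmitt-trigger-like element and shows how shifting or widening the cycle changes the binary output obtained from the same input sequence, which is the kind of output error the method quantifies. Class names, cycle parameters and input values are illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

// Minimal on-off device with a hysteresis cycle (Schmitt-trigger-like model).
// The cycle is described by its centre and width; these parameters are
// illustrative, not the device model used in the paper.
class HystereticDevice {
public:
    HystereticDevice(double centre, double width)
        : low_(centre - width / 2), high_(centre + width / 2) {}

    // Returns the binary output for one input sample, keeping internal state.
    int drive(double input) {
        if (input >= high_) state_ = 1;
        else if (input <= low_) state_ = 0;
        // between the two thresholds the previous state is kept
        return state_;
    }
private:
    double low_, high_;
    int state_ = 0;
};

int main() {
    // The same input sequence seen by two devices whose hysteresis cycles
    // differ in position and width: the second one yields a different bit.
    std::vector<double> input = {0.2, 0.55, 0.45, 0.8, 0.45};
    HystereticDevice narrow(0.5, 0.05), shifted(0.6, 0.3);
    for (double x : input)
        std::cout << x << " -> " << narrow.drive(x)
                  << " / " << shifted.drive(x) << "\n";
}
```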

Relevance: 20.00%

Abstract:

In previous papers we proposed an optical communications system based on a digital chaotic signal, in which the synchronization of chaos was the main objective. In this paper we extend that work. The main objective now is a way to add the digital data signal to be transmitted onto the chaotic signal, and to receive it correctly. We report some methods to study the main characteristics of the resulting signal. The main problem in any real system is the presence of some delay between the time the signal is generated at the emitter and the time it is received. Any system using chaotic signals as an encryption method needs the emitter and the receiver to have the same characteristics; it is for this reason that control of the timing is needed. A method to control the chaotic signals in real time is reported.
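
The chaotic generator itself is described in the authors' previous papers and is not reproduced here. As a rough stand-in, the sketch below uses a logistic map to illustrate chaotic masking: the emitter adds a small-amplitude data signal to the chaotic carrier, and a receiver sharing the same parameters and initial condition regenerates the carrier and subtracts it. Any timing offset between the two sequences breaks the recovery, which is the delay problem the paper addresses. Map, parameters and threshold are assumptions, not the authors' system.

```cpp
#include <iostream>
#include <vector>

// Toy illustration of chaotic masking. A logistic map stands in for the
// paper's digital chaotic generator; emitter and receiver share the same
// map parameters and initial condition (all values assumed).
int main() {
    const double r = 3.99, x0 = 0.3141;   // shared chaos parameters
    const double amplitude = 0.01;        // data amplitude << chaos amplitude
    std::vector<int> bits = {1, 0, 1, 1, 0, 0, 1, 0};

    // Emitter: transmit chaos + small data perturbation.
    std::vector<double> tx;
    double x = x0;
    for (int b : bits) {
        x = r * x * (1.0 - x);
        tx.push_back(x + amplitude * b);
    }

    // Receiver: regenerate the same chaotic carrier and subtract it.
    // A one-sample delay mismatch here would make recovery fail.
    double y = x0;
    for (size_t i = 0; i < tx.size(); ++i) {
        y = r * y * (1.0 - y);
        int recovered = (tx[i] - y) > amplitude / 2 ? 1 : 0;
        std::cout << bits[i] << " -> " << recovered << "\n";
    }
}
```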

Relevance: 20.00%

Abstract:

A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens a new perspective on tomography using a low number of projections or a limited angular range.
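
For reference, one MLEM iteration can be sketched as below for a small dense system matrix. This is the textbook update rule, not TomoRebuild's optimized implementation; OSEM simply applies the same update to ordered subsets of the projection rows.

```cpp
#include <vector>

// One MLEM iteration for tomographic reconstruction, written for a small
// dense system matrix A (nProj x nVox). Textbook formulation only.
std::vector<double> mlemIteration(const std::vector<std::vector<double>>& A,
                                  const std::vector<double>& projections,
                                  const std::vector<double>& image) {
    const size_t nProj = A.size(), nVox = image.size();

    // Forward-project the current image estimate.
    std::vector<double> forward(nProj, 0.0);
    for (size_t i = 0; i < nProj; ++i)
        for (size_t j = 0; j < nVox; ++j)
            forward[i] += A[i][j] * image[j];

    // Back-project the measured/estimated ratio and normalize per voxel.
    std::vector<double> next(nVox, 0.0);
    for (size_t j = 0; j < nVox; ++j) {
        double num = 0.0, norm = 0.0;
        for (size_t i = 0; i < nProj; ++i) {
            if (forward[i] > 0.0)
                num += A[i][j] * projections[i] / forward[i];
            norm += A[i][j];
        }
        next[j] = norm > 0.0 ? image[j] * num / norm : image[j];
    }
    return next;   // OSEM: same update, restricted to ordered subsets of rows
}
```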

Relevance: 20.00%

Abstract:

We studied the coastal zone of the Tavoliere di Puglia plain (Puglia region, southern Italy) with the aim of recognizing the main unconformities, and therefore the unconformity-bounded stratigraphic units (UBSUs; Salvador 1987, 1994) forming its Quaternary sedimentary fill. Recognizing unconformities is particularly problematic in an alluvial plain, due to the difficulties in distinguishing the unconformities that bound the UBSUs. So far, the recognition of UBSUs in buried successions has been made mostly by using seismic profiles. In our case, their unavailability has prompted us to address the problem by developing a methodological protocol consisting of the following steps: I) geological survey in the field; II) draft of a preliminary geological setting based on the field-survey results; III) dating of 102 samples coming from a large number of boreholes and some outcropping sections by means of the amino acid racemization (AAR) method applied to ostracod shells and 14C dating, followed by filtering of the ages and selection of the valid ones; IV) correction of the preliminary geological setting in the light of the numerical ages; definition of the final geological setting with UBSUs; identification of a "hypothetical" or "attributed time range" (HTR or ATR) for each UBSU, the former very wide and subject to subsequent modification, the latter definitive; V) cross-checking between the numerical ages and/or other characteristics of the sedimentary bodies and/or the sea-level curves (with their effects on the sedimentary processes) in order to narrow the hypothetical time ranges into attributed time ranges. The successful application of AAR geochronology to ostracod shells relies on the fact that the ability of ostracods to colonize almost all environments constitutes a tool for correlation, and also allows the inclusion in the same unit of coeval sediments that differ lithologically and paleoenvironmentally. The treatment of the numerical ages obtained using the AAR method required special attention. The first filtering step was made by the laboratory (rejection criteria a and b). Then, the second filtering step was made by testing the remaining ages in the field. Among these, in fact, we never compared an age with a single preceding and/or following age; instead, we identified homogeneous groups of numerical ages consistent with their reciprocal stratigraphic position. This operation led to the rejection of further numerical ages that deviated erratically from a larger, homogeneous age population that fits well with its stratigraphic position (rejection criterion c). After all of the filtering steps, the remaining valid ages were used, together with lithological and paleoenvironmental criteria, for the subdivision of the sedimentary sequences into UBSUs. The numerical ages allowed us, in the first instance, to recognize all of the age gaps between two consecutive samples. Next, we identified the level, within the sedimentary thickness between these two samples, that best represents the UBSU boundary based on its lithology and/or paleoenvironment.
The recognized units are: I) Coppa Nevigata sands (NEA), HTR: MIS 20–14, ATR: MIS 17–16; II) Argille subappennine (ASP), HTR: MIS 15–11, ATR: MIS 15–13; III) Coppa Nevigata synthem (NVI), HTR: MIS 13–8, ATR: MIS 12–11; IV) Sabbie di Torre Quarto (STQ), HTR: MIS 13–9.1, ATR: MIS 11; V) Amendola subsynthem (MLM1), HTR: MIS 12–10, ATR: MIS 11; VI) Undifferentiated continental unit (UCI), HTR: MIS 11–6.2, ATR: MIS 9.3–7.1; VII) Foggia synthem (TGF), ATR: MIS 6; VIII) Masseria Finamondo synthem (TPF), ATR: Upper Pleistocene; IX) Carapelle and Cervaro streams synthem (RPL), subdivided into: IXa) Incoronata subsynthem (RPL1), HTR: MIS 6–3; ATR: MIS 5–3; IXb) Marane La Pidocchiosa–Castello subsynthem (RPL3), ATR: Holocene; X) Masseria Inacquata synthem (NAQ), ATR: Holocene. The possibility of recognizing and dating Quaternary units in an alluvial plain at the scale of a marine isotope stage constitutes a clear step forward compared with similar studies regarding other alluvial-plain areas, where Quaternary units were dated almost exclusively using their stratigraphic position and, as a result, were generically associated with a geological sub-epoch. Our method instead provided greater detail in the timing of the sedimentary processes: for example, MIS 11 and MIS 5.5 deposits have been recognized and characterized for the first time in the study area, highlighting their importance as phases of sedimentation.
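
As a very rough illustration only of the idea behind rejection criterion (c) described above: an age that deviates erratically from the otherwise consistent age-depth trend of its neighbours is discarded. The paper's filtering compares groups of ages against their reciprocal stratigraphic position by expert judgement, not by a fixed formula; the forward pass, tolerance and data below are purely hypothetical.

```cpp
#include <iostream>
#include <vector>

// Flag ages that are erratically younger than the already-accepted ages
// above them in a core (a crude proxy for rejection criterion c).
struct Sample { double depth_m; double age_ka; };

std::vector<bool> rejectErraticAges(const std::vector<Sample>& s, double tol_ka) {
    std::vector<bool> reject(s.size(), false);
    double lastAccepted = -1e9;            // ages should increase with depth
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i].age_ka < lastAccepted - tol_ka)
            reject[i] = true;              // clearly younger than the trend above
        else
            lastAccepted = s[i].age_ka;
    }
    return reject;
}

int main() {
    // Depths in metres, ages in ka, ordered by increasing depth (invented data).
    std::vector<Sample> core = {{2, 80}, {5, 110}, {7, 60}, {9, 150}, {12, 190}};
    auto reject = rejectErraticAges(core, 20.0);
    for (size_t i = 0; i < core.size(); ++i)
        std::cout << core[i].depth_m << " m, " << core[i].age_ka << " ka"
                  << (reject[i] ? "  <- rejected" : "") << "\n";
}
```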

Relevance: 20.00%

Abstract:

The high performance and capacity of current FPGAs make them suitable as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
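
The operator is built around the classical decomposition x^y = exp(y · ln x), which is why its subcomponents are a logarithm, a modified exponential and a truncated multiplier. The double-precision sketch below mirrors only this dataflow and a few of the special cases mandated by C99/IEEE 754-2008; it is not the last-bit-accurate fixed-point architecture generated by FloPoCo.

```cpp
#include <cmath>
#include <iostream>

// Sketch of the pow dataflow: log, multiply, exp, plus a few special cases.
// Negative x with integer y is deliberately omitted for brevity.
double powSketch(double x, double y) {
    if (y == 0.0) return 1.0;             // pow(x, 0) == 1, even for NaN x
    if (x == 1.0) return 1.0;             // pow(1, y) == 1, even for NaN y
    if (x < 0.0)  return std::nan("");    // non-integer y: domain error
    return std::exp(y * std::log(x));     // main path: exp o mul o log
}

int main() {
    std::cout << powSketch(2.0, 10.0) << " vs " << std::pow(2.0, 10.0) << "\n";
    std::cout << powSketch(0.5, 3.5)  << " vs " << std::pow(0.5, 3.5)  << "\n";
}
```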

Relevance: 20.00%

Abstract:

This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons that are available on the web in order to increase their robustness of coverage. One problem related to the task of automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, Z, N} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between -1 and 1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1,344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this represents a threefold reduction in the computation time for UnifiedMetrics.
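
The unification relies on the Pearson correlation coefficient between the scores that different source lexicons assign to shared entries. The sketch below computes r on the CPU for two hypothetical score vectors; the paper's UnifiedMetrics step and its distribution over 1,344 GPU cores are not reproduced.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Pearson correlation between the polarity scores that two source lexicons
// assign to the same lexical entries (input vectors are aligned by entry).
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
    const size_t n = a.size();
    double meanA = 0.0, meanB = 0.0;
    for (size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= n; meanB /= n;

    double cov = 0.0, varA = 0.0, varB = 0.0;
    for (size_t i = 0; i < n; ++i) {
        cov  += (a[i] - meanA) * (b[i] - meanB);
        varA += (a[i] - meanA) * (a[i] - meanA);
        varB += (b[i] - meanB) * (b[i] - meanB);
    }
    return cov / std::sqrt(varA * varB);   // r in [-1, 1]
}

int main() {
    // Scores of five shared entries in two hypothetical sentiment lexicons.
    std::vector<double> lexiconA = {0.8, -0.6, 0.1, 0.9, -0.3};
    std::vector<double> lexiconB = {0.7, -0.5, 0.0, 0.8, -0.4};
    std::cout << "r = " << pearson(lexiconA, lexiconB) << "\n";
}
```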

Relevance: 20.00%

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, especially suited to GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras typically used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use due to their high computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, especially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
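
The texture-based evaluation amounts to sampling the function at a set of points and letting the texture filtering unit linearly interpolate between the two nearest samples. The CPU sketch below reproduces that operation with a uniform sample grid; the uniform grid is a simplification, since the thesis derives a quasi-optimal, non-uniform partition to minimize the approximation error.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// CPU stand-in for piecewise linear function evaluation via a 1D lookup
// table: sample the function uniformly, then blend the two nearest samples,
// which is exactly what a GPU texture filtering unit does on a 1D texture.
class PiecewiseLinear {
public:
    template <typename F>
    PiecewiseLinear(F f, double lo, double hi, int nSamples)
        : lo_(lo), step_((hi - lo) / (nSamples - 1)) {
        for (int i = 0; i < nSamples; ++i) samples_.push_back(f(lo_ + i * step_));
    }
    double operator()(double x) const {
        double t = (x - lo_) / step_;
        int i = std::max(0, std::min((int)samples_.size() - 2, (int)t));
        double frac = t - i;                  // blend factor in [0, 1]
        return (1.0 - frac) * samples_[i] + frac * samples_[i + 1];
    }
private:
    double lo_, step_;
    std::vector<double> samples_;
};

int main() {
    // Approximate exp(x) on [0, 1] with 33 samples and compare a few points.
    PiecewiseLinear approxExp([](double x) { return std::exp(x); }, 0.0, 1.0, 33);
    double xs[] = {0.1, 0.5, 0.9};
    for (double x : xs)
        std::cout << x << ": " << approxExp(x) << " vs " << std::exp(x) << "\n";
}
```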