882 results for parallel processing systems


Relevance: 80.00%

Abstract:

Few studies have addressed the interaction between instruction content and saccadic eye movement control. To assess the impact of instructions on top-down control, we instructed 20 healthy volunteers to deliberately delay saccade triggering, to make inaccurate saccades, or to redirect saccades, i.e. to glance towards the target and then immediately to the opposite side. Regular pro- and antisaccade tasks were used for comparison. Bottom-up visual input remained unchanged, with a gap paradigm used for all instructions. In the delay and inaccuracy tasks, both latency and accuracy deteriorated under either instruction, and the variability of latency and accuracy increased. The intersaccadic interval (ISI) required to correct erroneous antisaccades was shorter than the ISI for instructed direction changes in the redirection task. Thus, the word-by-word content of an instruction interferes with top-down saccade control. Top-down control is a time-consuming process, which may override bottom-up processing only during a limited time period. It is questionable whether parallel processing is possible in top-down control, since the long ISI for instructed direction changes suggests sequential planning.

Relevance: 80.00%

Abstract:

Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
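
The three-phase structure lends itself to a compact sketch. In the following self-contained Python toy, a greedy longest-processing-time dispatcher stands in for the MILP phase and a simple job-rebalancing move stands in for the critical path-based improvement; the job data, the number of processing units, and all function names are invented for illustration and are not the authors' implementation.

    import heapq

    def makespan(assignments):
        return max(sum(d for _, d in jobs) for jobs in assignments)

    def schedule_subset(assignments, subset):
        # Phase 2 stand-in: longest job first onto the least-loaded unit.
        loads = [(sum(d for _, d in jobs), m) for m, jobs in enumerate(assignments)]
        heapq.heapify(loads)
        for job in sorted(subset, key=lambda j: -j[1]):
            load, m = heapq.heappop(loads)
            assignments[m].append(job)
            heapq.heappush(loads, (load + job[1], m))
        return assignments

    def improve(assignments):
        # Phase 3 stand-in: move a job off the critical (longest) unit
        # whenever doing so lowers the makespan.
        improved = True
        while improved:
            improved = False
            longest = max(assignments, key=lambda jobs: sum(d for _, d in jobs))
            shortest = min(assignments, key=lambda jobs: sum(d for _, d in jobs))
            before = makespan(assignments)
            job = longest.pop()
            shortest.append(job)
            if makespan(assignments) < before:
                improved = True
            else:                        # undo the move if it did not help
                shortest.pop()
                longest.append(job)
        return assignments

    jobs = [("op%d" % i, d) for i, d in enumerate([7, 3, 9, 4, 6, 2, 8, 5])]
    subsets = [jobs[i::3] for i in range(3)]   # Phase 1: partition the operations
    assignments = [[] for _ in range(3)]       # three parallel processing units
    for subset in subsets:                     # Phase 2: iterative scheduling
        assignments = schedule_subset(assignments, subset)
    print(makespan(improve(assignments)))      # Phase 3: improvement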

Relevance: 80.00%

Abstract:

In this work we propose an image acquisition and processing methodology (framework) developed for in-field detection and quantification of grapes and leaves, based on a six-step methodology: 1) image segmentation through Fuzzy C-Means with Gustafson-Kessel (FCM-GK) clustering; 2) use of the FCM-GK outputs (centroids) as seeds for K-Means clustering; 3) identification of the clusters generated by K-Means using a Support Vector Machine (SVM) classifier; 4) morphological operations over the grape and leaf clusters in order to fill holes and to eliminate small pixel clusters; 5) creation of a mosaic image by means of the Scale-Invariant Feature Transform (SIFT) in order to avoid overlap between images; 6) calculation of the leaf and grape areas and location of the centroids of the grape bunches. Image data are collected using a colour camera fixed to a mobile platform. This platform was developed to provide a stabilised surface that guarantees the images are acquired parallel to the vineyard rows; in this way, the platform avoids the image distortion that leads to poor estimation of the areas. Our preliminary results are promising, although they show that a camera stabilisation system is still needed to avoid undesired camera movements, as well as a parallel processing procedure to speed up the mosaicking process.
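
Steps 2 to 4 of the pipeline can be sketched as follows. This is a hedged illustration using scikit-learn and SciPy: the FCM-GK centroids are assumed to come from a separate fuzzy-clustering step, and the labelled training pixels, class names, and minimum-size threshold are invented placeholders rather than the paper's actual values.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from scipy import ndimage

    def segment(image_rgb, fcm_gk_centroids, train_pixels, train_labels):
        h, w, _ = image_rgb.shape
        pixels = image_rgb.reshape(-1, 3).astype(float)

        # Step 2: FCM-GK centroids seed K-Means (n_init=1 keeps the seeds).
        km = KMeans(n_clusters=len(fcm_gk_centroids),
                    init=fcm_gk_centroids, n_init=1).fit(pixels)

        # Step 3: an SVM trained on labelled pixels names each cluster
        # ("grape", "leaf", ...) by classifying its centroid.
        svm = SVC().fit(train_pixels, train_labels)
        cluster_names = svm.predict(km.cluster_centers_)
        grape_mask = (cluster_names[km.labels_] == "grape").reshape(h, w)

        # Step 4: fill holes and drop small pixel clusters.
        grape_mask = ndimage.binary_fill_holes(grape_mask)
        labelled, n = ndimage.label(grape_mask)
        sizes = ndimage.sum(grape_mask, labelled, range(1, n + 1))
        return np.isin(labelled, np.flatnonzero(sizes >= 50) + 1)

    # Tiny synthetic demo: two colour populations standing in for grape/leaf.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, size=(40, 40, 3)).astype(float)
    cents = np.array([[60, 60, 60], [200, 200, 200]], dtype=float)
    mask = segment(img, cents, cents, np.array(["grape", "leaf"]))
    print(mask.shape, mask.dtype)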

Relevance: 80.00%

Abstract:

Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
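
As a minimal illustration of task granularity control, the sketch below (an invented Python analogue, not a logic programming system) runs a task sequentially when its estimated size falls below a threshold and spawns parallel subtasks only above it; the length-based cost model is a deliberately crude stand-in for the compile-time granularity estimates the text refers to.

    from concurrent.futures import ThreadPoolExecutor

    GRAIN = 10_000   # granularity threshold: below this, spawning is not worth it

    def work(chunk):
        return sum(x * x for x in chunk)

    def parallel_sum(data):
        if len(data) <= GRAIN:                 # small task: run it sequentially
            return work(data)
        chunks = [data[i:i + GRAIN] for i in range(0, len(data), GRAIN)]
        with ThreadPoolExecutor() as pool:     # large task: spawn parallel subtasks
            # (threads only illustrate the spawn/no-spawn decision;
            #  CPython's GIL limits actual CPU speedup)
            return sum(pool.map(work, chunks))

    print(parallel_sum(list(range(100_000))))  # large input: parallel path
    print(parallel_sum(list(range(100))))      # small input: sequential path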

Relevance: 80.00%

Abstract:

Mersenne Twister (MT) uniform random number generators are key cores for the hardware acceleration of Monte Carlo simulations. In this work, two architectures are studied: besides the classical table-based architecture, a new architecture based on a circular buffer, especially targeting FPGAs, is proposed. A 30% performance improvement over the fastest previous work has been obtained. The applicability of the proposed MT architectures has been proven in a high-performance Gaussian RNG.
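
The circular-buffer idea can be illustrated in software: rather than regenerating the whole 624-word table in one pass, the sketch below twists a single MT19937 word in place per call and lets the index wrap around, mirroring a hardware circular buffer. The constants are the standard MT19937 ones; this is a minimal software analogue, not the paper's FPGA design.

    N, M = 624, 397
    MASK32 = 0xFFFFFFFF

    class CircularMT:
        def __init__(self, seed=5489):
            self.state = [seed & MASK32]
            for i in range(1, N):              # standard MT19937 seeding
                prev = self.state[-1]
                self.state.append((1812433253 * (prev ^ (prev >> 30)) + i) & MASK32)
            self.i = 0

        def next_word(self):
            s, i = self.state, self.i
            y = (s[i] & 0x80000000) | (s[(i + 1) % N] & 0x7FFFFFFF)
            x = s[(i + M) % N] ^ (y >> 1) ^ (0x9908B0DF if y & 1 else 0)
            s[i] = x                           # twist one word in place
            self.i = (i + 1) % N               # circular index wraps around
            y = x ^ (x >> 11)                  # tempering
            y = y ^ ((y << 7) & 0x9D2C5680)
            y = y ^ ((y << 15) & 0xEFC60000)
            return y ^ (y >> 18)

    rng = CircularMT(seed=42)
    print([rng.next_word() for _ in range(3)])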

Relevance: 80.00%

Abstract:

Current communication systems face many new challenges, such as competing standards and the scarcity of frequency resources; in particular, personal wireless communication systems now evolve faster than ever before, and conventional hardware-based wireless communication systems struggle to adapt. The emergence of Software Defined Radio (SDR) enabled a third revolution in wireless communication, shifting functionality from hardware to software and providing a flexible, reliable, upgradable, reusable, reconfigurable, and low-cost platform. The Universal Software Radio Peripheral (USRP) products are commonly used with the GNU Radio software suite to create complex SDR systems. GNU Radio is a toolkit in which digital signal processing blocks are written in C++ and connected to each other with Python; because many blocks have already been written by others, sophisticated signal processing systems can be assembled quickly. Although GNU Radio is not primarily a simulator, it also supports research on signal processing algorithms using pre-stored data or data produced by a signal generator when no RF hardware is available. This thesis introduces the SDR platform from both the hardware (USRP) and software (GNU Radio) sides, together with some basic modulation techniques used in wireless communication systems. The work employed a commercial development kit (Ettus Research) consisting of a signal processing module and simple RF hardware: a general-purpose microprocessor, a programmable device (FPGA), and a radio-frequency front end covering 50 to 2200 MHz, connected to the PC through an 8 Mb/s USB interface, with applications running on a Linux-based development environment. Starting from the examples provided by GNU Radio, several experiments were carried out, such as GSM spectrum scanning and FM radio reception on the USRP. More complex applications described in the literature were then improved and optimised to observe an OFDM spectrum and to simulate real-time video transmission. GNU Radio combined with USRP hardware proved to be a valuable lab platform for implementing complex radio system prototypes in a short time, and these results open the way to developing more complex applications.
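
The kind of hardware-free experiment mentioned above, working on generated rather than received samples, can be sketched in a few lines of NumPy: synthesise an FM signal and recover the message with a quadrature demodulator, the same operation GNU Radio performs in its FM receive chain. All parameters are illustrative.

    import numpy as np

    fs = 250_000                                  # sample rate (Hz)
    t = np.arange(fs) / fs                        # one second of samples
    message = np.sin(2 * np.pi * 1_000 * t)       # 1 kHz test tone
    k = 75_000                                    # frequency deviation (Hz)

    # FM modulation: integrate the message to obtain the phase.
    phase = 2 * np.pi * k * np.cumsum(message) / fs
    iq = np.exp(1j * phase)                       # complex baseband signal

    # Quadrature demodulation: the angle between consecutive samples is
    # proportional to the instantaneous frequency.
    demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * k)
    print(np.max(np.abs(demod - message[1:])))    # small reconstruction error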

Relevance: 80.00%

Abstract:

As with all forms of transport, the safety of air travel is of paramount importance. With the projected increases in European air traffic in the next decade and beyond, it is clear that the risk of accidents needs to be assessed and carefully monitored on a continuing basis. The present thesis is aimed at the development of a comprehensive collision risk model as a method of assessing the European en-route risk, due to all causes and across all dimensions within the airspace. The major constraint in developing appropriate monitoring methodologies and tools to assess the level of safety in en-route airspaces, where controllers monitor air traffic by means of radar surveillance and provide aircraft with tactical instructions, lies in the estimation of the operational risk. Today, the operational risk estimate normally relies on incident reports provided by the air navigation service providers (ANSPs). This thesis proposes a new and innovative approach to assessing aircraft safety levels based exclusively upon the processing and analysis of radar tracks. The proposed methodology has been designed to complement the information collected in the accident and incident databases, thereby providing robust information on air traffic factors and safety metrics inferred from the in-depth automatic assessment of all proximate events. The 3-D CRM methodology has been implemented in a MATLAB prototype that automatically analyses the radar tracks and flight plans recorded by the Radar Data Processing systems (RDP) and identifies and analyses all proximate events (conflicts, potential conflicts, and potential collisions) within a given time span and volume of airspace. Currently, the 3-D CRM prototype is being adapted and integrated into Aena's performance monitoring tool (PERSEO) to complement the ATM accident and incident databases, enhance monitoring, and provide evidence of the levels of safety.
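
The core of such a proximate-event scan over radar tracks can be sketched as follows; the track format, the 5 NM / 1000 ft thresholds (standard en-route separation minima), and the function names are illustrative assumptions, not the thesis' actual implementation.

    import math

    H_MIN_NM, V_MIN_FT = 5.0, 1000.0     # en-route separation minima

    def horizontal_nm(lat1, lon1, lat2, lon2):
        # Great-circle distance in nautical miles (haversine formula).
        la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((la2 - la1) / 2) ** 2
             + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 3440.065 * 2 * math.asin(math.sqrt(a))   # mean Earth radius in NM

    def proximate_events(track_a, track_b):
        # Tracks are time-aligned lists of (time_s, lat_deg, lon_deg, alt_ft).
        events = []
        for (t, lat1, lon1, alt1), (_, lat2, lon2, alt2) in zip(track_a, track_b):
            if (horizontal_nm(lat1, lon1, lat2, lon2) < H_MIN_NM
                    and abs(alt1 - alt2) < V_MIN_FT):
                events.append(t)
        return events

    a = [(0, 40.00, -3.0, 35000), (4, 40.02, -3.0, 35000)]
    b = [(0, 40.50, -3.0, 36000), (4, 40.05, -3.0, 35500)]
    print(proximate_events(a, b))        # -> [4]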

Relevance: 80.00%

Abstract:

The purpose of this thesis is to study the possibilities of performing problem-solving tasks in Spanish with knowledge-based systems. The first two chapters analyse the development of natural language processing techniques, with particular attention to the logical formalisms that have been used for language understanding, followed by an assessment of the current state of the art in natural language processing systems. Finally, we present the core of this work: Sirena, a system that allows the acquisition, understanding, retrieval, and explanation of knowledge in Spanish with knowledge-based systems. Sirena handles a large, although simple, subset of Spanish, formalised by means of a logic grammar; the meaning of knowledge is based on logic, and the system has been implemented in the logic programming language Prolog II v2. Keywords: Logic Programming, Natural Language Understanding, Problem Solving, Logic Grammars, Computational Linguistics, Artificial Intelligence.
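
To make the notion of a logic grammar concrete, the toy below encodes a tiny Spanish subset as rules and checks sentences with a naive recursive recogniser; the grammar, lexicon, and parsing strategy are invented illustrations in Python, not Sirena's actual logic grammar or its Prolog implementation.

    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"]],
    }
    LEXICON = {"el": "Det", "la": "Det", "sistema": "N",
               "respuesta": "N", "explica": "V"}

    def parse(symbol, words):
        # Yield the leftover words for each way `symbol` derives a prefix of `words`.
        if symbol in LEXICON.values():
            if words and LEXICON.get(words[0]) == symbol:
                yield words[1:]
            return
        for rhs in GRAMMAR[symbol]:
            rests = [words]
            for part in rhs:
                rests = [r2 for r in rests for r2 in parse(part, r)]
            yield from rests

    sentence = "el sistema explica la respuesta".split()
    print(any(rest == [] for rest in parse("S", sentence)))   # True: grammatical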

Relevance: 80.00%

Abstract:

Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
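
A minimal sketch of the idea, assuming invented "ex:" vocabulary terms rather than the paper's four actual ontologies: the resources of an execution environment are recorded as RDF triples with rdflib and serialised for later re-provisioning.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/environment#")
    g = Graph()
    g.bind("ex", EX)

    vm = EX["vm-image-001"]
    g.add((vm, RDF.type, EX.VirtualMachineImage))
    g.add((vm, EX.operatingSystem, Literal("Ubuntu 20.04")))
    g.add((vm, EX.installedSoftware, Literal("python 3.10")))
    g.add((vm, EX.installedSoftware, Literal("workflow-engine 2.1")))
    g.add((vm, EX.memoryMB, Literal(8192)))

    print(g.serialize(format="turtle"))   # environment description for reuse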

Relevance: 80.00%

Abstract:

This thesis presents a model, a methodology, an architecture, several algorithms, and programs for building a Unified Sentiment Lexicon (USL) covering four languages: English, Spanish, Portuguese, and Chinese. The main goal is to align, unify, and expand the set of sentiment lexicons available on the web together with those developed during this research, thereby increasing their robustness and coverage. The central problem is the automatic unification of the sentiment lexicons obtained by the CSR crawler: the unit of measurement used to assign polarity strength (manually, semi-automatically, or automatically) varies with the methodology used to build each lexicon, and the encoded data structure of the entries also varies from lexicon to lexicon. Unifying them into a single sentiment lexicon makes it possible to reuse the knowledge gathered by different research groups while increasing the scope, quality, and robustness of the lexicons. The USL methodology computes a unified polarity strength for every lexical entry present in at least two of the lexicons under study; entries not shared by at least two lexicons keep their original values. The Pearson correlation coefficient measures how correlated lexical entries are, with values between 1 and -1, where 1 indicates that the entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; this is carried out by the UnifiedMetrics procedure on both the CPU and the GPU. A second problem is the high processing time required to unify the polarity strengths and thus reach a wider coverage of lemmas in the existing sentiment lexicons. The USL methodology therefore uses parallel processing to unify the 155,802 terms, distributing equal loads of lexical entries across the 1,344 GPU cores. The analysis yielded 95,430 lexical entries, of which 35,201 are positive, 22,029 negative, and 38,200 neutral. The runtime was 2.506 seconds for all entries, cutting processing time to roughly a third of the sequential implementation. A unified sentiment lexicon that homogenises the polarity strength of lexical units (positive, negative, and neutral) supports not only the semantic analysis of a corpus based on the most polarised terms, review summarisation, and neuromarketing trend analysis, but also applications such as the subjective tagging of websites and of syntactic and semantic portals. A key contribution of this work is the preservation of a single unified sentiment lexicon for all tasks; it is used to define resources and resource-related properties that can be verified against the results of the analysis, and it is general and extensible enough to express a large class of interesting properties. Applications of this work include merging, aligning, pruning, and extending the current sentiment lexicons.
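
The unification rule described above, illustrated on toy data: entries shared by two lexicons get a unified score (here simply the mean) and their agreement is measured with the Pearson coefficient, while unshared entries keep their original values. The lexicons and scores are invented, not the USL data.

    import numpy as np

    lex_a = {"good": 0.8, "bad": -0.7, "ok": 0.1, "awful": -0.9}
    lex_b = {"good": 0.9, "bad": -0.6, "ok": 0.2, "great": 1.0}

    shared = sorted(lex_a.keys() & lex_b.keys())
    a = np.array([lex_a[w] for w in shared])
    b = np.array([lex_b[w] for w in shared])

    r = np.corrcoef(a, b)[0, 1]          # Pearson correlation of shared entries
    unified = {w: (lex_a[w] + lex_b[w]) / 2 for w in shared}
    unified.update({w: s for lex in (lex_a, lex_b)   # unshared entries unchanged
                    for w, s in lex.items() if w not in shared})
    print(round(r, 3), unified)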

Relevance: 80.00%

Abstract:

A genetic annealing model for the universal ancestor of all extant life is presented; the name of the model derives from its resemblance to physical annealing. The scenario pictured starts when “genetic temperatures” were very high, cellular entities (progenotes) were very simple, and information processing systems were inaccurate. Initially, both mutation rate and lateral gene transfer levels were elevated. The latter was pandemic and pervasive to the extent that it, not vertical inheritance, defined the evolutionary dynamic. As increasingly complex and precise biological structures and processes evolved, both the mutation rate and the scope and level of lateral gene transfer, i.e., evolutionary temperature, dropped, and the evolutionary dynamic gradually became that characteristic of modern cells. The various subsystems of the cell “crystallized,” i.e., became refractory to lateral gene transfer, at different stages of “cooling,” with the translation apparatus probably crystallizing first. Organismal lineages, and so organisms as we know them, did not exist at these early stages. The universal phylogenetic tree, therefore, is not an organismal tree at its base but gradually becomes one as its peripheral branchings emerge. The universal ancestor is not a discrete entity. It is, rather, a diverse community of cells that survives and evolves as a biological unit. This communal ancestor has a physical history but not a genealogical one. Over time, this ancestor refined into a smaller number of increasingly complex cell types with the ancestors of the three primary groupings of organisms arising as a result.

Relevance: 80.00%

Abstract:

In the mammalian retina, extensive processing of spatiotemporal and chromatic information occurs. One key principle in signal transfer through the retina is parallel processing. Two of these parallel pathways are the ON- and OFF-channels transmitting light and dark signals. This dual system is created in the outer plexiform layer, the first relay station in retinal signal transfer. Photoreceptors release glutamate onto ON- and OFF-type bipolar cells, which are functionally distinguished by their postsynaptic expression of different types of glutamate receptors, namely ionotropic and metabotropic glutamate receptors. In the current concept, rod photoreceptors connect only to rod bipolar cells (ON-type) and cone photoreceptors connect only to cone bipolar cells (ON- and OFF-type). We have studied the distribution of (RS)-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) glutamate receptor subunits at the synapses in the outer plexiform layer of the rodent retina by immunoelectron microscopy and serial section reconstruction. We report a non-classical synaptic contact and an alternative pathway for rod signals in the retina. Rod photoreceptors made synaptic contact with putative OFF-cone bipolar cells that expressed the AMPA glutamate receptor subunits GluR1 and GluR2 on their dendrites. Thus, in the retina of mouse and rat, an alternative pathway for rod signals exists, where rod photoreceptors bypass the rod bipolar cell and directly excite OFF-cone bipolar cells through an ionotropic sign-conserving AMPA glutamate receptor.

Relevance: 80.00%

Abstract:

The overall objective of this project is the study, development, and experimental evaluation of different techniques and systems based on Human Language Technologies (Tecnologías del Lenguaje Humano, TLH) for the next generation of intelligent digital information processing systems (modelling, retrieval, treatment, understanding, and discovery), addressing the current challenges of digital communication. In this new scenario, systems must incorporate reasoning capabilities that uncover the subjectivity of information in all its contexts (spatial, temporal, and emotional), analysing the different dimensions of use (multilinguality, multimodality, and register).

Relevance: 80.00%

Abstract:

The exponential increase of subjective, user-generated content since the birth of the Social Web has led to the need for automatic text processing systems able to extract, process, and present relevant knowledge. In this paper, we tackle the Opinion Retrieval, Mining and Summarization task by proposing a unified framework composed of three crucial components (information retrieval, opinion mining, and text summarization) that allows the retrieval, classification, and summarization of subjective information. An extensive analysis is conducted in which different configurations of the framework are proposed and analyzed in order to determine which is the best one, and under which conditions. The evaluation carried out and the results obtained show the appropriateness of the individual components, as well as of the framework as a whole. By achieving an improvement of over 10% compared to state-of-the-art approaches in the context of blogs, we can conclude that subjective text can be dealt with efficiently by means of the proposed framework.
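
A deliberately simple sketch of the three-component pipeline, with TF-IDF retrieval, a tiny lexicon-based polarity classifier, and first-sentence extraction standing in for the real information retrieval, opinion mining, and summarization components; all data and word lists are invented, not the paper's configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    POSITIVE = {"great", "good", "love"}
    NEGATIVE = {"bad", "poor", "hate"}

    def polarity(text):
        words = set(text.lower().replace(".", " ").split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    def pipeline(docs, query, top_k=2):
        # 1) Information retrieval: rank documents by cosine similarity.
        vec = TfidfVectorizer().fit(docs + [query])
        sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
        retrieved = [docs[i] for i in sims.argsort()[::-1][:top_k]]
        # 2) Opinion mining: keep only subjective (non-neutral) documents.
        opinions = [(d, polarity(d)) for d in retrieved if polarity(d) != 0]
        # 3) Summarization: first sentence of each opinionated document.
        return [(d.split(".")[0], "pos" if p > 0 else "neg") for d, p in opinions]

    docs = ["The battery is great. Lasts all day.",
            "Screen is bad. Poor colours.",
            "Ships in a blue box."]
    print(pipeline(docs, "battery screen quality"))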

Relevance: 80.00%

Abstract:

Thesis (M.S.) - University of Illinois at Urbana-Champaign.