978 results for open video tools


Relevance:

30.00%

Publisher:

Abstract:

Transmission errors are the main cause of quality degradation in real broadcast video services. Therefore, knowing their impact on the quality of experience of the end users is a crucial issue: for instance, it would help to improve the performance of distribution systems and to develop monitoring tools that automatically estimate the quality perceived by end users. In this paper we validate a subjective evaluation approach specifically designed to obtain meaningful results on the effects of degradations caused by transmission errors. This methodology has already been used in our previous work with monoscopic and stereoscopic videos. The validation is done by comparing the subjective ratings obtained for typical transmission errors with the proposed methodology and with the standard Absolute Category Rating method. The results show that the proposed approach could provide more representative evaluations of the quality of experience perceived by end users of conventional and 3D broadcast video services.
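As a rough illustration of how such subjective scores are summarized before the two methodologies are compared, the sketch below computes a per-condition mean opinion score (MOS) and its confidence interval; the ratings and panel size are hypothetical, and the statistics shown are the conventional ones, not necessarily those used in the paper.

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean opinion score and confidence interval for one test condition.

    `ratings` is a 1-D array of ACR scores (1 = bad ... 5 = excellent)
    given by the panel of observers to a single processed sequence.
    """
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    # Student-t interval, the usual choice for small subjective panels.
    half_width = stats.t.ppf((1 + confidence) / 2, df=len(r) - 1) * stats.sem(r)
    return mos, half_width

# Hypothetical ratings for one error-impaired sequence (15 observers).
scores = [4, 3, 4, 5, 3, 4, 4, 2, 3, 4, 5, 4, 3, 3, 4]
mos, ci = mos_with_ci(scores)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```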

Relevance:

30.00%

Publisher:

Abstract:

The increasing use of video editing software requires faster and more efficient editing tools. As a first step, these tools perform a temporal segmentation into shots, which allows the later construction of indexes describing the video content. Here, we propose a novel real-time, high-quality shot detection strategy suitable for the latest generation of video editing software, which requires both low computational cost and high-quality results. While abrupt transitions are detected through a very fast pixel-based analysis, gradual transitions are obtained from an efficient edge-based analysis. Both analyses are reinforced with a motion analysis that helps to detect and discard false detections. This motion analysis is carried out exclusively over a reduced set of candidate transitions, thus keeping the computational cost within the requirements that new applications demand to fulfill user needs.
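As an illustration of the kind of pixel-based analysis used for abrupt transitions, the sketch below flags candidate cuts from the mean luminance difference between consecutive frames; the threshold is hypothetical, and the gradual-transition and motion analyses described above are not reproduced.

```python
import cv2
import numpy as np

def abrupt_shot_boundaries(video_path, threshold=30.0):
    """Flag candidate abrupt transitions (cuts) with a plain pixel-based test.

    A cut is reported when the mean absolute luminance difference between
    consecutive frames exceeds `threshold`. Gradual transitions and the
    motion-based filtering described in the abstract need further analysis.
    """
    cap = cv2.VideoCapture(video_path)
    boundaries, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None and np.abs(gray - prev).mean() > threshold:
            boundaries.append(idx)   # cut between frame idx-1 and idx
        prev, idx = gray, idx + 1
    cap.release()
    return boundaries
```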

Relevance:

30.00%

Publisher:

Abstract:

Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery.

Relevance:

30.00%

Publisher:

Abstract:

Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test the architecture and functionality it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best-practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and access all of the above functions.

Relevance:

30.00%

Publisher:

Abstract:

En esta Tesis se plantea una nueva forma de entender la evacuación apoyándonos en tecnologías existentes y accesibles que nos permitirán ver este proceso como un ente dinámico. Se trata de una metodología que implica no solo el uso de herramientas de análisis que permitan la definición de planes de evacuación en tiempo real, sino que también se apunta hacia la creación de una infraestructura física que permita alimentar con información actualizada al sistema de forma que, según la situación y la evolución de la emergencia, sea posible realizar planes alternativos que se adapten a las nuevas circunstancias. En base a esto, el sistema asimilará toda esa información y aportará soluciones que faciliten la toma de decisiones durante toda la evolución del incidente. Las aportaciones originales de esta Tesis son múltiples y muy variadas, pudiéndolas resumir en los siguientes puntos: 1. Estudio completo del estado del arte: a. Detección y análisis de diferentes proyectos a nivel internacional que de forma parcial tratan algunos aspectos desarrollados en la Tesis. b. Completo estudio a nivel mundial del software desarrollado total o parcialmente para la simulación del comportamiento humano y análisis de procesos de evacuación. Se ha generado una base de datos que cataloga de forma exhaustiva estas aplicaciones, permitiendo realizar un completo análisis y posibilitando la evolución futura de los contenidos de la misma. En la tesis se han analizado casi un centenar de desarrollos, pero el objetivo es seguir completando esta base de datos debido a la gran utilidad y a las importantes posibilidades que ofrece. 2. Desarrollo de un importante capítulo que trata sobre la posibilidad de utilizar entornos virtuales como alternativa intermedia al uso de simuladores y simulacros. En esta sección se divide en dos bloques: a. Ensayos en entornos reales y virtuales. b. Ensayos en entornos virtuales (pruebas realizadas con varios entornos virtuales). 3. Desarrollo de e-Flow net design: paquete de herramientas desarrolladas sobre Rhinoceros para el diseño de la red de evacuación basada en los elementos definidos en la tesis: Nodes, paths, Relations y Areas. 4. Desarrollo de e-Flow Simulator: Conjunto de herramientas que transforman Rhinoceros en un simulador 3D de comportamiento humano. Este simulador, de desarrollo propio, incorpora un novedoso algoritmo de comportamiento a nivel de individuo que incluye aspectos que no se han encontrado en otros simuladores. Esta herramienta permite realizar simulaciones programadas de grupos de individuos cuyo comportamiento se basa en el análisis del entorno y en la presencia de referencias dinámicas. Incluye otras importantes novedades como por ejemplo: herramientas para análisis de la señalización, elementos de señalización dinámica, incorporación sencilla de obstáculos, etc. También se ha creado una herramienta que posibilita la implementación del movimiento del propio escenario simulando la oscilación del mismo, con objeto de reflejar la influencia del movimiento del buque en el desplazamiento de los individuos. 5. En una fase avanzada del desarrollo, se incorporó la posibilidad de generar un vídeo de toda la simulación, momento a partir del cual, se han documentado todas las pruebas (y se continúan documentando) en una base de datos que recoge todas las características de las simulaciones, los problemas detectados, etc. Estas pruebas constituyen, en el momento en que se ha cerrado la redacción de la Tesis, un total de 81 GB de datos. 
Generación y análisis de rutas en base a la red de evacuación creada con e-Flow Net design y las simulaciones realizadas con e-Flow Net simulator. a. Análisis para la optimización de la configuración de la red en base a los nodos por área existentes. b. Definición de procesos previos al cálculo de rutas posibles. c. Cálculo de rutas: i. Análisis de los diferentes algoritmos que existen en la actualidad para la optimización de rutas. ii. Desarrollo de una nueva familia de algoritmos que he denominado “Minimum Decision Algorithm (MDA)”, siendo los algoritmos que componen esta familia: 1. MDA básico. 2. MDA mínimo. 3. MDA de no interferencia espacial. 4. MDA de expansión. 5. MDA de expansión ordenada para un único origen. 6. MDA de expansión ordenada. iii. Todos estos algoritmos se han implementado en la aplicación e-Flow creada en la Tesis para el análisis de rutas y que constituye el núcleo del Sistema de Ayuda al Capitán. d. Determinación de las alternativas para el plan de evacuación. Tras la definición de las rutas posibles, se describen diferentes procesos existentes de análisis por ponderación en base a criterios, para pasar finalmente a definir el método de desarrollo propio propuesto en esta Tesis y cuyo objetivo es responder en base a la población de rutas posibles obtenidas mediante los algoritmos MDA, qué combinación de rutas constituyen el Plan o Planes más adecuados para cada situación. La metodología creada para la selección de combinaciones de rutas que determinan un Plan completo, se basa en cuatro criterios básicos que tras su aplicación ofrecen las mejores alternativas. En esta fase también se incluye un complejo análisis de evolución temporal que incorpora novedosas definiciones y formulaciones. e. Derivado de la definición de la metodología creada en esta Tesis para la realización de los análisis de evolución temporal, se ha podido definir un nuevo teorema matemático que se ha bautizado como “Familia de cuadriláteros de área constante”. 7. Especificación de la infraestructura física del Sistema de Ayuda al Capitán: parte fundamental de sistema es la infraestructura física sobre la que se sustentaría. Esta infraestructura estaría compuesta por sensores, actuadores, aplicaciones para dispositivos móviles, etc. En este capítulo se analizan los diferentes elementos que la constituirían y las tecnologías implicadas. 8. Especificación de la infraestructura de servicios. 9. Creación del Blog Virtual Environments (http://epcinnova-virtualenvironments.blogspot.com.es/) en el que se han publicado todas las pruebas realizadas en el capítulo que analiza los entornos virtuales como alternativa a los simuladores y a los ensayos en laboratorio o los simulacros, incluyendo en muchos casos la posibilidad de que el visitante del blog pueda realizar la simulación en el entorno virtual. Este blog también incluye otras secciones que se han trabajado durante la Tesis: • Recopilación de diferentes entornos virtuales existentes. • Diagrama que recopila información sobre accidentes tanto en el ámbito marítimo como en el terrestre (en desarrollo). • Esquema propuesto para el acopio de información obtenida a partir de un simulacro. 10. Esta Tesis es la base para el proyecto e-Flow (nombre de una de las aplicaciones que desarrolladas en esta obra), un proyecto en el que el autor de esta Tesis ha trabajado como Project Manager. En el proyecto participa un consorcio de empresas y la UPM, y tiene como objetivo trasladar a la realidad gran parte de los planteamientos e ideas presentadas en esta Tesis. 
Este proyecto incluye el desarrollo de la infraestructura física y de servicios que permitirán, entre otras cosas, implementar en infraestructuras complejas una plataforma que posibilita la evacuación dinámica y un control ubicuo de los sistemas de monitorización y actuación implementados. En estos momentos se está finalizando el proyecto, cuyo objetivo final es la implementación de un piloto en un Hospital. También destacamos los siguientes avances a nivel de difusión científico-tecnológico: • Ponencia en el “52 congreso de la Ingeniería Naval en España” presentando un artículo “e-Flow- Sistema integral inteligente de soporte a la evacuación”. En este artículo se trata tanto el proyecto e-Flow del que soy Project Manager, como esta Tesis Doctoral, al ser temas estrechamente vinculados. En 2014 se publicó en dos números de la Revista Ingeniería Naval el artículo presentado a estas jornadas. • Co-autor en el artículo “E-Flow: A communication system for user notification in dynamic evacuation scenarios” presentado en el 7th International Conference on Ubicuous Computing & Ambient Intelligence (UCAMI) celebrado en Costa Rica. Por último, una de las aportaciones más interesantes, es la definición de un gran número de líneas de investigación futuras en base a todos los avances realizados en esta Tesis. ABSTRACT With this Thesis a new approach for understanding evacuation process is considered, taking advantage of the existing and open technologies that will allow this process to be interpreted as a dynamic entity. The methodology involves not only tools that allows on.-time evacuation plans, but also creates a physical insfrastructure that makes possible to feed the system with information on real time so, considering in each moment the real situation as well as the specific emergency development it will be feasible to generate alternative plans that responds to the current emergency situation. In this respect, the system will store all this information and will feedback with solutions that will help the decision making along the evacuation process. The innovative and singular contributions of this Thesis are numerous and rich, summarised as follows: 1.- Complete state-of-art study: a. Detection and analysis of different projects on an international level that, although partially, deal with some aspects developed in this Thesis. b. Thorough study at a international level of the developed software - total or partially done - for the simulation of the human behaviour and evacuation processes analysis. A database has been generated that classifies in detail these applications allowing to perform a full analysis and leading to future evolution of its contents. Within the Thesis work, almost a hundred of developments have been analysed but the purpose is to keep up updating this database due to the broad applications and possibilities that it involves. 2. Development of an important chapter that studies the possibility of using virtual scenarios as mid-term alternative for the use of simulations. This section is divided in two blocks: a. Trials in virtual and real scenarios b. Trials in virutal scenarios (trials performed with several ones). 3. E-Flow net design development: Set of tools developed under Rhinoceros for the evacuation net design based on the elements defined in the Thesis: Nodes, Paths, Relations, Areas 4. E-Flow simulator development: Set of tools that uses Rhinoceros as a 3D simulator of human behaviour. 
This simulator, of my own design, incorporates a novel behaviour algorithm at the level of the individual that includes aspects not found in other simulators. The tool allows programmed simulations of groups of individuals whose behaviour is based on the analysis of the environment and on the presence of dynamic references. It includes other important innovations, such as tools for signage analysis, dynamic signage elements and simple insertion of obstacles. Moreover, a tool has been created to implement movement of the scenario itself, simulating its oscillation in order to reflect the influence of the vessel's motion on the displacement of the individuals. 5. In an advanced stage of the development, the possibility of generating a video recording of the whole simulation was also integrated; from that moment on, all tests have been documented (and continue to be documented) in a database that records the characteristics of each simulation, the problems detected, etc. These stored tests amounted to a total of 81 GB of data at the time the writing of the Thesis was closed. 6. Generation and analysis of routes based on the evacuation network created with e-Flow Net Design and the simulations performed with e-Flow Simulator. a. Analysis for optimising the network configuration based on the existing nodes per area. b. Definition of the processes prior to the calculation of the possible routes. c. Route calculation: i. Analysis of the different algorithms currently available for route optimisation. ii. Development of a new family of algorithms that I have called the "Minimum Decision Algorithm (MDA)" family, composed of: 1. Basic MDA. 2. Minimum MDA. 3. Non-spatial-interference MDA. 4. Expansion MDA. 5. Ordered-expansion MDA for a single origin. 6. Ordered-expansion MDA. iii. All these algorithms have been implemented in the e-Flow application created in this Thesis for route analysis, which constitutes the core of the Captain's Support System. d. Determination of the alternatives for the evacuation plan. After the possible routes have been defined, existing criteria-weighting analysis processes are described, before defining the method developed in this Thesis, whose objective is to determine, from the population of possible routes obtained with the MDA algorithms, which combination of routes constitutes the most suitable Plan or Plans for each situation. The methodology created for selecting the combinations of routes that make up a complete Plan is based on four basic criteria which, once applied, offer the best alternatives. This stage also includes a complex analysis of temporal evolution that introduces novel definitions and formulations. e. Derived from the methodology created in this Thesis for the temporal evolution analysis, a new mathematical theorem has been defined, named the "Family of quadrilaterals of constant area". 7. Specification of the physical infrastructure of the Captain's Support System: a fundamental part of the system is the physical infrastructure on which it rests, composed of sensors, actuators, applications for mobile devices, etc. This chapter analyses the different elements and technologies involved. 8. Specification of the services infrastructure. 9. Creation of the blog
" Virtual Environments (http://epcinnova-virtualenvironments.blogspot.com.es/)" in which all tests performed have been published in terms of analysis of the virtual enviroments as alternative to the simulators as well as to the laboratory experiments or simulations, including in most of the cases the possibility that the visitor can perform the simulation within the virtual enviroment. This blog also includes other sections that have been worked along and within this Thesis: - Collection of different virtual scenarios existent. - Schema that gathers information concerning accidents for maritime and terrestrial areas (under development) - Schema proposed for the collecting of information obtained from a simulation. 10. This Thesis is the basis of the E-Flow project (name of one of the applications developed in this work), a project in which the Thesis' author has worked in as Project Manager. In the project takes part a consortium of firms as well as the UPM and the aim is to bring to real life most part of the approaches and ideas contained in this Thesis. This project includes the development of the physical infrastructure as well as the services that will allow, among others, implement in complex infrastrucutres a platform that will make possible a dynamic evacuation and a continuous control of the monitoring and acting systems implemented. At the moment the project is getting to an end which goal is the implementation of a pilot project in a Hospital. We also would like to highlight the following advances concerning the scientific-technology divulgation: • Talk in the " 52th Congress of the Naval Engineering in Spain" with the article "E-Flow . Intelligent system integrated for supporting evacuation". This paper is about project E-Flow which I am Project Manager of, as well as this Thesis for the Doctorate, being both closely related. Two papers published In 2014 in the Naval Engineering Magazine. • Co-author in the article “E-Flow: A communication system for user notification in dynamic evacuation scenarios” [17] introduced in the 7th International Conference on Ubicuous Computing & Ambient Intelligence (UCAMI) held in Costa Rica. Last, but not least, one of the more interesting contributions is the defintion of several lines of research in the future, based on the advances made in this Thesis.

Relevance:

30.00%

Publisher:

Abstract:

Systematic evaluation of Learning Objects is essential to make high-quality Web-based education possible. For this reason, several educational repositories and e-Learning systems have developed their own evaluation models and tools. However, the differences between the contexts in which Learning Objects are produced and consumed suggest that no single evaluation model is sufficient for all scenarios. Besides, little effort has been put into developing open tools that facilitate Learning Object evaluation and use the resulting quality information for the benefit of end users. This paper presents LOEP, an open source web platform that aims to facilitate Learning Object evaluation in different scenarios and educational settings by supporting and integrating several evaluation models and quality metrics. The work presented in this paper shows that LOEP is capable of providing Learning Object evaluation to e-Learning systems in an open, low-cost, reliable and effective way. Possible scenarios where LOEP could be used to implement quality control policies and to enhance search engines are also described. Finally, we report the results of a survey conducted among reviewers who used LOEP, showing that they perceived LOEP as a powerful and easy-to-use tool for evaluating Learning Objects.
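As a minimal illustration of how per-criterion ratings can be aggregated into a single quality score, the sketch below uses a plain weighted average; the criteria, weights and formula are hypothetical and do not correspond to LOEP's actual evaluation models.

```python
def weighted_quality_score(criterion_scores, weights):
    """Aggregate per-criterion ratings (e.g. 1-5) into a single quality score.

    Purely illustrative: LOEP's evaluation models and quality metrics are
    more elaborate; the criterion names and weights below are hypothetical.
    """
    total_weight = sum(weights[c] for c in criterion_scores)
    return sum(criterion_scores[c] * weights[c] for c in criterion_scores) / total_weight

ratings = {"content_quality": 4, "reusability": 3, "presentation_design": 5}
weights = {"content_quality": 0.5, "reusability": 0.3, "presentation_design": 0.2}
print(round(weighted_quality_score(ratings, weights), 2))  # 3.9
```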

Relevance:

30.00%

Publisher:

Abstract:

Shading reduces the power output of a photovoltaic (PV) system. The design engineering of PV systems therefore requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation packages include the possibility of evaluating shading losses. They generally provide a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses. However, the complexity of the objects that these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows evaluating the energy losses induced by complex 3D scenes on PV generators. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by this 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the recent development of GPUs for the video game industry, the shadows can be evaluated with a very high spatial resolution, reaching well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly in a Web browser without requiring any local installation by the user. This also makes it possible to take full advantage of information already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading. We then illustrate this methodology with several application cases encountered in PV system design. Keywords: 3D, modeling, simulation, GPU, shading, losses, shadow mapping, solar, photovoltaic, PV, WebGL
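As a first-order illustration of the final step, in which geometric shading is translated into energy losses, the sketch below applies a purely linear model to hypothetical irradiance and shading-fraction series; 3DPV's PV simulation model is more detailed, since cell-level mismatch makes real losses non-linear.

```python
import numpy as np

def shading_energy_loss(irradiance, shaded_fraction):
    """First-order estimate of the PV energy lost to shading.

    `irradiance` and `shaded_fraction` are per-time-step arrays (e.g. hourly
    W/m2 and 0-1 geometric shading of the generator). A linear model is
    assumed here for illustration only.
    """
    irradiance = np.asarray(irradiance, dtype=float)
    shaded_fraction = np.asarray(shaded_fraction, dtype=float)
    unshaded_energy = irradiance.sum()
    shaded_energy = (irradiance * (1.0 - shaded_fraction)).sum()
    return 1.0 - shaded_energy / unshaded_energy

# Hypothetical clear-sky morning: the generator is partly shaded early on.
g = [100, 300, 500, 700, 800]          # W/m2
shade = [0.6, 0.4, 0.1, 0.0, 0.0]      # geometric shading fraction
print(f"relative energy loss = {shading_energy_loss(g, shade):.1%}")
```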

Relevance:

30.00%

Publisher:

Abstract:

With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Considering this volume of applications, it is quite evident that people rely on them on a daily basis for the completion of simple tasks, like keeping track of the weather, as well as rather complex tasks, like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the enormous number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and also for various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work on developing different testing methods for Android applications in the Computer Science community. The Android model attracts researchers because of its open source nature: the whole research model becomes more streamlined when the code for both the application and the platform is readily available to analyze. Consequently, there has been a great deal of research in testing and static analysis of Android applications, much of it focused on input test generation. As a result, several testing tools are now available that focus on automatic generation of test cases for Android applications. These tools differ from one another on the basis of the strategies and heuristics they use for the generation of test cases. But there is still very little work on the comparison of these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. It was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific and attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform. They can be anything ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools based on performance with respect to these attributes. This will allow the comparison of these tools attribute by attribute. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that triggers the execution of this audio file?
Using multiple applications targeting different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools. Later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used are all specially made for this research. The reason for doing so is to analyze in isolation just the specific attribute the application focuses on, and not allow the tool to get bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are: • A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications. • A detailed study of the usage of these testing tools using the 9 applications specially designed and developed for this study. • The analysis of the results of the study carried out, and a comparison of the performance of the selected tools.
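A possible shape for such an attribute-oriented benchmark harness is sketched below; the tool commands, application names and log markers are all hypothetical, since the abstract does not name the three tools or the nine applications.

```python
import subprocess

# Hypothetical tool launch scripts and single-attribute apps; the real thesis
# used three existing test-input generation tools and nine purpose-built
# applications, whose names are not given in the abstract.
TOOLS = {"tool_a": ["./run_tool_a.sh", "{apk}"],
         "tool_b": ["./run_tool_b.sh", "{apk}"]}
APPS = {"sms_receive": "apps/sms_receive.apk",
        "audio_playback": "apps/audio_playback.apk"}

def attribute_triggered(tool_output, marker):
    """Each benchmark app logs a unique marker when its attribute executes."""
    return marker in tool_output

def run_benchmark():
    results = {}
    for tool, cmd in TOOLS.items():
        for attribute, apk in APPS.items():
            proc = subprocess.run([c.format(apk=apk) for c in cmd],
                                  capture_output=True, text=True, timeout=3600)
            results[(tool, attribute)] = attribute_triggered(
                proc.stdout, f"ATTRIBUTE_OK:{attribute}")
    return results
```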

Relevance:

30.00%

Publisher:

Abstract:

Este proyecto fin de grado presenta dos herramientas, Papify y Papify-Viewer, para medir y visualizar, respectivamente, las prestaciones a bajo nivel de especificaciones RVC-CAL basándose en eventos hardware. RVC-CAL es un lenguaje de flujo de datos estandarizado por MPEG y utilizado para definir herramientas relacionadas con la codificación de vídeo. La estructura de los programas descritos en RVC-CAL se basa en unidades funcionales llamadas actores, que a su vez se subdividen en funciones o procedimientos llamados acciones. ORCC (Open RVC-CAL Compiler) es un compilador de código abierto que utiliza como entrada descripciones RVC-CAL y genera a partir de ellas código fuente en un lenguaje dado, como por ejemplo C. Internamente, el compilador ORCC se divide en tres etapas distinguibles: front-end, middle-end y back-end. La implementación de Papify consiste en modificar la etapa del back-end del compilador, encargada de la generación de código, de modo tal que los actores, al ser traducidos a lenguaje C, queden instrumentados con PAPI (Performance Application Programing Interface), una herramienta utilizada como interfaz a los registros contadores de rendimiento (PMC) de los procesadores. Además, también se modifica el front-end para permitir identificar cierto tipo de anotaciones en las descripciones RVC-CAL, utilizadas para que el diseñador pueda indicar qué actores o acciones en particular se desean analizar. Los actores instrumentados, además de conservar su funcionalidad original, generan una serie de ficheros que contienen datos sobre los distintos eventos hardware que suceden a lo largo de su ejecución. Los eventos incluidos en estos ficheros son configurables dentro de las anotaciones previamente mencionadas. La segunda herramienta, Papify-Viewer, utiliza los datos generados por Papify y los procesa, obteniendo una representación visual de la información a dos niveles: por un lado, representa cronológicamente la ejecución de la aplicación, distinguiendo cada uno de los actores a lo largo de la misma. Por otro lado, genera estadísticas sobre la cantidad de eventos disparados por acción, actor o núcleo de ejecución y las representa mediante gráficos de barra. Ambas herramientas pueden ser utilizadas en conjunto para verificar el funcionamiento del programa, balancear la carga de los actores o la distribución por núcleos de los mismos, mejorar el rendimiento y diagnosticar problemas. ABSTRACT. This diploma project presents two tools, Papify and Papify-Viewer, used to measure and visualize the low level performance of RVC-CAL specifications based on hardware events. RVC-CAL is a dataflow language standardized by MPEG which is used to define video codec tools. The structure of the applications described in RVC-CAL is based on functional units called actors, which are in turn divided into smaller procedures called actions. ORCC (Open RVC-CAL Compiler) is an open-source compiler capable of transforming RVC-CAL descriptions into source code in a given language, such as C. Internally, the compiler is divided into three distinguishable stages: front-end, middle-end and back-end. Papify’s implementation consists of modifying the compiler’s back-end stage, which is responsible for generating the final source code, so that translated actors in C code are now instrumented with PAPI (Performance Application Programming Interface), a tool that provides an interface to the microprocessor’s performance monitoring counters (PMC). 
In addition, the front-end is also modified so as to recognize a certain type of annotation in the RVC-CAL descriptions, allowing the designer to select the actors or actions to be included in the measurement. Besides preserving their initial behavior, the instrumented actors also generate a set of files containing data about the different events triggered throughout the program's execution. The events included in these files can be configured inside the previously mentioned annotations. The second tool, Papify-Viewer, uses the files generated by Papify to process them and provide a visual representation of the information in two different ways: on the one hand, a chronological representation of the application's execution where each actor has its own timeline; on the other hand, statistical information about the number of events triggered per action, actor or core. Both tools can be used together to verify the correct functioning of the program, balance the load between actors or cores, improve performance and identify problems.
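As an illustration of the kind of post-processing Papify-Viewer performs, the sketch below aggregates hardware-event counts per actor and draws a bar chart; the file name and column layout are assumptions, not Papify's actual output format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical layout of a Papify output file: one row per action firing,
# with the actor, action, core and the hardware-event counts recorded by
# PAPI (the real file format is defined by the tool, not reproduced here).
events = pd.read_csv("papify_output.csv")  # columns: actor, action, core, PAPI_TOT_INS, PAPI_L1_DCM, ...

# Total events per actor, in the spirit of Papify-Viewer's bar charts.
per_actor = events.groupby("actor")[["PAPI_TOT_INS", "PAPI_L1_DCM"]].sum()
per_actor.plot.bar()
plt.ylabel("hardware events")
plt.tight_layout()
plt.savefig("events_per_actor.png")
```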

Relevance:

30.00%

Publisher:

Abstract:

En este Proyecto Fin de Grado se ha realizado un estudio de cómo generar, a partir de modelos de flujo de datos en RVC-CAL (Reconfigurable Video Coding – CAL Actor Language), modelos VHDL (Versatile Hardware Description Language) mediante Vivado HLS (Vivado High Level Synthesis), incluida en las herramientas disponibles en Vivado de Xilinx. Una vez conseguido el modelo VHDL resultante, la intención es que mediante las herramientas de Xilinx se programe en una FPGA (Field Programmable Gate Array) o el dispositivo Zynq también desarrollado por Xilinx. RVC-CAL es un lenguaje de flujo de datos que describe la funcionalidad de bloques funcionales, denominados actores. Las funcionalidades que desarrolla un actor se definen como acciones, las cuales pueden ser diferentes en un mismo actor. Los actores pueden comunicarse entre sí y formar una red de actores o network. Con Vivado HLS podemos obtener un diseño VHDL a partir de un modelo en lenguaje C. Por lo que la generación de modelos en VHDL a partir de otros en RVC-CAL, requiere una fase previa en la que los modelos en RVC-CAL serán compilados para conseguir su equivalente en lenguaje C. El compilador ORCC (Open RVC-CAL Compiler) es la herramienta que nos permite lograr diseños en lenguaje C partiendo de modelos en RVC-CAL. ORCC no crea directamente el código ejecutable, sino que genera un código fuente disponible para ser compilado por otra herramienta, en el caso de este proyecto, el compilador GCC (Gnu C Compiler) de Linux. En resumen en este proyecto nos encontramos con tres puntos de estudio bien diferenciados, los cuales son: 1. Partimos de modelos de flujo de datos en RVC-CAL, los cuales son compilados por ORCC para alcanzar su traducción en lenguaje C. 2. Una vez conseguidos los diseños equivalentes en lenguaje C, son sintetizados en Vivado HLS para conseguir los modelos en VHDL. 3. Los modelos VHDL resultantes serian manipulados por las herramientas de Xilinx para producir el bitstream que sea programado en una FPGA o en el dispositivo Zynq. En el estudio del segundo punto, nos encontramos con una serie de elementos conflictivos que afectan a la síntesis en Vivado HLS de los diseños en lenguaje C generados por ORCC. Estos elementos están relacionados con la manera que se encuentra estructurada la especificación en C generada por ORCC y que Vivado HLS no puede soportar en determinados momentos de la síntesis. De esta manera se ha propuesto una transformación “manual” de los diseños generados por ORCC que afecto lo menos posible a los modelos originales para poder realizar la síntesis con Vivado HLS y crear el fichero VHDL correcto. De esta forma este documento se estructura siguiendo el modelo de un trabajo de investigación. En primer lugar, se exponen las motivaciones y objetivos que apoyan y se esperan lograr en este trabajo. Seguidamente, se pone de manifiesto un análisis del estado del arte de los elementos necesarios para el desarrollo del mismo, proporcionando los conceptos básicos para la correcta comprensión y estudio del documento. Se realiza una descripción de los lenguajes RVC-CAL y VHDL, además de una introducción de las herramientas ORCC y Vivado, analizando las bondades y características principales de ambas. 
Una vez conocido el comportamiento de ambas herramientas, se describen las soluciones desarrolladas en nuestro estudio de la síntesis de modelos en RVC-CAL, poniéndose de manifiesto los puntos conflictivos anteriormente señalados que Vivado HLS no puede soportar en la síntesis de los diseños en lenguaje C generados por el compilador ORCC. A continuación se presentan las soluciones propuestas a estos errores acontecidos durante la síntesis, con las cuales se pretende alcanzar una especificación en C más óptima para una correcta síntesis en Vivado HLS y alcanzar de esta forma los modelos VHDL adecuados. Por último, como resultado final de este trabajo se extraen un conjunto de conclusiones sobre todos los análisis y desarrollos acontecidos en el mismo. Al mismo tiempo se proponen una serie de líneas futuras de trabajo con las que se podría continuar el estudio y completar la investigación desarrollada en este documento. ABSTRACT. In this Project it has made a study of how to generate, from data flow models in RVC-CAL (Reconfigurable Video Coding - Actor CAL Language), VHDL models (Versatile Hardware Description Language) by Vivado HLS (Vivado High Level Synthesis), included in the tools available in Vivado of Xilinx. Once achieved the resulting VHDL model, the intention is that by the Xilinx tools programmed in FPGA or Zynq device also developed by Xilinx. RVC-CAL is a dataflow language that describes the functionality of functional blocks, called actors. The functionalities developed by an actor are defined as actions, which may be different in the same actor. Actors can communicate with each other and form a network of actors. With Vivado HLS we can get a VHDL design from a model in C. So the generation of models in VHDL from others in RVC-CAL requires a preliminary phase in which the models RVC-CAL will be compiled to get its equivalent in C. The compiler ORCC (Open RVC-CAL Compiler) is the tool that allows us to achieve designs in C language models based on RVC-CAL. ORCC not directly create the executable code but generates an available source code to be compiled by another tool, in the case of this project, the GCC compiler (GNU C Compiler) of Linux. In short, in this project we find three well-defined points of study, which are: 1. We start from data flow models in RVC-CAL, which are compiled by ORCC to achieve its translation in C. 2. Once you realize the equivalent designs in C, they are synthesized in Vivado HLS for VHDL models. 3. The resulting models VHDL would be manipulated by Xilinx tools to produce the bitstream that is programmed into an FPGA or Zynq device. In the study of the second point, we find a number of conflicting elements that affect the synthesis Vivado HLS designs in C generated by ORCC. These elements are related to the way it is structured specification in C generated ORCC and Vivado HLS cannot hold at certain times of the synthesis. Thus it has proposed a "manual" transformation of designs generated by ORCC that affected as little as possible to the original in order to perform the synthesis Vivado HLS and create the correct file VHDL models. Thus this document is structured along the lines of a research. First, the motivations and objectives that support and hope to reach in this work are presented. Then it shows an analysis the state of the art of the elements necessary for its development, providing the basics for a correct understanding and study of the document. 
A description of the RVC-CAL and VHDL languages is given, together with an introduction to the ORCC and Vivado tools, analysing the advantages and main features of both. Once the behaviour of both tools is known, the solutions developed in our study of the synthesis of RVC-CAL models are described, highlighting the conflicting points mentioned above that Vivado HLS cannot handle when synthesizing the C designs generated by the ORCC compiler. The solutions proposed for these errors encountered during synthesis are then presented; their purpose is to obtain a C specification better suited to a correct synthesis in Vivado HLS and thus to produce the appropriate VHDL models. Finally, as the end result of this work, a set of conclusions is drawn from all the analyses and developments carried out. At the same time, a series of future lines of work is proposed with which the study could be continued and the research developed in this document completed.

Relevance:

30.00%

Publisher:

Abstract:

Acknowledgements We would like to thank Erik Rexstad and Rob Williams for useful reviews of this manuscript. The collection of visual and acoustic data was funded by the UK Department of Energy & Climate Change, the Scottish Government, Collaborative Offshore Wind Research into the Environment (COWRIE) and Oil & Gas UK. Digital aerial surveys were funded by Moray Offshore Renewables Ltd and additional funding for analysis of the combined datasets was provided by Marine Scotland. Collaboration between the University of Aberdeen and Marine Scotland was supported by MarCRF. We thank colleagues at the University of Aberdeen, Moray First Marine, NERI, Hi-Def Aerial Surveying Ltd and Ravenair for essential support in the field, particularly Tim Barton, Bill Ruck, Rasmus Nielson and Dave Rutter. Thanks also to Andy Webb, David Borchers, Len Thomas, Kelly McLeod, David L. Miller, Dinara Sadykova and Thomas Cornulier for advice on survey design and statistical approaches. Data Accessibility Data are available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.cf04g

Relevance:

30.00%

Publisher:

Abstract:

Context. Several clusters of red supergiants have been discovered in a small region of the Milky Way close to the base of the Scutum-Crux Arm and the tip of the Long Bar. Population synthesis models indicate that they must be very massive to harbour so many supergiants. Amongst these clusters, Stephenson 2, with a core grouping of 26 red supergiants, is a strong candidate to be the most massive young cluster in the Galaxy. Aims. Stephenson 2 is located close to a region where a strong over-density of red supergiants had been found. We explore the actual cluster size and its possible connection to this over-density. Methods. Taking advantage of Virtual Observatory tools, we have performed a cross-match between the DENIS, USNO-B1 and 2MASS catalogues to identify candidate obscured luminous red stars around Stephenson 2, and in a control nearby region. More than 600 infrared bright stars fulfill our colour criteria, with the vast majority having a counterpart in the I band and >400 being sufficiently bright in I to allow observation with a 4-m class telescope. We observed a subsample of ~250 stars, using the multi-object, wide-field, fibre spectrograph AF2 on the WHT telescope in La Palma, obtaining intermediate-resolution spectroscopy in the 7500–9000 Å range. We derived spectral types and luminosity classes for all these objects and measured their radial velocities. Results. Our targets turned out to be G and K supergiants, late (≥ M4) M giants, and M-type bright giants (luminosity class II) and supergiants. We found ~35 red supergiants with radial velocities similar to Stephenson 2 members, spread over the two areas surveyed. In addition, we found ~40 red supergiants with radial velocities incompatible in principle with a physical association. Conclusions. Our results show that Stephenson 2 is not an isolated cluster, but part of a huge structure likely containing hundreds of red supergiants, with radial velocities compatible with the terminal velocity at this Galactic longitude (and a distance ~6 kpc). In addition, we found evidence of several populations of massive stars at different distances along this line of sight.
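A minimal sketch of the kind of positional cross-match described above, using astropy on hypothetical local extracts of two of the catalogues (the file and column names are assumptions):

```python
from astropy.coordinates import SkyCoord
from astropy.table import Table
import astropy.units as u

# Hypothetical local copies of the 2MASS and DENIS source lists for the
# field around Stephenson 2; column names are assumptions.
tmass = Table.read("2mass_field.fits")
denis = Table.read("denis_field.fits")

c_tmass = SkyCoord(ra=tmass["ra"] * u.deg, dec=tmass["dec"] * u.deg)
c_denis = SkyCoord(ra=denis["ra"] * u.deg, dec=denis["dec"] * u.deg)

# Nearest-neighbour match; keep pairs closer than 1 arcsec.
idx, sep2d, _ = c_tmass.match_to_catalog_sky(c_denis)
good = sep2d < 1.0 * u.arcsec
matched_2mass = tmass[good]
matched_denis = denis[idx[good]]
print(f"{good.sum()} cross-matched sources")
```

Photometric colour criteria, as used in the paper to select obscured luminous red stars, would then be applied to the matched table.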

Relevance:

30.00%

Publisher:

Abstract:

A wealth of open educational resources (OER) focused on green topics is currently available through a variety of sources, including learning portals, digital repositories and web sites. However, in most cases these resources are not easily accessible and retrievable, and additional issues further complicate their discovery and use. This paper presents an overview of a number of portals hosting OER, as well as a number of "green" thematic portals that provide access to green OER. It also discusses the case of a new collection that aims to support and populate existing green collections and learning portals, providing information on aspects such as quality assurance, collection and curation policies, and the workflow and tools applied to both the content and the metadata records of the collection. Two case studies of the integration of this new collection into existing learning portals are also presented.

Relevance:

30.00%

Publisher:

Abstract:

The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these kinds of sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, a detailed set of experiments is carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction.
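As an illustration of the building block shared by most of the registration methods compared in the paper, the sketch below computes the rigid transform that best aligns two sets of corresponding 3-D points; full RGB-D registration pipelines add correspondence search, outlier rejection and, in some methods, color information.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst.

    Both inputs are (N, 3) arrays of corresponding 3-D points. This SVD
    (Kabsch) step is the core of one ICP iteration; it is not any specific
    method evaluated in the paper.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```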

Relevance:

30.00%

Publisher:

Abstract:

Background: The pupillary light reflex characterizes the direct and consensual response of the eye to the perceived brightness of a stimulus. It has been used as an indicator of both neurological and optic nerve pathologies. As with other eye reflexes, this reflex constitutes an almost instantaneous movement and is linked to activation of the same midbrain area. The latency of the pupillary light reflex is around 200 ms, although the literature also indicates that the fastest eye reflexes last 20 ms. Therefore, a system with sufficiently high spatial and temporal resolution is required for accurate assessment. In this study, we analyzed the pupillary light reflex to determine whether any small discrepancy exists between the direct and consensual responses, and to ascertain whether any other eye reflex occurs before the pupillary light reflex. Methods: We constructed a binocular video-oculography system with two high-speed cameras that simultaneously focused on both eyes. This system was then employed to assess the direct and consensual responses of each eye using our own algorithm, based on the Circular Hough Transform, to detect and track the pupil. Time parameters describing the pupillary light reflex were obtained from the time variation of the pupil radius. Eight healthy subjects (4 women, 4 men, aged 24–45) participated in this experiment. Results: Our system, which has a resolution of 15 microns and 4 ms, obtained time parameters describing the pupillary light reflex that were similar to those reported in previous studies, with no significant differences between direct and consensual reflexes. Moreover, it revealed an incomplete reflex blink and an upward eye movement at around 100 ms that may correspond to Bell's phenomenon. Conclusions: Direct and consensual pupillary responses do not show any significant temporal differences. The system and method described here could prove useful for further assessment of pupillary and blink reflexes. The resolution obtained made it possible to reveal the early incomplete blink and upward eye movement reported here.
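As an illustration of pupil detection with a circular Hough transform, the sketch below estimates the pupil radius in a single grayscale frame using OpenCV; the parameter values are illustrative and do not reproduce the authors' algorithm or resolution.

```python
import cv2

def pupil_radius(frame_gray):
    """Estimate the pupil radius in one frame with a circular Hough transform.

    A simplified stand-in for the authors' own detection and tracking
    algorithm: blur, then look for one circular contour. Thresholds and
    radius limits below are illustrative, not the study's values.
    """
    blurred = cv2.medianBlur(frame_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=60, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0, 0]          # strongest candidate
    return float(r)

# Applying this per frame to both eye videos yields the radius-versus-time
# curves from which the latency and constriction parameters of the reflex
# can be measured.
```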