968 results for Visual Basic (Programming Language)
Abstract:
Recently, experts and practitioners in language resources have started recognizing the benefits of the linked data (LD) paradigm for the representation and exploitation of linguistic data on the Web. The adoption of LD principles is leading to an emerging ecosystem of multilingual open resources that conform to the Linguistic Linked Open Data Cloud, in which datasets of linguistic data are interconnected and represented following common vocabularies, which facilitates the discovery, integration, and access of linguistic information. To contribute to this initiative, this paper summarizes several key aspects of the representation of linguistic information as linked data from a practical perspective. The main goal of this document is to provide the basic ideas and tools for migrating language resources (lexicons, corpora, etc.) to LD on the Web and for developing some useful NLP tasks with them (e.g., word sense disambiguation). This material was the basis of a tutorial given at the EKAW'14 conference, which is also reported in the paper.
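As a minimal illustration of the representation idea (not taken from the tutorial itself), the sketch below builds a few RDF triples for a lexical entry using the W3C OntoLex-Lemon vocabulary, which is widely used in the Linguistic Linked Open Data cloud, and serializes them by hand as N-Triples. The entry URI and lemma are invented for illustration.

```python
# Minimal sketch: one lexical entry as RDF triples, serialized as
# N-Triples with the standard library only. The entry URI and lemma
# are invented; the ontolex vocabulary (W3C OntoLex-Lemon) is real.

ONTOLEX = "http://www.w3.org/ns/lemon/ontolex#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def ntriple(s, p, o, literal=False, lang=None):
    """Format one triple in N-Triples syntax."""
    if literal:
        obj = f'"{o}"@{lang}' if lang else f'"{o}"'
    else:
        obj = f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

entry = "http://example.org/lexicon/bank-n"      # hypothetical entry URI
form = "http://example.org/lexicon/bank-n#form"  # hypothetical form URI

triples = [
    ntriple(entry, RDF_TYPE, ONTOLEX + "LexicalEntry"),
    ntriple(entry, ONTOLEX + "canonicalForm", form),
    ntriple(form, ONTOLEX + "writtenRep", "bank", literal=True, lang="en"),
]

print("\n".join(triples))
```

Because every resource is identified by a URI and described with a shared vocabulary, another dataset can link to the same entry directly, which is what enables the interconnection the paper describes.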
Abstract:
Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size, and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis focuses on one important field of application: spotlights. Spotlights are used to focus light on defined areas and on outstanding objects in professional settings. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiency, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI; to collimate the light into the desired narrow spots with a defined angle of emission, tunable whites, several light colors, and secondary optics are used. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be resolved in the optical design. However, perfect uniformity in the spot is not needed, owing to thresholds in the visual perception of the human eye. Therefore, a mathematical description of the level of color uniformity with regard to visual perception is required.

This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided this research, Chapter 2 introduces the scientific basics of color uniformity in spotlights, including: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires different handling of the data to be taken into account, along with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for estimating the perceived color uniformity of spotlights is derived. It is based on the results of two sets of human-factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first human-factor experiment yielded a perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic functions and the weighted function of the spatial color distribution, leading to the Usl function. The second human-factor experiment tested the perception of spotlights under varied ambient conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function, with respect to its application range and conditions as well as its limitations and restrictions, is carried out in Chapter 5. Measured and simulated data of several optical systems were compared; fields of application are discussed, along with validations and restrictions of the function. Chapter 6 presents spotlight system design and optimization. An evaluation analyzes reflector-based and TIR-lens systems. The simulated optical systems are compared in terms of Usl color uniformity, sensitivity to colored shadows, efficiency, and peak luminous intensity. It was found that no single system performs best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of this thesis and gives an outlook on further research topics.
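The basic functions of Chapter 3, such as the maximum color difference, build on the CIELAB color difference. As a minimal sketch (the sample values below are invented, and the thesis' weighting and pre-processing steps are omitted), the CIE 1976 ΔE*ab distance and a maximum-difference descriptor over a set of spot samples can be written as:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference Delta E*ab between two CIELAB points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def max_color_difference(points):
    """Maximum pairwise Delta E*ab over a set of (L*, a*, b*) samples,
    one simple descriptor of spatial color non-uniformity."""
    return max(
        delta_e_ab(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

# Hypothetical CIELAB samples across a spot pattern (invented values):
samples = [(70.0, 5.0, 10.0), (70.0, 5.5, 9.0), (69.0, 4.0, 12.0)]
print(round(max_color_difference(samples), 3))
```

A single worst-case number like this ignores where in the spot the deviation occurs, which is precisely why the thesis combines several such functions with perception-based weighting into Usl.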
Abstract:
This project presents a technical report on the Leap Motion camera and its corresponding Software Development Kit; the Leap Motion is a device with a depth camera aimed at human-machine interfaces. The purpose is to develop a human-machine interface based on a hand gesture recognition system. After an exhaustive study of the Leap Motion camera, several example programs were written to verify the capabilities described in the technical report, exercising the Application Programming Interface and evaluating the precision of the different measurements obtained from the camera data. Finally, a prototype of a gesture recognition system is developed. The fingertip position and orientation data obtained from the Leap Motion are used to describe a gesture by means of a descriptor vector, which is fed to a Support Vector Machine used as a multi-class classifier.
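The descriptor construction step can be sketched as follows. This is a simplified, hypothetical feature layout (the abstract does not specify the actual features): fingertip positions are expressed relative to the palm centre and flattened into one fixed-length vector, which would then be passed to the SVM classifier.

```python
def gesture_descriptor(palm, fingertips):
    """Flatten fingertip positions, expressed relative to the palm
    centre, into a fixed-length descriptor vector. Hypothetical layout;
    a real system would likely add orientation and scale normalization."""
    desc = []
    for tip in fingertips:
        desc.extend(t - p for t, p in zip(tip, palm))
    return desc

# Invented sample frame: palm centre and five fingertip positions (mm).
palm = (0.0, 150.0, 0.0)
tips = [(-40.0, 190.0, -20.0), (-15.0, 210.0, -30.0), (0.0, 215.0, -32.0),
        (15.0, 210.0, -30.0), (35.0, 195.0, -25.0)]

vector = gesture_descriptor(palm, tips)
print(len(vector))  # 5 fingertips x 3 coordinates = 15 features
```

Expressing the coordinates relative to the palm makes the descriptor invariant to where the hand is held over the sensor, which matters for any classifier trained on such vectors.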
Abstract:
Video quality assessment is still necessary to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video and formats beyond high definition, impose new criteria that must be analyzed to obtain the highest possible user satisfaction. Among the problems examined during this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and a variety of content types. First, the content generation process should be controlled through parameters that prevent visual discomfort, and consequently visual fatigue, especially for stereoscopic 3D content, both animation and live action. On the other hand, quality assessment in the video compression phase employs metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention maps allows image areas to be weighted so that more importance is given to the pixels the user is most likely to focus on. These two fields of work are related through the definition of the term saliency: the capacity of the visual system to characterize a viewed image, highlighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on other features, such as motion, level of detail, pixel position, or the presence of faces, which are the basic factors composing the visual attention model developed in this thesis.

To detect the characteristics of a stereoscopic video sequence that are most likely to generate visual discomfort, the extensive literature on the topic was reviewed and preliminary subjective tests with users were performed. These led to the conclusion that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, apart from other degradations such as the so-called "window violation". New subjective tests, focused on analyzing these effects with different depth distributions, were used to pin down the parameters defining such images. The results show that abrupt changes occur in environments with motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the focusing time of the crystalline lens. To improve quality metrics through models adapted to the human visual system, additional subjective tests were performed to determine the importance of each factor in masking a given degradation. The results show a slight improvement when applying weighting and visual-attention masks, which bring the objective quality parameters closer to the response of the human eye.
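The idea of weighting an objective metric with a visual attention mask can be sketched as a saliency-weighted MSE/PSNR. All values below are invented, and the thesis' actual weighting functions are more elaborate; this only shows how attended pixels come to dominate the score.

```python
import math

def weighted_psnr(ref, dist, saliency, peak=255.0):
    """PSNR where each pixel's squared error is weighted by a saliency
    mask, so errors in attended regions count more (simplified sketch)."""
    num = sum(w * (a - b) ** 2 for a, b, w in zip(ref, dist, saliency))
    den = sum(saliency)
    wmse = num / den
    return 10.0 * math.log10(peak ** 2 / wmse)

# Toy 4-pixel example: same pixel errors, different saliency weights.
ref  = [100.0, 120.0, 140.0, 160.0]
dist = [102.0, 118.0, 150.0, 160.0]
flat = [1.0, 1.0, 1.0, 1.0]   # uniform weights: plain PSNR
mask = [0.1, 0.1, 1.0, 0.1]   # attention focused on the worst pixel

print(round(weighted_psnr(ref, dist, flat), 2))
print(round(weighted_psnr(ref, dist, mask), 2))
```

With the mask concentrated on the pixel carrying the largest error, the weighted score drops below the uniform one, reflecting that a distortion in a salient region is more visible to the viewer.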
Abstract:
This final degree project presents two tools, Papify and Papify-Viewer, used to measure and visualize, respectively, the low-level performance of RVC-CAL specifications based on hardware events. RVC-CAL is a dataflow language standardized by MPEG and used to define video coding tools. The structure of applications described in RVC-CAL is based on functional units called actors, which are in turn divided into smaller functions or procedures called actions. ORCC (Open RVC-CAL Compiler) is an open-source compiler that takes RVC-CAL descriptions as input and generates source code in a given language, such as C. Internally, the compiler is divided into three distinguishable stages: front-end, middle-end, and back-end. Papify's implementation consists of modifying the compiler's back-end stage, which is responsible for generating the final source code, so that actors translated into C are instrumented with PAPI (Performance Application Programming Interface), a tool that provides an interface to the processor's performance monitoring counters (PMCs). In addition, the front-end is modified to recognize a certain type of annotation in the RVC-CAL descriptions, allowing the designer to indicate which actors or actions should be included in the measurement. Besides preserving their original behavior, the instrumented actors generate a set of files containing data about the different hardware events triggered throughout the program's execution. The events included in these files can be configured in the previously mentioned annotations.

The second tool, Papify-Viewer, processes the data generated by Papify and provides a visual representation of the information at two levels: on the one hand, a chronological representation of the application's execution, distinguishing each of the actors along its own timeline; on the other, statistics on the number of events triggered per action, actor, or execution core, represented as bar charts. Both tools can be used together to verify the correct functioning of the program, balance the load between actors or their distribution across cores, improve performance, and diagnose problems.
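Papify's instrumentation is injected into generated C code, but the underlying pattern (wrap each annotated action, collect measurements around every call, and log them per action) can be sketched in Python with a decorator. The "event" recorded here is just wall-clock time as a stand-in; the real tool reads hardware counters through PAPI, and the action name used is hypothetical.

```python
import functools
import time
from collections import defaultdict

# Per-action event log. The stand-in "event" is elapsed time in ns;
# Papify records hardware counter values via PAPI instead.
event_log = defaultdict(list)

def instrument(action_name):
    """Wrap an actor's action so each call records a measurement,
    mirroring the wrapper code Papify injects into the generated C."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter_ns()
            result = fn(*args, **kwargs)
            event_log[action_name].append(time.perf_counter_ns() - start)
            return result
        return wrapper
    return decorate

@instrument("decoder.parse_header")   # hypothetical actor.action name
def parse_header(data):
    return sum(data)

for _ in range(3):
    parse_header([1, 2, 3])

print(len(event_log["decoder.parse_header"]))  # 3 calls recorded
```

The key property, shown by `functools.wraps` and the pass-through of the return value, is that the action keeps its original behavior while the measurements accumulate on the side, exactly the requirement stated above for the instrumented actors.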
Abstract:
En el futuro, la gestión del tráfico aéreo (ATM, del inglés air traffic management) requerirá un cambio de paradigma, de la gestión principalmente táctica de hoy, a las denominadas operaciones basadas en trayectoria. Un incremento en el nivel de automatización liberará al personal de ATM —controladores, tripulación, etc.— de muchas de las tareas que realizan hoy. Las personas seguirán siendo el elemento central en la gestión del tráfico aéreo del futuro, pero lo serán mediante la gestión y toma de decisiones. Se espera que estas dos mejoras traigan un incremento en la eficiencia de la gestión del tráfico aéreo que permita hacer frente al incremento previsto en la demanda de transporte aéreo. Para aplicar el concepto de operaciones basadas en trayectoria, el usuario del espacio aéreo (la aerolínea, piloto, u operador) y el proveedor del servicio de navegación aérea deben negociar las trayectorias mediante un proceso de toma de decisiones colaborativo. En esta negociación, es necesaria una forma adecuada de compartir dichas trayectorias. Compartir la trayectoria completa requeriría un gran ancho de banda, y la trayectoria compartida podría invalidarse si cambiase la predicción meteorológica. En su lugar, podría compartirse una descripción de la trayectoria independiente de las condiciones meteorológicas, de manera que la trayectoria real se pudiese calcular a partir de dicha descripción. Esta descripción de la trayectoria debería ser fácil de procesar usando un programa de ordenador —ya que parte del proceso de toma de decisiones estará automatizado—, pero también fácil de entender para un operador humano —que será el que supervise el proceso y tome las decisiones oportunas—. Esta tesis presenta una serie de lenguajes formales que pueden usarse para este propósito. 
Estos lenguajes proporcionan los medios para describir trayectorias de aviones durante todas las fases de vuelo, desde la maniobra de push-back (remolcado hasta la calle de rodaje), hasta la llegada a la terminal del aeropuerto de destino. También permiten describir trayectorias tanto de aeronaves tripuladas como no tripuladas, incluyendo aviones de ala fija y cuadricópteros. Algunos de estos lenguajes están estrechamente relacionados entre sí, y organizados en una jerarquía. Uno de los lenguajes fundamentales de esta jerarquía, llamado aircraft intent description language (AIDL), ya había sido desarrollado con anterioridad a esta tesis. Este lenguaje fue derivado de las ecuaciones del movimiento de los aviones de ala fija, y puede utilizarse para describir sin ambigüedad trayectorias de este tipo de aeronaves. Una variante de este lenguaje, denominada quadrotor AIDL (QR-AIDL), ha sido desarrollada en esta tesis para permitir describir trayectorias de cuadricópteros con el mismo nivel de detalle. Seguidamente, otro lenguaje, denominado intent composite description language (ICDL), se apoya en los dos lenguajes anteriores, ofreciendo más flexibilidad para describir algunas partes de la trayectoria y dejar otras sin especificar. El ICDL se usa para proporcionar descripciones genéricas de maniobras comunes, que después se particularizan y combinan para formar descripciones complejas de un vuelo. Otro lenguaje puede construirse a partir del ICDL, denominado flight intent description language (FIDL). El FIDL especifica requisitos de alto nivel sobre las trayectorias —incluyendo restricciones y objetivos—, pero puede utilizar características del ICDL para proporcionar niveles de detalle arbitrarios en las distintas partes de un vuelo. Tanto el ICDL como el FIDL han sido desarrollados en colaboración con Boeing Research & Technology Europe (BR&TE). 
También se ha desarrollado un lenguaje para definir misiones en las que interactúan varias aeronaves, el mission intent description language (MIDL). Este lenguaje se basa en el FIDL y mantiene todo su poder expresivo, a la vez que proporciona nuevas semánticas para describir tareas, restricciones y objetivos relacionados con la misión. En ATM, los movimientos de un avión en la superficie de aeropuerto también tienen que ser monitorizados y gestionados. Otro lenguaje formal ha sido diseñado con este propósito, llamado surface movement description language (SMDL). Este lenguaje no pertenece a la jerarquía de lenguajes descrita en el párrafo anterior, y se basa en las clearances (autorizaciones del controlador) utilizadas durante las operaciones en superficie de aeropuerto. También proporciona medios para expresar incertidumbre y posibilidad de cambios en las distintas partes de la trayectoria. Finalmente, esta tesis explora las aplicaciones de estos lenguajes a la predicción de trayectorias y a la planificación de misiones. El concepto de trajectory language processing engine (TLPE) se usa en ambas aplicaciones. Un TLPE es una función de ATM cuya principal entrada y salida se expresan en cualquiera de los lenguajes incluidos en la jerarquía descrita en esta tesis. El proceso de predicción de trayectorias puede definirse como una combinación de TLPEs, cada uno de los cuales realiza una pequeña sub-tarea. Se le ha dado especial importancia a uno de estos TLPEs, que se encarga de generar el perfil horizontal, vertical y de configuración de la trayectoria. En particular, esta tesis presenta un método novedoso para la generación del perfil vertical. El proceso de planificar una misión también se puede ver como un TLPE donde la entrada se expresa en MIDL y la salida consiste en cierto número de trayectorias —una por cada aeronave disponible— descritas utilizando FIDL. Se ha formulado este problema utilizando programación entera mixta. 
Además, dado que encontrar caminos óptimos entre distintos puntos es un problema fundamental en la planificación de misiones, también se propone un algoritmo de búsqueda de caminos. Este algoritmo permite calcular rápidamente caminos cuasi-óptimos que esquivan todos los obstáculos en un entorno urbano. Los diferentes lenguajes formales definidos en esta tesis pueden utilizarse como una especificación estándar para la difusión de información entre distintos actores de la gestión del tráfico aéreo. En conjunto, estos lenguajes permiten describir trayectorias con el nivel de detalle necesario en cada aplicación, y se pueden utilizar para aumentar el nivel de automatización explotando esta información utilizando sistemas de soporte a la toma de decisiones. La aplicación de estos lenguajes a algunas funciones básicas de estos sistemas, como la predicción de trayectorias, han sido analizadas. ABSTRACT Future air traffic management (ATM) will require a paradigm shift from today’s mainly tactical ATM to trajectory-based operations (TBOs). An increase in the level of automation will also relieve humans —air traffic control officers (ATCOs), flight crew, etc.— from many of the tasks they perform today. Humans will still be central in this future ATM, as decision-makers and managers. These two improvements (TBOs and increased automation) are expected to provide the increase in ATM performance that will allow coping with the expected increase in air transport demand. Under TBOs, trajectories are negotiated between the airspace user (an airline, pilot, or operator) and the air navigation service provider (ANSP) using a collaborative decision making (CDM) process. A suitable method for sharing aircraft trajectories is necessary for this negotiation. Sharing a whole trajectory would require a high amount of bandwidth, and the shared trajectory might become invalid if the weather forecast changed. 
Instead, a description of the trajectory, decoupled from the weather conditions, could be shared, so that the actual trajectory could be computed from this trajectory description. This trajectory description should be easy to process using a computing program —as some of the CDM processes will be automated— but also easy to understand for a human operator —who will be supervising the process and making decisions. This thesis presents a series of formal languages that can be used for this purpose. These languages provide the means to describe aircraft trajectories during all phases of flight, from push back to arrival at the gate. They can also describe trajectories of both manned and unmanned aircraft, including fixedwing and some rotary-wing aircraft (quadrotors). Some of these languages are tightly interrelated and organized in a language hierarchy. One of the key languages in this hierarchy, the aircraft intent description language (AIDL), had already been developed prior to this thesis. This language was derived from the equations of motion of fixed-wing aircraft, and can provide an unambiguous description of fixed-wing aircraft trajectories. A variant of this language, the quadrotor AIDL (QR-AIDL), is developed in this thesis to allow describing a quadrotor aircraft trajectory with the same level of detail. Then, the intent composite description language (ICDL) is built on top of these two languages, providing more flexibility to describe some parts of the trajectory while leaving others unspecified. The ICDL is used to provide generic descriptions of common aircraft manoeuvres, which can be particularized and combined to form complex descriptions of flight. Another language is built on top of the ICDL, the flight intent description language (FIDL). The FIDL specifies high-level requirements on trajectories —including constraints and objectives—, but can use features of the ICDL to provide arbitrary levels of detail in different parts of the flight. 
The ICDL and FIDL have been developed in collaboration with Boeing Research & Technology Europe (BR&TE). Also, the mission intent description language (MIDL) has been developed to allow describing missions involving multiple aircraft. This language is based on the FIDL and keeps all its expressive power, while it also provides new semantics for describing mission tasks, mission objectives, and constraints involving several aircraft. In ATM, the movement of aircraft while on the airport surface also has to be monitored and managed. Another formal language has been designed for this purpose, denoted surface movement description language (SMDL). This language does not belong to the language hierarchy described above, and it is based on the clearances used in airport surface operations. Means to express uncertainty and mutability of different parts of the trajectory are also provided. Finally, the applications of these languages to trajectory prediction and mission planning are explored in this thesis. The concept of trajectory language processing engine (TLPE) is used in these two applications. A TLPE is an ATM function whose main input and output are expressed in any of the languages in the hierarchy described in this thesis. A modular trajectory predictor is defined as a combination of multiple TLPEs, each of them performing a small subtask. Special attention is given to the TLPE that builds the horizontal, vertical, and configuration profiles of the trajectory. In particular, a novel method for the generation of the vertical profile is presented. The process of planning a mission can also be seen as a TLPE, where the main input is expressed in the MIDL and the output consists of a number of trajectory descriptions —one for each aircraft available in the mission— expressed in the FIDL. A mixed integer linear programming (MILP) formulation for the problem of assigning mission tasks to the available aircraft is provided. 
In addition, since finding optimal paths between locations is a key problem in mission planning, a novel path-finding algorithm is presented. This algorithm can compute near-shortest paths avoiding all obstacles in an urban environment in very short computation times. The several formal languages described in this thesis can serve as a standard specification for sharing trajectory information among different actors in ATM. In combination, these languages can describe trajectories with the necessary level of detail for any application, and can be used to increase automation by exploiting this information in decision support tools (DSTs). Their application to some basic functions of DSTs, such as trajectory prediction, has been analyzed.
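The thesis's own path-finding algorithm is not reproduced in the abstract; as a rough point of reference, the classic A* search solves the same kind of problem. A minimal Python sketch (the grid, coordinates, and obstacle layout are all invented for illustration) that finds a shortest path around obstacles on a 4-connected grid:

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid; grid[y][x] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (priority, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        y, x = node
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == 0:
                heapq.heappush(frontier, (cost + 1 + h((ny, nx)), cost + 1,
                                          (ny, nx), path + [(ny, nx)]))
    return None  # goal unreachable

# A small "urban block" grid: 1s are buildings to route around.
grid = [[0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
path = astar(grid, (0, 0), (3, 3))
```

With an admissible heuristic such as Manhattan distance, A* guarantees a shortest path; near-shortest algorithms like the one described above trade that guarantee for lower computation times.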
Resumo:
Several languages have been proposed for describing networks of systems, whether to help manage or simulate them, or to deploy testbeds for testing purposes. However, none is specifically designed to describe honeynets, covering the specific characteristics, in terms of the applications and tools included in the honeypot systems, that make up the honeynet. In this paper, the requirements of honeynet description are studied and a survey of existing description languages is presented, concluding that CIM (Common Information Model) matches the basic requirements. Thus, a CIM-like, technology-independent honeynet description language (TIHDL) is proposed. The language is defined independently of the platform where the honeynet will later be deployed, and it can be translated, either using model-driven techniques or other translation mechanisms, into the description languages of honeynet deployment platforms and tools. This approach gives the flexibility to combine heterogeneous deployment platforms. Besides, a flexible virtual honeynet generation tool (HoneyGen), based on the proposed approach and description language and capable of deploying honeynets over the VNX (Virtual Networks over LinuX) and Honeyd platforms, is presented for validation purposes.
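The TIHDL grammar itself is not given in the abstract; the translation idea can be illustrated with a toy Python sketch, where a platform-neutral description (field names are hypothetical, not the actual TIHDL schema) is rendered into a Honeyd-style configuration:

```python
# Illustrative only: a platform-neutral honeynet description (hypothetical
# field names) rendered into a Honeyd-style configuration.
honeynet = {
    "honeypots": [
        {"name": "web01", "personality": "Linux 2.4.20",
         "ip": "10.0.0.10", "open_tcp_ports": [22, 80]},
    ]
}

def to_honeyd(desc):
    """Translate the neutral description into Honeyd-flavoured config lines."""
    lines = []
    for hp in desc["honeypots"]:
        lines.append(f'create {hp["name"]}')
        lines.append(f'set {hp["name"]} personality "{hp["personality"]}"')
        for port in hp["open_tcp_ports"]:
            lines.append(f'add {hp["name"]} tcp port {port} open')
        lines.append(f'bind {hp["ip"]} {hp["name"]}')
    return "\n".join(lines)

config = to_honeyd(honeynet)
```

A second translator targeting VNX XML could consume the same neutral description, which is the platform-independence argument made above.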
Resumo:
Working memory is the process of actively maintaining a representation of information for a brief period of time so that it is available for use. In monkeys, visual working memory involves the concerted activity of a distributed neural system, including posterior areas in visual cortex and anterior areas in prefrontal cortex. Within visual cortex, ventral stream areas are selectively involved in object vision, whereas dorsal stream areas are selectively involved in spatial vision. This domain specificity appears to extend forward into prefrontal cortex, with ventrolateral areas involved mainly in working memory for objects and dorsolateral areas involved mainly in working memory for spatial locations. The organization of this distributed neural system for working memory in monkeys appears to be conserved in humans, though some differences between the two species exist. In humans, as compared with monkeys, areas specialized for object vision in the ventral stream have a more inferior location in temporal cortex, whereas areas specialized for spatial vision in the dorsal stream have a more superior location in parietal cortex. Displacement of both sets of visual areas away from the posterior perisylvian cortex may be related to the emergence of language over the course of brain evolution. Whereas areas specialized for object working memory in humans and monkeys are similarly located in ventrolateral prefrontal cortex, those specialized for spatial working memory occupy a more superior and posterior location within dorsal prefrontal cortex in humans than in monkeys. As in posterior cortex, this displacement in frontal cortex also may be related to the emergence of new areas to serve distinctively human cognitive abilities.
Resumo:
Cerebral organization during sentence processing in English and in American Sign Language (ASL) was characterized by employing functional magnetic resonance imaging (fMRI) at 4 T. Effects of deafness, age of language acquisition, and bilingualism were assessed by comparing results from (i) normally hearing, monolingual, native speakers of English, (ii) congenitally, genetically deaf, native signers of ASL who learned English late and through the visual modality, and (iii) normally hearing bilinguals who were native signers of ASL and speakers of English. All groups, hearing and deaf, processing their native language, English or ASL, displayed strong and repeated activation within classical language areas of the left hemisphere. Deaf subjects reading English did not display activation in these regions. These results suggest that the early acquisition of a natural language is important in the expression of the strong bias for these areas to mediate language, independently of the form of the language. In addition, native signers, hearing and deaf, displayed extensive activation of homologous areas within the right hemisphere, indicating that the specific processing requirements of the language also in part determine the organization of the language systems of the brain.
Resumo:
Abundant research has shown that poverty negatively influences young children's academic and psychosocial development, and unfortunately, disparities in school readiness between low- and high-income children can be seen as early as the first year of life. The largest federal early care and education intervention for these vulnerable children is Early Head Start (EHS). To diminish these disparate child outcomes, EHS seeks to provide community-based, flexible programming for infants and toddlers and their families. Given how recently these programs have been offered, little is known about the nuances of how EHS impacts infant and toddler language and psychosocial development. Using a framework of Community Based Participatory Research (CBPR), this paper had five goals: 1) to characterize the associations between domain-specific and cumulative risk and child outcomes; 2) to validate and explore these risk-outcome associations separately for children of Hispanic immigrants (COHIs); 3) to explore relationships among family characteristics, multiple environmental factors, and dosage patterns in different EHS program types; 4) to examine the relationship between EHS dosage and child outcomes; and 5) to examine how EHS compliance impacts child internalizing and externalizing behaviors and emerging language abilities. Results of the current study showed that risks were differentially related to child outcomes. Poor maternal mental health was related to child internalizing and externalizing behaviors, but not to emerging child language skills. Although child language skills were not related to maternal mental health, they were related to economic hardship. Additionally, parent-level Spanish use and heritage orientation were associated with positive child outcomes. Results also showed that these relationships differed when COHIs and children with native-born parents were examined separately. 
Further, unique patterns emerged in EHS program use; for example, families who participated in home-based care were less likely to comply with EHS attendance requirements. These findings provide tangible suggestions for EHS stakeholders: namely, the need to develop effective programming that targets engagement for the diverse families enrolled in EHS programs.
Resumo:
In recent years, several explanatory models have been developed which attempt to analyse the predictive worth of various factors in relation to academic achievement, as well as the direct and indirect effects that they produce. The aim of this study was to examine a structural model incorporating various cognitive and motivational variables which influence student achievement in the two basic core skills in the Spanish curriculum: Spanish Language and Mathematics. These variables included differential aptitudes, specific self-concept, goal orientations, effort and learning strategies. The sample comprised 341 Spanish students in their first year of Compulsory Secondary Education. Various tests and questionnaires were used to assess each student, and Structural Equation Modelling (SEM) was employed to study the relationships in the initial model. The proposed model obtained a satisfactory fit for the two subjects studied, and all the relationships hypothesised were significant. The variable with the most explanatory power regarding academic achievement was mathematical and verbal aptitude. Also notable was the direct influence of specific self-concept on achievement, goal orientation and effort, as was the mediating effect of effort and learning strategies between academic goals and final achievement.
Resumo:
In this paper we describe Fénix, a data model for exchanging information between Natural Language Processing applications. The proposed format is intended to be flexible enough to cover both current and future data structures employed in the field of Computational Linguistics. The Fénix architecture is divided into four separate layers: conceptual, logical, persistence and physical. This division provides a simple interface that abstracts users from low-level implementation details, such as the programming languages and data storage employed, allowing them to focus on the concepts and processes to be modelled. The Fénix architecture is accompanied by a set of programming libraries to facilitate access to and manipulation of the structures created in this framework. We will also show how this architecture has already been successfully applied in different research projects.
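The actual Fénix libraries are not reproduced in the abstract, but the four-layer separation can be sketched in a few lines of Python (all class and method names are illustrative, not the real Fénix API):

```python
# Minimal sketch of the layered idea described above; names are illustrative.
import json

class Annotation:                      # conceptual layer: what users model
    def __init__(self, span, label):
        self.span, self.label = span, label

class AnnotationStore:                 # logical layer: operations on concepts
    def __init__(self, backend):
        self.backend = backend         # persistence layer hidden behind an interface
    def add(self, ann):
        self.backend.save({"span": list(ann.span), "label": ann.label})
    def all(self):
        return [Annotation(tuple(d["span"]), d["label"]) for d in self.backend.load()]

class JsonBackend:                     # persistence/physical layer: storage details
    def __init__(self):
        self._rows = []
    def save(self, row):
        self._rows.append(json.dumps(row))
    def load(self):
        return [json.loads(r) for r in self._rows]

store = AnnotationStore(JsonBackend())
store.add(Annotation((0, 5), "NOUN"))
labels = [a.label for a in store.all()]
```

Swapping `JsonBackend` for a database-backed class would change the physical layer without touching the conceptual or logical layers, which is the abstraction benefit the paper claims.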
Resumo:
The aim of the project is to determine whether the understanding of the language of Mathematics among students starting university is conducive to the development of an appropriate cognitive structure. The objective of the current work was to analyse the ability of first-year university students to translate between the registers of verbal or written expressions and their representations and the registers of algebraic language. Results indicate that students do not understand the basic elements of the language of Mathematics, which causes them to make numerous errors of construction and interpretation. The students were not able to associate concepts with definitions and were unable to offer examples.
Resumo:
It is widely accepted that developing the ability to solve problems is fundamental. Computational thinking is based on solving problems using fundamental concepts of computer science. There is nothing better for developing the ability to solve problems with computing concepts than an introductory programming course. This work presents our reflections on how to introduce a student to the field of computer programming. The work does not detail the contents to be taught; rather, it focuses on methodological aspects, including experiences and examples that are both concrete and general, extensible to any teaching of programming. In general, although languages ever closer to human language are being developed, programming computers using formal languages is not an intuitive subject that students find easy to grasp. To someone who already knows how to program it seems a simple task, but not to the novice. What is more, mastering the art of programming is complex. For this reason it is indispensable to use every technique and tool available that can ease this task.
Resumo:
This thesis explores the role of multimodality in language learners’ comprehension and, more specifically, the effects on students’ audio-visual comprehension when different orchestrations of modes appear in vodcasts. Firstly, I describe the state of the art of its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions in the theoretical overview is the suggested integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: ‘Which modes are orchestrated throughout the vodcasts?’, ‘Are there any multimodal ensembles that are more beneficial for students’ audio-visual comprehension?’, and ‘What are the students’ attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?’. Along with these research questions, I have formulated two hypotheses: audio-visual comprehension improves when there is a greater number of orchestrated modes, and students have a more positive attitude towards vodcasts than towards traditional audios when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows that there is a variety of multimodal ensembles of two, three and four modes. The audio-visual comprehension tests were given to 40 Spanish students, learning English as a foreign language, after viewing the vodcasts. These comprehension tests contain questions related to specific orchestrations of modes appearing in the vodcasts. 
The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles are constituted by a greater number of orchestrated modes. Finally, the data compiled from the questionnaires show that students have a more positive attitude towards vodcasts than towards traditional audio listening. Results from the audio-visual comprehension tests and questionnaires support the two hypotheses of this study.
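The study's data are not reproduced here, but the one-way repeated-measures F statistic underlying such an analysis can be illustrated from first principles. A minimal Python sketch on made-up comprehension scores (rows are students, columns are three hypothetical mode ensembles):

```python
# One-way repeated-measures ANOVA F statistic, computed from scratch on
# invented comprehension scores; not the study's real data or analysis code.
scores = [
    [55, 62, 70],   # each row: one student's score under 3 mode ensembles
    [48, 58, 66],
    [60, 64, 75],
    [52, 60, 71],
]
n, k = len(scores), len(scores[0])          # subjects, conditions
grand = sum(map(sum, scores)) / (n * k)
cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
subj_means = [sum(row) / k for row in scores]

ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
ss_total = sum((x - grand) ** 2 for row in scores for x in row)
ss_error = ss_total - ss_cond - ss_subj     # residual after removing subject effects

df_cond, df_error = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
```

Because each student is measured under every condition, between-student variability (`ss_subj`) is removed from the error term, which is what gives the repeated-measures design its sensitivity compared to an independent-groups ANOVA.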