816 results for Embedded System, Domain Specific Language (DSL), Agenti BDI, Arduino, Agentino
Abstract:
Several theories assume that successful team coordination is partly based on knowledge that helps anticipate the individual contributions a situational task requires. It has been argued that a more ecological perspective needs to be considered in contexts that evolve dynamically and unpredictably. In football, defensive plays are usually coordinated according to strategic concepts spanning all members and large areas of the playfield. Offensive plays, on the other hand, involve fewer people, as they are harder to plan in advance and strongly constrained by ecological characteristics. The aim of this study is to test the effects of ecological constraints and player knowledge on decision making in offensive game scenarios. It is hypothesized that both knowledge about team members and situational constraints influence decisional processes, with the effects of situational constraints expected to be of higher magnitude. Two teams playing in the fourth league of the Swiss Football Federation are participating in the study. Forty customized game scenarios were developed based on the coaches' information about player positions and game strategies. Each player was shown in ball possession four times. Participants were asked to take the perspective of the player on the ball and to choose a passing destination and a recipient. Participants then rated domain-specific strengths (e.g., technical skills, game intelligence) of each of their teammates. Multilevel models for categorical dependent variables (team members) will be specified. Player knowledge (rated skills) and ecological constraints (operationalized as each player's proximity and availability for ball reception) are included as predictor variables. Data are currently being collected. Results will yield effects of parameters that are stable across situations as well as of variable parameters that are bound to the situational context. These will provide insight into the degree to which ecological constraints and more enduring team knowledge are involved in decisional processes aimed at coordinating interpersonal action.
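Since the planned analysis is only sketched in the abstract, the following is a hedged illustration of the kind of multilevel multinomial model such a design might specify; the predictor names and the random-effect structure are assumptions, not taken from the study:

```latex
% Probability that participant i, in game scenario j, chooses teammate k as
% pass recipient (multinomial logit with a participant-level random effect):
\[
P(Y_{ij}=k) \;=\; \frac{\exp(\eta_{ijk})}{\sum_{m=1}^{K}\exp(\eta_{ijm})},
\qquad
\eta_{ijk} \;=\; \beta_{0k}
  + \beta_{1}\,\mathrm{skill}_{ik}
  + \beta_{2}\,\mathrm{proximity}_{jk}
  + \beta_{3}\,\mathrm{availability}_{jk}
  + u_{ik},
\qquad u_{ik}\sim\mathcal{N}(0,\sigma_{u}^{2}).
\]
```

The skill term would carry the stable player-knowledge effect, while the proximity and availability terms carry the situational ecological constraints, matching the stable-versus-variable parameter distinction described above.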
Abstract:
Mesozooplankton is collected by vertical tows within the Black Sea water mass layer in the NE Aegean, using a WP-2 200 µm net equipped with a large non-filtering cod-end (10 l). Macrozooplankton organisms are removed using a 2000 µm net. A few unsorted animals (approximately 100) are placed inside several 250 ml glass beakers filled with GF/F or 0.2 µm Nucleopore filtered seawater, with a 100 µm net placed 1 cm above the beaker bottom. Beakers are then placed in an incubator under natural light and at in situ temperature. After 1 hour, pellets are separated from the animals, placed in separate flasks, and preserved with formalin. Pellets are counted and measured using an inverted microscope. Animals are scanned and counted using an image analysis system. Carbon-specific faecal pellet production is calculated from a) faecal pellet production; b) individual carbon: animals are scanned and their body area is measured using an image analysis system; body volume is then calculated as an ellipsoid using the major and minor axes of an ellipse of the same area as the body; individual carbon is obtained by applying a carbon to total body volume relationship (obtained for the Mediterranean Sea by Alcaraz et al. (2003)) and dividing by the total number of individuals scanned; and c) faecal pellet carbon: faecal pellet length and width are measured using an inverted microscope, and faecal pellet volume is calculated from length and width assuming a cylindrical shape. Conversion of faecal pellet volume to carbon is done using values obtained in the Mediterranean: a) faecal pellet density of 1.29 g cm⁻³ (or pg µm⁻³) from Komar et al. (1981); b) faecal pellet DW/WW = 0.23 from Elder and Fowler (1977); and c) faecal pellet C%DW = 25.5 from Marty et al. (1994).
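The pellet carbon conversion above is simple enough to state as a worked example; the following minimal sketch applies the stated constants, with the pellet dimensions invented for illustration:

```python
import math

# Constants quoted above (Mediterranean values).
PELLET_DENSITY_G_PER_UM3 = 1.29e-12  # 1.29 g cm^-3 = 1.29 pg um^-3 (Komar et al., 1981)
DW_PER_WW = 0.23                     # dry weight / wet weight (Elder and Fowler, 1977)
C_FRACTION_OF_DW = 0.255             # carbon as a fraction of dry weight (Marty et al., 1994)

def pellet_carbon_g(length_um: float, width_um: float) -> float:
    """Carbon content (g) of one faecal pellet, assuming a cylindrical shape."""
    volume_um3 = math.pi * (width_um / 2.0) ** 2 * length_um  # cylinder: pi * r^2 * L
    wet_weight_g = volume_um3 * PELLET_DENSITY_G_PER_UM3      # volume -> wet weight
    dry_weight_g = wet_weight_g * DW_PER_WW                   # wet -> dry weight
    return dry_weight_g * C_FRACTION_OF_DW                    # dry weight -> carbon

# Example with invented dimensions: a 200 um x 40 um pellet.
print(f"{pellet_carbon_g(200, 40):.3e} g C per pellet")
```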
Abstract:
The SES_UNLUATA_GR1-Mesozooplankton faecal pellet production rates dataset is based on samples taken during March and April 2008 in the Northern Libyan Sea, the Southern Aegean Sea, and the North-Eastern Aegean Sea. Mesozooplankton is collected by vertical tows within the 0-100 m layer, or within the Black Sea water mass layer in the case of the NE Aegean, using a WP-2 200 µm net equipped with a large non-filtering cod-end (10 l). Macrozooplankton organisms are removed using a 2000 µm net. A few unsorted animals (approximately 100) are placed inside several 250 ml glass beakers filled with GF/F or 0.2 µm Nucleopore filtered seawater, with a 100 µm net placed 1 cm above the beaker bottom. Beakers are then placed in an incubator under natural light and at in situ temperature. After 1 hour, pellets are separated from the animals, placed in separate flasks, and preserved with formalin. Pellets are counted and measured using an inverted microscope. Animals are scanned and counted using an image analysis system. Carbon-specific faecal pellet production is calculated from a) faecal pellet production; b) individual carbon: animals are scanned and their body area is measured using an image analysis system; body volume is then calculated as an ellipsoid using the major and minor axes of an ellipse of the same area as the body; individual carbon is obtained by applying a carbon to total body volume relationship (obtained for the Mediterranean Sea by Alcaraz et al. (2003)) and dividing by the total number of individuals scanned; and c) faecal pellet carbon: faecal pellet length and width are measured using an inverted microscope, and faecal pellet volume is calculated from length and width assuming a cylindrical shape. Conversion of faecal pellet volume to carbon is done using values obtained in the Mediterranean: a) faecal pellet density of 1.29 g cm⁻³ (or pg µm⁻³) from Komar et al. (1981); b) faecal pellet DW/WW = 0.23 from Elder and Fowler (1977); and c) faecal pellet C%DW = 25.5 from Marty et al. (1994).
Abstract:
The SES_GR2-Mesozooplankton faecal pellet production rates dataset is based on samples taken during August and September 2008 in the Northern Libyan Sea, the Southern Aegean Sea, and the North-Eastern Aegean Sea. Mesozooplankton is collected by vertical tows within the 0-100 m layer, or within the Black Sea water mass layer in the case of the NE Aegean, using a WP-2 200 µm net equipped with a large non-filtering cod-end (10 l). Macrozooplankton organisms are removed using a 2000 µm net. A few unsorted animals (approximately 100) are placed inside several 250 ml glass beakers filled with GF/F or 0.2 µm Nucleopore filtered seawater, with a 100 µm net placed 1 cm above the beaker bottom. Beakers are then placed in an incubator under natural light and at in situ temperature. After 1 hour, pellets are separated from the animals, placed in separate flasks, and preserved with formalin. Pellets are counted and measured using an inverted microscope. Animals are scanned and counted using an image analysis system. Carbon-specific faecal pellet production is calculated from a) faecal pellet production; b) individual carbon: animals are scanned and their body area is measured using an image analysis system; body volume is then calculated as an ellipsoid using the major and minor axes of an ellipse of the same area as the body; individual carbon is obtained by applying a carbon to total body volume relationship (obtained for the Mediterranean Sea by Alcaraz et al. (2003)) and dividing by the total number of individuals scanned; and c) faecal pellet carbon: faecal pellet length and width are measured using an inverted microscope, and faecal pellet volume is calculated from length and width assuming a cylindrical shape. Conversion of faecal pellet volume to carbon is done using values obtained in the Mediterranean: a) faecal pellet density of 1.29 g cm⁻³ (or pg µm⁻³) from Komar et al. (1981); b) faecal pellet DW/WW = 0.23 from Elder and Fowler (1977); and c) faecal pellet C%DW = 25.5 from Marty et al. (1994).
Abstract:
The SES_GR1-Mesozooplankton faecal pellet production rates dataset is based on samples taken during April 2008 in the North-Eastern Aegean Sea. Mesozooplankton is collected by vertical tows within the Black Sea water mass layer in the NE Aegean, using a WP-2 200 µm net equipped with a large non-filtering cod-end (10 l). Macrozooplankton organisms are removed using a 2000 µm net. A few unsorted animals (approximately 100) are placed inside several 250 ml glass beakers filled with GF/F or 0.2 µm Nucleopore filtered seawater, with a 100 µm net placed 1 cm above the beaker bottom. Beakers are then placed in an incubator under natural light and at in situ temperature. After 1 hour, pellets are separated from the animals, placed in separate flasks, and preserved with formalin. Pellets are counted and measured using an inverted microscope. Animals are scanned and counted using an image analysis system. Carbon-specific faecal pellet production is calculated from a) faecal pellet production; b) individual carbon: animals are scanned and their body area is measured using an image analysis system; body volume is then calculated as an ellipsoid using the major and minor axes of an ellipse of the same area as the body; individual carbon is obtained by applying a carbon to total body volume relationship (obtained for the Mediterranean Sea by Alcaraz et al. (2003)) and dividing by the total number of individuals scanned; and c) faecal pellet carbon: faecal pellet length and width are measured using an inverted microscope, and faecal pellet volume is calculated from length and width assuming a cylindrical shape. Conversion of faecal pellet volume to carbon is done using values obtained in the Mediterranean: a) faecal pellet density of 1.29 g cm⁻³ (or pg µm⁻³) from Komar et al. (1981); b) faecal pellet DW/WW = 0.23 from Elder and Fowler (1977); and c) faecal pellet C%DW = 25.5 from Marty et al. (1994).
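The individual-carbon step above combines two calculations that can be sketched briefly. The coefficients of the Alcaraz et al. (2003) carbon to body volume relationship are not given in the abstract, so a hypothetical linear relationship stands in for it below; the ellipsoid construction (third semi-axis equal to the semi-minor one) is likewise an assumption:

```python
import math

def ellipsoid_volume(major_um: float, minor_um: float) -> float:
    """Body volume (um^3) as an ellipsoid built from the major and minor axes
    of an ellipse with the same area as the scanned body. Assumption: a prolate
    ellipsoid whose third semi-axis equals the semi-minor one."""
    a, b = major_um / 2.0, minor_um / 2.0
    return (4.0 / 3.0) * math.pi * a * b * b

def individual_carbon(total_volume_um3: float, n_individuals: int,
                      carbon_from_volume) -> float:
    """Mean carbon per individual: a carbon-from-volume relationship applied to
    the total scanned body volume, divided by the number of individuals."""
    return carbon_from_volume(total_volume_um3) / n_individuals

# 100 scanned animals with invented dimensions; the lambda is a placeholder for
# the Alcaraz et al. (2003) relationship, whose coefficients are not given here.
total_volume = sum(ellipsoid_volume(1200, 400) for _ in range(100))
print(individual_carbon(total_volume, 100, lambda v: 1.0e-7 * v))
```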
Abstract:
Organisms in all domains (Archaea, Bacteria, and Eukarya) will respond to climate change with differential vulnerabilities, resulting in shifts in species distribution, coexistence, and interactions. The identification of unifying principles of organism functioning across all domains would facilitate a cause-and-effect understanding of such changes and their implications for ecosystem shifts. For example, the functional specialization of all organisms in limited temperature ranges leads us to ask whether there are unifying functional reasons. Organisms also specialize in either anoxic conditions or various oxygen ranges, with animals and plants depending on high oxygen levels. Here, we identify thermal ranges, heat limits of growth, and critically low (hypoxic) oxygen concentrations as proxies of tolerance in a meta-analysis of data available for marine organisms, with special reference to domain-specific limits. To explain the patterns and differences observed, we define and quantify a proxy for organismic complexity across species from all domains. Rising complexity causes heat (and hypoxia) tolerances to decrease from Archaea to Bacteria to uni- and then multicellular Eukarya. Within and across domains, taxon-specific tolerance limits likely reflect the ultimate evolutionary limits of a taxon's species to acclimatization and adaptation. We hypothesize that rising taxon-specific complexity in structure and function constrains organisms to narrower environmental ranges. Low complexity, as in Archaea and some Bacteria, provides life options in extreme environments. In the warmest oceans, temperature maxima reach and will surpass the permanent limits to the existence of multicellular animals, plants, and unicellular phytoplankton. Smaller, less complex unicellular Eukarya, Bacteria, and Archaea will thus benefit and predominate even more in a future warmer and hypoxic ocean.
Abstract:
The constant evolution of portable multimedia devices over the last decade means that a wide variety of devices capable of playing multimedia content is available today. Playing such content on these terminals requires processors that can sustain a high computational load, since video decoding and presentation tasks demand it. However, a powerful processor working at high frequencies causes high battery consumption, and since the aim is to work with portable devices, battery life becomes an issue of particular importance. This problem has become one of the main research lines of the GDEM Research Group (Grupo de Diseño Electrónico y Microelectrónico). Within this line of work, the goal is to optimize energy consumption in portable terminals by trading a reduction in the user's quality of experience for greater terminal autonomy. Achieving this reduction in quality of experience requires a video coding standard that supports it. The GDEM Research Group has experience with the scalable video standard H.264/SVC, which allows the quality of experience to be degraded according to the needs and characteristics of the device. More specifically, a scalable video embeds different versions of the original video that can be decoded at different resolutions, frame rates, and qualities (spatial, temporal, and quality scalability, respectively), allowing fast and very flexible adaptation. With H.264/SVC selected for the video tasks, the proposal is to work with MPlayer, an open-source video player into which a scalable video decoder called OpenSVC has been integrated. Finally, the portable device will be the BeagleBoard development platform, an embedded system based on the OMAP3530 processor, which allows the clock frequency and supply voltage to be modified dynamically, thereby reducing the terminal's consumption. This processor integrates a general-purpose processor (ARM Cortex-A8) and a digital signal processor (DSP TMS320C64+™). Given the high computational load of scalable video decoding and the ARM's limited optimization for data processing, the proposal is to run MPlayer on the ARM and hand the decoding task to the DSP, in order to reduce consumption and thus extend the battery life of the embedded system on which the developed application will run. Once this integration is complete, the decoder hosted on the DSP will be characterized through a series of performance measurements, and the results will be compared with those obtained when decoding is performed solely on the ARM. ABSTRACT. During recent years, portable multimedia terminals have gradually evolved, so that nowadays a wide range of devices able to play multimedia content is easily available to everyone. Consequently, those multimedia terminals must have high-performance processors to play such content, because the coding and decoding tasks demand a high computational load.
However, a powerful processor running at high frequencies implies higher battery consumption, and this issue has become one of the most important problems in the development cycle of a portable terminal. Power and energy consumption optimization on multimedia terminals has become one of the most significant lines of work in the Electronic and Microelectronic Research Group of the Universidad Politécnica de Madrid. In particular, the group is researching how to trade a reduction in the user's Quality of Experience (QoE) for increased battery life. In order to reduce the QoE, a video coding standard that allows this operation is required. H.264/SVC allows reducing the QoE according to the needs and characteristics of the terminal. Specifically, a scalable video contains different versions of the original video embedded in a single video stream, each of which can be decoded at different resolutions, frame rates, and qualities (spatial, temporal, and quality scalability, respectively). Once the video coding standard is selected, a multimedia player with support for scalable video is needed. MPlayer has been proposed as the multimedia player, whose characteristics (open source, great flexibility, and a scalable video decoder called OpenSVC) make it the most suitable for the aims of this Master's Thesis. Lastly, the embedded system BeagleBoard, based on the multi-core processor OMAP3530, will be the development platform used in this project. The multimedia terminal architecture is based on a commercial chip comprising a General Purpose Processor (GPP, ARM Cortex-A8) and a Digital Signal Processor (DSP, TMS320C64+™). Moreover, the OMAP3530 processor can modify its operating frequency and supply voltage dynamically in order to reduce the power consumption of the embedded system. Thus, the main goal of this Master's Thesis is the integration of the multimedia player, MPlayer, executed on the GPP, with the scalable video decoder, OpenSVC, executed on the DSP, in order to distribute the computational load associated with scalable video decoding and to reduce the power consumption of the terminal. Once the integration is accomplished, the performance of the OpenSVC decoder executed on the DSP will be measured using different combinations of scalability values. The obtained results will be compared with scalable video decoding performed on the GPP, in order to show how poorly optimized this kind of architecture is for decoding tasks, in contrast to the DSP architecture.
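As an illustration of the dynamic frequency and voltage scaling the OMAP3530 supports, here is a minimal sketch of how a userspace policy could lower the clock through the standard Linux cpufreq sysfs interface; whether these files are exposed depends on the kernel build and configuration, so treat this as an assumption rather than the thesis's actual mechanism:

```python
from pathlib import Path

# Standard Linux cpufreq sysfs location for core 0 (assumed to be available).
CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def available_frequencies() -> list[int]:
    """Frequencies (kHz) the hardware exposes as operating points."""
    text = (CPUFREQ / "scaling_available_frequencies").read_text()
    return sorted(int(f) for f in text.split())

def set_frequency(khz: int) -> None:
    """Pin the core to a fixed frequency (needs the 'userspace' governor and root)."""
    (CPUFREQ / "scaling_governor").write_text("userspace")
    (CPUFREQ / "scaling_setspeed").write_text(str(khz))

# Example policy: drop to the lowest available operating point to save power.
freqs = available_frequencies()
set_frequency(freqs[0])
```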
Abstract:
Single-core processors have reached their maximum clock speeds; new multicore architectures provide an alternative way to tackle this issue. The design of decoding applications running on top of these multicore platforms, and their optimization to exploit all of the system's computational power, is crucial to obtain the best results. Since designs at the integration level of printed circuit boards are increasingly difficult to optimize due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multi-core architectures and to find distributions that maximize the potential use of the system. Today most commercial electronic devices available on the market are built around embedded systems, and these devices have recently begun to incorporate multi-core processors. Task management across multiple cores and processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in terms of efficiency gains and processor power consumption. Scheduling the data flows between the actors that implement an application aims to harness multi-core architectures for more types of applications, with an explicit expression of parallelism in the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into new multimedia terminals for decoding video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be done once the application executable has been generated. This static mapping must be done for each of the cores available on the working platform. An embedded system with a two-core ARMv7 processor has been chosen. This platform allows us to run the desired tests, measure the improvement over execution on a single core, and contrast both with a PC-based multiprocessor system. The possibilities offered by increasing the clock frequency of single-processor systems are being exhausted. New multiprocessor architectures provide an alternative development path in this respect. The design and optimization of video decoding applications that run on the new architectures allow better use of resources and yield higher performance. Nowadays many of the commercial devices being launched on the market are built from embedded systems, which have recently come to be based on multicore architectures. Managing execution tasks on this kind of architecture is not trivial, and good scheduling of the actors that implement the functionality can provide important improvements in terms of efficient use of processor capacity and, consequently, of energy consumption. On the other hand, the recent development of the Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with previous codecs developed by MPEG. This makes RVC the ideal standard to be incorporated into the new multimedia terminals now being commercialized.
With the development of the new versions of the compiler for the RVC-CAL language (Orcc), on which MPEG RVC is based, the static mapping, for multiprocessor environments, of the actors that make up a decoder is possible. An embedded system with a two-core ARMv7 processor has been chosen. This platform will allow us to carry out the verification and comparison tests of the concepts studied in this work, regarding the development of video decoders based on MPEG RVC and the study of their scheduling and static mapping.
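Static mapping of dataflow actors onto cores can be illustrated with a simple greedy load-balancing heuristic. This is not Orcc's mapping algorithm, just a sketch; the actor names and workload estimates are hypothetical:

```python
from heapq import heappush, heappop

def greedy_static_mapping(actor_loads: dict[str, float], n_cores: int) -> dict[str, int]:
    """Assign each actor to the currently least-loaded core (longest-processing-
    time heuristic): sort actors by estimated load, then place them greedily."""
    heap = [(0.0, core) for core in range(n_cores)]  # (accumulated load, core id)
    mapping = {}
    for actor, load in sorted(actor_loads.items(), key=lambda kv: -kv[1]):
        total, core = heappop(heap)   # least-loaded core so far
        mapping[actor] = core
        heappush(heap, (total + load, core))
    return mapping

# Hypothetical per-actor workloads for a video decoder (percent of decode time).
loads = {"parser": 10, "inverse_quant": 15, "idct": 25, "motion_comp": 35, "merger": 15}
print(greedy_static_mapping(loads, n_cores=2))
```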
Abstract:
In this paper, abstract interpretation algorithms are described for computing the sharing as well as the freeness information about the run-time instantiations of program variables. An abstract domain is proposed which accurately and concisely represents combined freeness and sharing information for program variables. Abstract unification and all other domain-specific functions for an abstract interpreter working on this domain are presented. These functions are illustrated with an example. The importance of inferring freeness is stressed by showing (1) the central role it plays in non-strict goal independence, and (2) the improved accuracy it brings to the analysis of sharing information when both are computed together. Conversely, it is shown that keeping accurate track of sharing allows more precise inference of freeness, thus resulting in an overall much more powerful abstract interpreter.
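As a rough illustration of the kind of information the combined domain tracks (and emphatically not the paper's abstract unification algorithm), consider a toy variable-variable case: sharing is represented as a set of sets of program variables whose bindings may share a run-time variable, and freeness as the set of variables known to be unbound:

```python
# Toy sharing + freeness domain: `sharing` is a set of frozensets of variable
# names that may share; `free` is the set of definitely unbound variables.

def abstract_unify_var_var(sharing: set, free: set, x: str, y: str) -> tuple:
    """Abstractly unify two program variables x and y (simplified)."""
    rel_x = {s for s in sharing if x in s}
    rel_y = {s for s in sharing if y in s}
    unrelated = sharing - rel_x - rel_y
    # After x = y, anything that may share with x may now share with y too.
    merged = {a | b for a in rel_x for b in rel_y}
    new_sharing = unrelated | merged
    # Freeness: x and y remain free only if both were free before unification.
    new_free = free if (x in free and y in free) else free - {x, y}
    return new_sharing, new_free

sh = {frozenset({"X"}), frozenset({"Y"}), frozenset({"Z"})}
print(abstract_unify_var_var(sh, {"X", "Y", "Z"}, "X", "Y"))
# X and Y now share one group and both remain free; Z is unaffected.
```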
Abstract:
Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now these ontological models have not allowed representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, namely the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the development of the ontological model, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).
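A hedged sketch of how an observation over a domain-specific feature of interest might be expressed against such a modular model, using rdflib; every namespace, class, and property name below is hypothetical, since the paper's actual vocabulary is not reproduced here:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical namespaces: one infrastructure module, one coastal-flood domain module.
INFRA = Namespace("http://example.org/ssw/infrastructure#")
FLOOD = Namespace("http://example.org/ssw/coastal-flood#")

g = Graph()
g.bind("infra", INFRA)
g.bind("flood", FLOOD)

# A tide gauge (domain feature of interest) observed via the infrastructure module.
gauge = FLOOD["tideGauge42"]
obs = INFRA["observation1"]
g.add((gauge, RDF.type, FLOOD.TideGauge))
g.add((obs, RDF.type, INFRA.Observation))
g.add((obs, INFRA.observedProperty, FLOOD.seaLevel))
g.add((obs, INFRA.featureOfInterest, gauge))
g.add((obs, INFRA.hasValue, Literal("2.35", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```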
Abstract:
The educational platform Virtual Science Hub (ViSH) has been developed as part of the GLOBAL excursion European project. ViSH (http://vishub.org/) is a portal where teachers and scientists interact to create virtual excursions to science infrastructures. The main motivation behind the project was to connect teachers, and in consequence their students, to scientific institutions and the wide range of infrastructures and resources they work with. Thus the idea of a hub was born that would allow the two worlds of scientists and teachers to connect and to innovate science teaching. The core of ViSH's concept design is based on virtual excursions, which allow a number of pedagogical models to be applied. According to our internal definition, a virtual excursion is a tour through some digital context by teachers and pupils on a given topic that is attractive and has an educational purpose. Inquiry-based learning, project-based learning, and problem-based learning are the most prominent approaches that a virtual excursion may serve. The domain-specific resources and scientific infrastructures currently available on the ViSH focus on life sciences, nanotechnology, biotechnology, and grid and volunteer computing. The virtual excursion approach allows an easy combination of these resources into interdisciplinary teaching scenarios. In addition, social networking features support the users in collaborating and communicating in relation to these excursions and thus create a community of interest for innovative science teaching. The design and development phases were performed following a participatory design approach. An important aspect of this process was to create design partnerships amongst all actors involved (researchers, developers, infrastructure providers, teachers, social scientists, and pedagogical experts) early in the project. A joint sense of ownership was created, and important changes during the conceptual phase were implemented in the ViSH due to early user feedback. Technology-wise, the ViSH is based on the latest web technologies in order to make it cross-platform compatible, so that it works on several operating systems such as Windows, Mac, or Linux, and multi-device accessible, on desktop, tablet, and mobile devices. The platform has been developed in HTML5, the latest standard for web development, assuring that it can run on any modern browser. In addition to the social networking features, a core element of the ViSH is the virtual excursions editor. It is a web tool that allows teachers and scientists to create rich mash-ups of learning resources provided by the e-Infrastructures (i.e., remote laboratories and live webcams). These rich mash-ups can be presented in either slides or flashcards format. Taking advantage of the supported web architecture, additional powerful components have been integrated, such as a recommendation engine providing personalized suggestions about educational content or interesting users, and a videoconference tool to enhance real-time collaboration, like MashMeTV (http://www.mashme.tv/).
Abstract:
The core concepts, or threads, of Biosystems Engineering (BSEN) are variously understood by those within the discipline, but have never been unequivocally defined due to its early stage of development. This makes communication and teaching difficult compared to other well-established engineering subjects. Biosystems Engineering is a field of engineering which integrates engineering science and design with applied biological, environmental, and agricultural sciences. It represents an evolution of the Agricultural Engineering discipline applied to all living organisms, excluding biomedical applications. The basic key element for the emerging EU Biosystems Engineering programme of studies is to ensure that it offers the essential minimum of fundamental engineering knowledge and competences. A core curriculum developed by Erasmus Thematic Networks is used as a benchmark for Agricultural and Biosystems Engineering studies in Europe. The common basis of the core curriculum for the discipline across the Atlantic, including a minimum set of competences comprising the Biosystems Engineering core competencies, has been defined by an Atlantis project, but this needs to be taken further by defining the threads linking courses together. This paper presents a structured approach to defining the threads of BSEN. The definition of the mid-level competences and the associated learning outcomes has been one of the objectives of the Atlantis programme TABE.NET. The mid-level competences and learning outcomes for each of the six specializations of BSEN are defined, while the domain-specific knowledge to be acquired for each outcome is proposed. Once the proposed definitions are adopted, these threads will be available for the global development of BSEN.
Abstract:
We present a theoretical framework and a case study for reusing the same conceptual and computational methodology for both temporal abstraction and linear (unidimensional) space abstraction, in a domain (evaluation of traffic-control actions) significantly different from the one (clinical medicine) in which the method was originally used. The method, known as knowledge-based temporal abstraction, abstracts high-level concepts and patterns from time-stamped raw data using a formal theory of domain-specific temporal-abstraction knowledge. We applied this method, originally used to interpret time-oriented clinical data, to the domain of traffic control, in which the monitoring task requires linear pattern matching along both space and time. First, we reused the method for creation of unidimensional spatial abstractions over highways, given sensor measurements along each highway measured at the same time point. Second, we reused the method to create temporal abstractions of the traffic behavior, for the same space segments, but during consecutive time points. We defined the corresponding temporal-abstraction and spatial-abstraction domain-specific knowledge. Our results suggest that (1) the knowledge-based temporal-abstraction method is reusable over time and unidimensional space as well as over significantly different domains; (2) the method can be generalized into a knowledge-based linear-abstraction method, which solves tasks requiring abstraction of data along any linear distance measure; and (3) a spatiotemporal-abstraction method can be assembled from two copies of the generalized method and a spatial-decomposition mechanism, and is applicable to tasks requiring abstraction of time-oriented data into meaningful spatiotemporal patterns over a linear, decomposable space, such as traffic over a set of highways.
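A hedged sketch of the generalized idea, with made-up thresholds standing in for the domain-specific abstraction knowledge: raw values are first mapped to qualitative states, and runs of equal states are then merged into intervals. The same routine applies whether the linear dimension is time or position along a highway:

```python
# Simplified linear abstraction: map raw values to qualitative states, then
# merge adjacent points with the same state into intervals.

def abstract_states(values, thresholds=(30, 60)):
    """Map raw sensor values (e.g. km/h) to qualitative states; the thresholds
    are placeholders for domain-specific abstraction knowledge."""
    lo, hi = thresholds
    return ["congested" if v < lo else "slow" if v < hi else "free-flow"
            for v in values]

def merge_intervals(positions, states):
    """Collapse consecutive points with the same state into (start, end, state)."""
    intervals = []
    start = positions[0]
    for i in range(1, len(positions)):
        if states[i] != states[i - 1]:
            intervals.append((start, positions[i - 1], states[i - 1]))
            start = positions[i]
    intervals.append((start, positions[-1], states[-1]))
    return intervals

speeds = [95, 90, 45, 40, 20, 15, 80]   # sensor readings along one highway
kms = [0, 1, 2, 3, 4, 5, 6]             # sensor positions (km); could equally be times
print(merge_intervals(kms, abstract_states(speeds)))
# [(0, 1, 'free-flow'), (2, 3, 'slow'), (4, 5, 'congested'), (6, 6, 'free-flow')]
```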
Abstract:
Software is an increasingly important part of any modern electronic circuit; for example, a circuit built around some type of microprocessor must incorporate a control program, however small. When computer programs are used in modern electronic circuits, it is highly advisable, if not essential, to run a series of quality tests on the design. These tests are increasingly difficult to perform because of the large size of the software used in current systems; for this reason, it is necessary to structure a series of tests in order to produce a quality system and, in some cases, a system that poses no danger to humans or the environment. This proposal consists of an explanation of the test design techniques that currently exist (at least the most basic ones, since it is a very broad topic) for quality control of the software an embedded system may contain. Moreover, many electronic circuits, because of their control or hardware requirements, must be driven by software that requires more than a simple microprocessor; that is, they must be controlled by a small program running under an operating system, whether Linux, AIX, Unix, Windows, etc., in which case quality control should be carried out with other design techniques. It may also be the case that the electronic circuit must be controlled through a web page. The objective is to carry out a study of the current test design techniques oriented to the development of embedded systems. ABSTRACT. Software is increasingly a very important part of any modern electronic circuit; for example, a circuit made with some type of microprocessor must incorporate a control program, no matter how small. When computer programs are used in modern electronic circuits, it is quite advisable, if not indispensable, to perform a series of quality tests on the design. These tests are becoming more and more difficult to perform due to the large size of the software used in current systems, which is why it is necessary to structure a series of tests in order to produce a quality system and, in some cases, a system with no danger to humans or the environment. This proposal consists of an explanation of the test design techniques currently in use (at least the most basic ones, since it is a very large topic) for quality control of the software an embedded system may contain. In addition, many electronic circuits, due to their control or hardware requirements, must be manipulated by a program that requires more than a simple microprocessor; that is, they must be controlled by means of a small program handled by an operating system, whether Linux, AIX, Unix, Windows, etc., in which case quality control should be carried out with other design techniques. It can also occur that the electronic circuit must be controlled by means of a web page. The objective is to study the current test design techniques that are geared to the development of embedded systems.
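As an example of one of the basic test design techniques the text refers to, boundary value analysis exercises a function at, just inside, and just outside its limits. The function under test and its valid range below are hypothetical:

```python
import unittest

def set_duty_cycle(percent: int) -> int:
    """Hypothetical embedded control routine: accepts a PWM duty cycle of 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("duty cycle out of range")
    return percent

class BoundaryValueTests(unittest.TestCase):
    """Boundary value analysis: probe at, just inside, and just outside each limit."""

    def test_boundaries_accepted(self):
        for value in (0, 1, 99, 100):
            self.assertEqual(set_duty_cycle(value), value)

    def test_outside_boundaries_rejected(self):
        for value in (-1, 101):
            with self.assertRaises(ValueError):
                set_duty_cycle(value)

if __name__ == "__main__":
    unittest.main()
```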
Abstract:
Numerous fields are currently studying how to automate different tasks performed by aircraft with human crews. These tasks are in all cases very costly, owing to high fuel consumption and the high cost of acquiring and maintaining the aircraft itself, not to mention the risk to the crew. Examples of such tasks include police and border surveillance, inspection of high-voltage power lines, early warning of forest fires, and measurement of pollution parameters. The objective of this project is the design and construction of an embedded electronic prototype, based on a microcontroller with a Silicon Labs C8051 core, capable of controlling a radio-controlled aircraft transparently, so that in the future the flying model itself can be replaced, with the modification of some parameters, in order to incorporate video systems or other means of sensing variables. The prototype will follow a route prepared and transferred as a text file in a given format, containing the data needed for GPS navigation. Working with thermally powered models (glow-type internal combustion engines, in this case) is dangerous because of the large amount of energy they can reach. To maintain maximum safety during the development of the project, a process of three independent stages has been designed, allowing proper familiarization with the various components to be used. The stages are as follows: 1. Testing and modelling of all components through small assemblies on a through-hole protoboard with individual programs, done with a multipurpose board containing a microcontroller similar in characteristics, although less complex, to that of the final prototype. 2. Integration of all components through a specially designed board serving as an interface between the multipurpose board and all the hardware needed to control a ground vehicle with the same characteristics (actuators and engine) as the flying model. 3. Design of an embedded system that brings together all the subsystems developed in the previous stages and integrates all the components needed to control a fixed-wing aircraft. ABSTRACT. Nowadays, the automation of different tasks performed by manned vehicles is being studied in many fields. These tasks are in any case very expensive, due to large fuel consumption and the cost of buying and maintaining the aircraft, without taking into account the risk to the human crew. Examples of these tasks include policing and border surveillance, maintenance of high-voltage lines, early warning of forest fires, and measurement of pollution parameters. The target of this project is the design and construction of an embedded electronic prototype, based on a microcontroller with a C8051 core from Silicon Labs, that will be able to control an aircraft transparently, so that in the future the flying model could be changed, with the modification of some parameters, and video systems or other variable-sensing systems could be added. The prototype will follow a path designed and transferred as a plain text file in a given format, containing all the data necessary for GPS navigation. Working with heat engine models (internal combustion engines, glow type, in this case) is dangerous due to the large energy they can reach.
In order to keep the maximum safety level during the project's evolution, a process of three independent stages has been designed, allowing proper familiarization with the parts that will be used. The stages are as follows: 1. Testing and modelling of all the parts through small assemblies on a through-hole protoboard with stand-alone programs, done with a multipurpose card containing a microcontroller similar in characteristics, although less complex, to that of the final prototype. 2. Integration of all the parts through a specially designed card that serves as an interface between the multipurpose card and all the hardware necessary for controlling a ground vehicle with the same characteristics (actuators and engine) as the flying model. 3. Design of an embedded system that contains all the subsystems developed in the previous stages and integrates all the parts necessary for controlling a fixed-wing aircraft.
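A hedged sketch of the core waypoint-following computation such a prototype needs: parsing a route file and computing the great-circle distance and initial bearing to the next waypoint. The abstract does not specify the route file format, so the one-line-per-waypoint "lat,lon" format below is invented for illustration:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m, haversine) and initial bearing (degrees)
    from the current position to the next waypoint."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

def parse_route(text: str):
    """Parse an invented 'lat,lon' per-line route file (illustrative format only)."""
    return [tuple(map(float, line.split(","))) for line in text.splitlines() if line.strip()]

route = parse_route("40.4168,-3.7038\n40.4300,-3.6900")
print(distance_and_bearing(*route[0], *route[1]))  # distance (m) and heading to fly
```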