167 results for realtime
Abstract:
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-meter spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to: (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP) (U.S. Army Corps of Engineers, 1999). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades. The first objective of this report is to validate the spatially continuous EDEN water-surface model for the Everglades, Florida, developed by Pearlstine et al. (2007), by using an independent field-measured dataset. The second objective is to demonstrate two applications of the EDEN water-surface model: to estimate site-specific ground elevation by using the validated EDEN water surface and observed water-depth data, and to create water-depth hydrographs for tree islands. We found no statistically significant differences between model-predicted and field-observed water-stage data in either southern Water Conservation Area (WCA) 3A or WCA 3B. Tree island elevations were derived by subtracting field water-depth measurements from the predicted EDEN water surface. Water-depth hydrographs were then computed by subtracting tree island elevations from the EDEN water stage. Overall, the model is reliable, with a root mean square error (RMSE) of 3.31 cm; by region, the RMSE is 2.49 cm in WCA 3A and 7.77 cm in WCA 3B. This new landscape-scale hydrological model has wide applications for ongoing research and management efforts that are vital to restoration of the Florida Everglades. The accurate, high-resolution hydrological data generated over broad spatial and temporal scales by the EDEN model provide a previously missing key to understanding the habitat requirements of, and linkages among, native and invasive populations, including fish, wildlife, wading birds, and plants. The EDEN model is a powerful tool that could be adapted for other ecosystem-scale restoration and management programs worldwide.
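As a concrete illustration of the two applications (all numbers invented, not EDEN data or code), the elevation and hydrograph arithmetic can be sketched as:

```python
import numpy as np

# Hypothetical daily EDEN water-surface elevations (cm above datum) at a
# tree island site, plus one field visit with an observed water depth.
eden_stage = np.array([120.0, 118.5, 117.2, 119.8, 121.3])  # daily stage
observed_depth = 31.3          # field depth (cm) measured on day of eden_stage[2]

# Site ground elevation = model water surface minus observed depth
ground_elev = eden_stage[2] - observed_depth

# Water-depth hydrograph = daily stage minus the derived ground elevation
depth_hydrograph = eden_stage - ground_elev

# Validation metric used in the report: root mean square error between
# model-predicted and field-observed stages (values here are invented).
predicted = np.array([120.4, 118.1, 117.9])
observed = np.array([121.0, 117.5, 118.6])
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(depth_hydrograph, rmse)
```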
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, together with the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication mechanisms can transfer data at high speed. The concept of a distributed system emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety-Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior, and it provides independence from the network protocol by defining a network interface and protocol modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time of each functional block), and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
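To illustrate the two-phase idea in miniature, a hedged sketch follows (in Python for brevity, whereas the thesis middleware is Java/RTSJ-based; every name is hypothetical): binding a remote reference reserves all resources up front, so the invocation itself performs no dynamic allocation.

```python
# A minimal sketch, not the thesis middleware: all allocation happens when
# a remote reference is bound, so invocations do only bounded work.
class RemoteRef:
    def __init__(self, endpoint, max_msg_bytes, deadline_ms):
        # Phase 1 -- binding: reserve every resource up front.
        self.endpoint = endpoint
        self.deadline_ms = deadline_ms
        self.buffer = bytearray(max_msg_bytes)    # preallocated message buffer
        self.connected = self._open_connection()  # hypothetical setup step

    def _open_connection(self):
        return True  # stand-in for opening a preconfigured network channel

    def invoke(self, payload: bytes) -> bytes:
        # Phase 2 -- invocation: bounded work only, no allocation.
        if len(payload) > len(self.buffer):
            raise ValueError("message exceeds preallocated buffer")
        self.buffer[:len(payload)] = payload
        # A real implementation would send self.buffer and wait at most
        # deadline_ms for the reply; here we simply echo the payload back.
        return bytes(self.buffer[:len(payload)])

ref = RemoteRef("flight-mgmt-node", max_msg_bytes=512, deadline_ms=5)
print(ref.invoke(b"get_waypoint(3)"))
```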
Abstract:
The environmental impact of systems managing large (kilogram-scale) amounts of tritium is a matter of public scrutiny for upcoming fusion facilities such as ITER and DEMO. Furthermore, potential new dose limits imposed by international regulations (ICRP) will affect the designs of upcoming devices and the overall cost of deploying fusion technology. Refined schemes for assessing the environmental tritium dose impact are therefore essential. Detailed assessments can be obtained from knowledge of the real boundary conditions of the primary tritium discharge phase into the atmosphere (at low levels) and into soils. Lagrangian dispersion models using real-time meteorological and topographic data provide a strong refinement, and advanced simulation tools are being developed to this end. The tool integrates numerical model output records from the European Centre for Medium-Range Weather Forecasts (ECMWF) with a Lagrangian atmospheric dispersion model (FLEXPART). The results of the composite ECMWF/FLEXPART model can be coupled with tritium dose assessment tools for the secondary-phase pathway. Source terms have been assumed for the nominal operational tritium discharge reference and for selected incidental tritium forms from ITER-like plant systems. The real-time daily data and mesh-refined records, together with the Lagrangian dispersion model approach, provide accurate results for doses to the population by inhalation or ingestion in the secondary phase.
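As a toy illustration of the Lagrangian approach (not the ECMWF/FLEXPART tool; wind and diffusivity invented), particles can be advected by a mean wind plus a random-walk term for turbulent diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D Lagrangian particle step: a real run would interpolate ECMWF
# wind fields in space and time; here the wind is constant.
n_particles = 10_000
pos = np.zeros((n_particles, 2))        # all released at the stack (m)
u_wind = np.array([3.0, 1.0])           # assumed mean wind (m/s)
K = 50.0                                # assumed eddy diffusivity (m^2/s)
dt = 60.0                               # time step (s)

for _ in range(60):                     # one hour of transport
    turb = rng.normal(scale=np.sqrt(2 * K * dt), size=pos.shape)
    pos += u_wind * dt + turb

# Ground-level concentration proxy: particle count per grid cell
hist, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=50)
print("peak cell count:", hist.max())
```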
Abstract:
Objective: This research focuses on the creation and validation of a solution to the inverse kinematics problem for a 6-degrees-of-freedom human upper limb. This system is intended to work within a real-time dysfunctional-motion prediction system that allows anticipatory actuation in physical neurorehabilitation under the assist-as-needed paradigm. For this purpose, a multilayer perceptron-based and an ANFIS-based solution to the inverse kinematics problem are evaluated. Materials and methods: Both the multilayer perceptron-based and the ANFIS-based inverse kinematics methods have been trained with three-dimensional Cartesian positions corresponding to the end-effector of healthy human upper limbs executing two different activities of daily living: "serving water from a jar" and "picking up a bottle". Validation of the proposed methodologies has been performed by a 10-fold cross-validation procedure. Results: Once trained, the systems are able to map 3D positions of the end-effector to the corresponding healthy biomechanical configurations. A high mean correlation coefficient and a low root mean squared error have been found for both the multilayer perceptron- and ANFIS-based methods. Conclusions: The obtained results indicate that both systems effectively solve the inverse kinematics problem, but, due to its low computational load (crucial in real-time applications) and its high performance, a multilayer perceptron-based solution, consisting of 3 input neurons, 1 hidden layer with 3 neurons, and 6 output neurons, has been considered the most appropriate for the target application.
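As a sketch of the selected topology (synthetic stand-in data; the study trains on recorded healthy upper-limb motions), the 3-3-6 multilayer perceptron can be set up as follows:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Invented 3-D end-effector positions (inputs) and 6-DOF joint
# configurations (outputs), just to show the 3-3-6 topology.
X = rng.uniform(-1.0, 1.0, size=(500, 3))   # end-effector x, y, z
y = rng.uniform(-1.0, 1.0, size=(500, 6))   # 6 joint angles (radians)

# 3 inputs -> 1 hidden layer of 3 neurons -> 6 outputs, as in the abstract
mlp = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
mlp.fit(X, y)

# Map a new end-effector position to a biomechanical configuration
print(mlp.predict([[0.2, -0.1, 0.4]]))
```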
Abstract:
Recently, a new recipe for developing and deploying real-time systems has become increasingly adopted at the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) to handle all real-time I/O, an official vanilla Linux kernel has been demonstrated to be able to provide real-time performance to user-space applications that are required to meet stringent timing constraints. In particular, a careful rearrangement of the Interrupt ReQuest (IRQ) affinities, together with the kernel's CPU isolation mechanism, makes it possible to obtain either soft or hard real-time behavior depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements, such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented. The goal is to compare its deterministic performance while running on a vanilla and on a Messaging Real-time Grid (MRG) Linux kernel.
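As a minimal sketch of the CPU-isolation ingredient (core number and priority are illustrative, not JET's configuration; Linux-only, and the scheduler call requires root privileges):

```python
import os

# Pin the current process to an isolated core so the scheduler keeps other
# work off the CPU running the time-critical loop. With a suitable kernel
# command line (e.g. isolcpus=3) plus IRQ affinity masks, most interrupts
# are kept off that core as well.
os.sched_setaffinity(0, {3})  # run only on CPU 3 (fails on <4-core hosts)

# Request a real-time FIFO scheduling class (raises PermissionError
# without the required privileges).
param = os.sched_param(80)
os.sched_setscheduler(0, os.SCHED_FIFO, param)

print("affinity:", os.sched_getaffinity(0))
```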
Abstract:
The impact of disruptions in JET became even more important with the replacement of the previous Carbon Fiber Composite (CFC) wall with a more fragile full-metal ITER-like wall (ILW). The development of robust disruption mitigation systems is crucial for JET (and also for ITER), and a reliable real-time (RT) disruption predictor is a prerequisite to any mitigation method. The Advanced Predictor Of DISruptions (APODIS) has been installed in the JET Real-Time Data Network (RTDN) for the RT recognition of disruptions. The predictor operates with the new ILW, but it has been trained only with discharges belonging to campaigns with the CFC wall. Seven real-time signals are used to characterize the plasma status (disruptive or non-disruptive) at regular intervals of 1 ms. After the first 3 JET ILW campaigns (991 discharges), the success rate of the predictor is 98.36% (alarms are triggered on average 426 ms before the disruption). The false alarm and missed alarm rates are 0.92% and 1.64%, respectively.
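As a toy sketch of the real-time evaluation loop (the linear score and threshold are invented; the actual APODIS uses machine-learning classifiers trained on past discharges):

```python
import numpy as np

rng = np.random.default_rng(1)

# Every millisecond a feature vector built from 7 diagnostic signals is
# scored, and an alarm fires when the score crosses a threshold.
weights = rng.normal(size=7)   # stand-in for a trained model
THRESHOLD = 2.5                # invented alarm threshold

def disruptive(features: np.ndarray) -> bool:
    return float(weights @ features) > THRESHOLD

for t_ms in range(5):                      # 1 ms evaluation cadence
    features = rng.normal(size=7)          # would come from the RTDN
    if disruptive(features):
        print(f"alarm at t={t_ms} ms -> trigger mitigation")
```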
Abstract:
Networks need to provide higher speeds than those currently on offer. To that end, given that spectrum is the scarcest resource in radio technologies, their evolution and any new developments must maximize the number of bits per hertz transmitted. Long Term Evolution (LTE) optimizes spectral efficiency with new modulations in the air interface and more advanced radio algorithms. On top of these capabilities, LTE is an end-to-end IP-based technology, which makes it possible to offer high per-user transmission rates and very low latency, that is, network response times of only around 10 milliseconds, so any real-time application can be offered. LTE is the latest standard in mobile network technology and will ensure the competitiveness of 3GPP in the future; it may be considered a bridge technology between current 3G-3.5G networks and future 4G networks, which are expected to reach speeds of up to 1 Gbit/s. LTE provides operators with a simplified yet robust architecture, supporting services over IP technology. The goals pursued with its deployment are ambitious: on the one hand, users will enjoy a wide range of added services, with capabilities similar to those they currently have with residential broadband access and at competitive prices, while the operator will run a fully IP-based network, reducing its complexity and cost, which will give operators the opportunity to migrate directly to LTE. A major advantage of LTE is its ability to merge with existing networks, ensuring interconnection with them, extending current coverage, and allowing a data connection established by a user in the LTE environment to continue when LTE coverage fades. Moreover, the operator has the advantage of being able to deploy the LTE network gradually, starting with the areas of high demand for broadband services and expanding progressively in line with that demand.
Abstract:
This work focuses on the construction of the physical part of a virtual character. The development presents the 3D modeling, kinematics, and animation techniques used to create virtual characters. An implementation is also included, divided into: modeling of the virtual character, creation of an inverse kinematics system, and creation of animations using that kinematics system. First, a 3D model faithful to the original design is created; second, an inverse kinematics system is developed that accurately solves the positions of the articulated parts that make up the virtual character; and third, animations are created using the kinematics system to obtain fluid, polished animations in real time. As a result, an animated 3D component has been obtained that is reusable, extensible, and exportable to other virtual environments.
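As an illustration of the kind of computation an inverse kinematics system performs, here is a minimal analytic solver for a planar two-link arm (a generic textbook example, not the implementation described above):

```python
import math

# Given a target (x, y) and link lengths l1, l2, solve the two joint
# angles of a planar 2-link arm using the law of cosines.
def two_link_ik(x: float, y: float, l1: float, l2: float):
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                      # elbow angle
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )                                                 # shoulder angle
    return shoulder, elbow

print(two_link_ik(1.2, 0.5, 1.0, 1.0))
```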
Abstract:
An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators – a highly effective way to reach and maintain proficiency in this complex skill. Principles that might allow artificial agents to learn language this way are incompletely known at present – the multi-dimensional nature of socio-communicative skills is beyond every machine learning framework so far proposed. Our work begins to address the challenge by proposing a way for observation-based machine learning of natural language and communication. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal inference and reasoning using forward-inverse models. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gesture and situated language to talk about the recycling of various materials and objects. S1 can learn complex multimodal language and multimodal communicative acts, with a vocabulary of 100 words forming natural sentences with relatively complex sentence structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee, and a small ontology; no grammar or other information is provided to S1 a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution related to recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation S1 can perform an unscripted TV interview with a human, in the same style, without making mistakes.
Abstract:
Real-time systems play an increasingly important role in our society. They are a fundamental component of control systems, which in turn form part of various engineering systems that are basic to industrial, military, communications, space, and medical activities. Resource scheduling is a central issue in the development of real-time systems. Its purpose is to assign the available resources to the tasks in such a way that their timing constraints are met. For a long time, the state of the art in scheduling methods was rudimentary. Nowadays, priority-based scheduling methods have reached a sufficient level of maturity for application in industrial environments. However, there are open questions that may hinder their use. The main goal of this thesis is to study priority-based scheduling methods, identify the remaining open questions, and develop protocols, implementation templates, and guidelines that make their use in industrial systems more feasible. One open question is the lack of implementation schemes, based on commercial real-time kernels, for some of the protocols. POSIX and Ada 9X have served to identify the services usually available. A set of implementation templates for periodic and sporadic real-time tasks has been developed, with provision for timing-failure detection, intertask communication, change of the system's execution mode, and failure handling based on recovery groups. The templates have been coded in Ada 9X, and guidelines are provided for analyzing the schedulability of a system developed on this basis. An additional result has been the identification of the minimal functionality needed to develop real-time systems with the above characteristics. The capacity to adapt to changes in the environment is a desirable feature of real-time systems. If these changes were not foreseen at design time, or if there are faulty modules, some tasks must be modified or added. System updates are usually performed statically and installed after stopping execution. However, there are systems whose operation cannot be stopped without causing material or economic damage. An alternative is to design the system as a set of units that can be replaced without interfering with the execution of the other units. To that end, a dynamic replacement protocol for hard real-time systems has been developed, its compatibility with priority-based scheduling methods has been verified, and a practical implementation scheme of the protocol has been produced.
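As a pointer to what such schedulability guidelines typically involve, here is a minimal sketch (not from the thesis; the task set is invented) of the classic Liu & Layland utilization test for rate-monotonic fixed priorities:

```python
# Liu & Layland utilization bound for n tasks under rate-monotonic
# scheduling: if total utilization stays under the bound, all deadlines
# are guaranteed; otherwise an exact response-time analysis is needed.
def rm_utilization_bound(n: int) -> float:
    return n * (2 ** (1.0 / n) - 1)

tasks = [(10.0, 2.0), (20.0, 4.0), (50.0, 8.0)]  # (period ms, WCET ms)
u = sum(c / t for t, c in tasks)
bound = rm_utilization_bound(len(tasks))

print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {u <= bound}")
```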
Abstract:
An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. Principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework which begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. Results show that S1 can learn complex multimodal language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar being provided to it a priori, and only high-level information about the format of the human interaction in the form of high-level goals of the interviewer and interviewee and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.
Abstract:
Underwater Passive Acoustic Monitoring (PAM) refers to the use of underwater listening and recording systems to detect, monitor, and identify sound sources through the pressure waves they produce. It is called passive because such systems only listen, without disturbing the existing acoustic environment, unlike active systems such as sonar. Underwater PAM has many areas of application, such as military surveillance systems, port security, environmental monitoring, the development of population-density indices for species, species identification, and so on. Despite its importance, national technology in this area is practically nonexistent. In this context, the present work aims to contribute to the development of national technology on the subject through the design, construction, and operation of autonomous PAM equipment and of signal processing methods for the automated detection of underwater acoustic events. A device named OceanPod was developed, featuring low manufacturing cost, flexibility, and ease of configuration and use, aimed at scientific and industrial research and at environmental monitoring. Several prototypes of this device were built and used in missions at sea. These monitoring campaigns began the creation of an acoustic database, which provided the raw material for testing automated, real-time acoustic event detectors. In addition, a new method for detecting and identifying acoustic events is proposed, based on statistical analysis of the time-frequency representation of the acoustic signals. This new method was tested on the detection of cetaceans present in the database generated by the monitoring missions.
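As an illustration of detection on the time-frequency plane, here is a minimal sketch (synthetic signal; a generic band-energy detector, not the statistical method proposed in the work):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)

# Synthetic "whistle" in noise: a 4 kHz tone between t = 2.0 s and 2.3 s.
fs = 32_000                                   # sample rate (Hz)
t = np.arange(0, 5.0, 1 / fs)
noise = rng.normal(scale=0.1, size=t.size)
tone = np.where((t > 2.0) & (t < 2.3), np.sin(2 * np.pi * 4000 * t), 0.0)
x = noise + tone

# Spectrogram, then flag time frames whose band energy exceeds a robust
# statistical threshold (median + k * MAD) over the background noise.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024)
band = (f > 3500) & (f < 4500)                # band of interest
energy = Sxx[band].sum(axis=0)

mad = np.median(np.abs(energy - np.median(energy)))
detections = tt[energy > np.median(energy) + 8 * mad]
print("event frames near t =", detections[:5])
```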
Abstract:
The low complexity of IIR adaptive filters (AFs) is especially appealing for real-time applications, but some drawbacks have so far prevented their widespread use. For gradient-based IIR AFs, adverse operational conditions cause convergence problems in system identification scenarios: underdamped and clustered poles, undermodelling, or non-white input signals lead to error surfaces where the adaptation nearly stops on large plateaus or gets stuck at sub-optimal local minima that cannot be identified as such a priori. Furthermore, the non-stationarity of the input regressor introduced by the filter's recursivity, and the approximations made by the update rules of the stochastic gradient algorithms, constrain the learning step size to small values, causing slow convergence. In this work, we propose IIR performance enhancement strategies based on hybrid combinations of AFs that achieve higher convergence rates than ordinary IIR AFs while preserving stability.
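As an illustration of the combination idea, here is a minimal sketch (using two FIR LMS filters for brevity, whereas the work concerns IIR AFs; all parameters invented) of a convex combination whose mixing weight adapts to favor whichever filter currently has the smaller error:

```python
import numpy as np

rng = np.random.default_rng(3)

# System identification setup: identify w_true from noisy observations.
N, M = 5000, 8
w_true = rng.normal(size=M)
x = rng.normal(size=N)
d = np.convolve(x, w_true, mode="full")[:N] + 0.01 * rng.normal(size=N)

w1, w2 = np.zeros(M), np.zeros(M)          # fast / slow component filters
mu1, mu2, mu_a = 0.05, 0.005, 50.0
a = 0.0                                    # mixing parameter, lambda = sigmoid(a)

for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]           # regressor [x(n), ..., x(n-M+1)]
    y1, y2 = w1 @ u, w2 @ u
    e1, e2 = d[n] - y1, d[n] - y2
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d[n] - (lam * y1 + (1 - lam) * y2) # error of the combined output
    w1 += mu1 * e1 * u                     # independent LMS updates
    w2 += mu2 * e2 * u
    # Gradient step on the mixing parameter, clipped to keep it adaptable
    a = float(np.clip(a + mu_a * e * (y1 - y2) * lam * (1 - lam), -4.0, 4.0))

print("final lambda:", 1.0 / (1.0 + np.exp(-a)))
```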
Abstract:
Teledermatology can provide both accurate and reliable specialist care at a distance. This article reviews current data on the quality of care that teledermatology provides, as well as the societal cost benefits involved in the implementation of the technique. Teledermatology is most suited to patients unable to access specialist services for geographical or social reasons. Patients are generally satisfied with the overall care that teledermatology provides. Real-time teledermatology is more expensive than conventional care for health services. However, significant savings can be expected from the patient's perspective due to reduced travel. Appropriate patient selection, improved technology and adequate clinical workloads may improve both the quality and cost-effectiveness of this service.
Abstract:
Telemedicine is the delivery of health care and the exchange of health-care information across distances. It is not a technology or a separate or new branch of medicine. Telemedicine episodes may be classified on the basis of: (1) the interaction between the client and the expert (i.e. real-time or prerecorded), and (2) the type of information being transmitted (e.g. text, audio, video). Much of the telemedicine which is now practised is performed in industrialized countries, such as the USA, but there is increasing interest in the use of telemedicine in developing countries. There are basically two conditions under which telemedicine should be considered: (1) when there is no alternative (e.g. in emergencies in remote environments), and (2) when it is better than existing conventional services (e.g. teleradiology for rural hospitals). For example, telemedicine can be expected to improve equity of access to health care, the quality of that care, and the efficiency with which it is delivered. Research in telemedicine increased steadily in the late 1990s, although the quality of the research could be improved - there have been few randomized controlled trials to date.