935 resultados para Human Machine Interface


Relevância: 90.00%

Resumo:

Traditional processes for the treatment of hazardous waste are questionable because they generate other wastes that adversely affect people's health. To minimize these problems, a system was developed for the treatment of hazardous waste by thermal plasma, a more appropriate technology since its high temperatures prevent the formation of pollutants toxic to human beings. The present work presents an automation solution for this plant. The system has local and remote monitoring resources to ensure the safety of both the operators and the process itself. Special attention was given to the temperature control of the plant's main reactor, since it is where the main processing occurs and since it has a complex mathematical model; for this, cascaded controllers based on fuzzy logic were employed. A process computer with a dedicated man-machine interface (MMI) provides the operator with plant information and controls, including over the Internet. A compact PLC module acts as the central element of plant automation and control: it receives information from the sensors and sends it to the MMI.
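The inner loop of such a fuzzy temperature controller can be sketched as follows. This is a minimal illustration, not the paper's actual controller: the membership functions, rule set, and power values are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_power_delta(error):
    """Map the temperature error (setpoint - measured, in K) to a plasma
    power adjustment using three illustrative rules and centroid
    defuzzification over singleton consequents."""
    # Rule activations: error is negative / near zero / positive.
    neg = tri(error, -200, -100, 0)
    zero = tri(error, -100, 0, 100)
    pos = tri(error, 0, 100, 200)
    # Singleton consequents (kW): decrease, hold, increase power.
    num = neg * (-5.0) + zero * 0.0 + pos * 5.0
    den = neg + zero + pos
    return num / den if den else 0.0
```

In a cascade arrangement, an outer loop on the reactor temperature would compute the setpoint error fed to this inner rule base.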

Relevância: 90.00%

Resumo:

Recent statistics have shown that two of the most important causes of failure in UAV (Uninhabited Aerial Vehicle) missions are the low level of decisional autonomy of the vehicles and the man-machine interface. A relevant issue, therefore, is to design a display/controls architecture that allows efficient interaction between the operator and the remote vehicle, and to develop a level of automation that allows the vehicle to decide on changes to the mission. The research presented in this paper focuses on a modular man-machine interface simulator for UAV control, which simulates UAV missions and was developed to experiment with solutions to this problem. The main components of the simulator are an advanced interface and an automation block comprising an algorithm that implements the system's level of automation. The simulator was designed and developed following a user-centred design approach, in order to take the operator's needs into account in the communication with the vehicle. The level of automation follows supervisory control theory, in which the human becomes a supervisor who sends high-level commands (such as parts of a mission, targets, or constraints, expressed as if-then rules), while the vehicle receives, interprets, and translates such commands into detailed actions such as routes or actions on the control system. To allow the vehicle to calculate and recalculate a safe and efficient route, in terms of distance, time and fuel, a 3D planning algorithm was developed. It treats UAVs representative of real-world systems as objects moving in a virtual environment (terrain, obstacles and no-fly zones) that replicates the airspace. Original obstacle-avoidance strategies were conceived to generate mission plans consistent with flight rules and with the vehicle's performance constraints.
The interface is based on a touch screen, used to send high-level commands to the vehicle, and a 3D virtual display that provides a stereoscopic, augmented visualization of the complex scenario in which the vehicle operates; it is also equipped with an audio feedback message generator. Simulation tests were conducted with pilot trainers to evaluate the reliability of the algorithm and the effectiveness and efficiency of the interface in supporting the operator in the supervision of a UAV mission. The results revealed that the planning algorithm calculates very efficient routes in a few seconds, that an adequate level of workload is required to command the vehicle, and that the 3D-based interface provides the operator with a good sense of presence and enhances awareness of the mission scenario and of the vehicle under control.
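The core idea of 3D route planning through a virtual environment with obstacles and no-fly zones can be sketched with a breadth-first search over an occupancy grid. This is only a didactic stand-in: the paper's actual algorithm, its cost terms (distance, time, fuel), and its avoidance strategies are not reproduced here.

```python
from collections import deque

def plan_route(start, goal, obstacles, size=5):
    """Breadth-first search on a size^3 occupancy grid. Cells listed in
    `obstacles` (e.g. terrain or no-fly zones) are avoided; returns a
    shortest sequence of grid cells from start to goal, or None."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    frontier, came = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:        # walk parents back to the start
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if all(0 <= c < size for c in nxt) and nxt not in obstacles and nxt not in came:
                came[nxt] = cur
                frontier.append(nxt)
    return None
```

A real planner would weight edges by time and fuel and replan as the environment changes; BFS here simply demonstrates obstacle-aware shortest-path routing in 3D.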

Relevância: 90.00%

Resumo:

Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may be conceived not only as sources of sensor information but also, through interaction with their users, as producers of highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario in which drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
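The bridge from an HMI event to a Sensor Web observation can be pictured as wrapping the driver's report in an observation record. The sketch below uses simplified field names loosely modelled on the OGC Observations & Measurements pattern; the identifiers and structure are illustrative assumptions, not the paper's actual encoding.

```python
import json
from datetime import datetime, timezone

def driver_observation(phenomenon, value, lat, lon):
    """Wrap a driver's spoken or touch report as a simplified O&M-style
    observation dict (field names are illustrative, not normative)."""
    return {
        "type": "Observation",
        "observedProperty": phenomenon,  # e.g. "TrafficDensity"
        "result": value,
        "resultTime": datetime.now(timezone.utc).isoformat(),
        "featureOfInterest": {
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
        },
        # Hypothetical identifier for the HMI channel that produced it.
        "procedure": "urn:example:hmi:voice-report",
    }

obs = driver_observation("TrafficDensity", "heavy", 40.4168, -3.7038)
payload = json.dumps(obs)  # ready to publish to a Sensor Web service
```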

Relevância: 90.00%

Resumo:

This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire the user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of the different components found in the human-machine interaction forces. This technique made it possible to isolate the components related to the navigational commands and to develop a fuzzy logic controller to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, and was very well accepted by spinal cord injury patients and clinical staff.

Relevância: 90.00%

Resumo:

Human face detection is currently a difficult problem owing to the many parameters involved, and it is of growing interest in fields such as personal identification and the man-machine interface. Most face images contain a background that must be removed or discriminated in order to detect the human face. This project therefore covers the design and implementation of a human face detection system as the first step in the process, leaving the way open to extend the project in the future to the next step, facial recognition, a topic not covered here. In the scientific literature, one of the most important real-time face detection methods is the Viola-Jones algorithm, which, used together with the OpenCV libraries, is the algorithm chosen for the development of this project. Briefly, the application works as follows: it can capture video in real time and recognize the face the webcam captures, distinguishing it from the other objects visible through the camera. To show that a face has been detected, the face is fully framed by a rectangle and tracked if it moves. The user may also save the image the camera is showing, storing it in any directory of the PC. In addition, the application can detect human faces in a still image stored on the PC, reporting the number of faces detected and allowing them to be viewed successively as many times as desired.
The algorithm is based on scanning the entire surface of the image in search of a human face: the image is first converted to grey scale and then analysed, and the result is shown with the detected face framed.
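The computational core of the Viola-Jones approach is the integral image, which lets any rectangular Haar-like feature be evaluated in constant time regardless of its size. A minimal pure-Python sketch of that core idea (the full cascade of boosted classifiers, as provided by OpenCV's `CascadeClassifier`, is not reproduced here):

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] holds the sum of
    all pixels img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle at (x, y) of size w*h,
    computed with just four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_vertical(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: top half minus bottom half,
    one of the contrast patterns the detector scans for."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)
```

The detector slides such features over the grey-scale image at multiple scales; the integral image is what makes that scan fast enough for real time.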

Relevância: 90.00%

Resumo:

Problems of intellectualisation of the man-machine interface and methods of self-organization for network control in multi-agent infotelecommunication systems are discussed. An architecture and principles for the construction of network and neural agents for next-generation telecommunication systems are suggested. Methods of adaptive, multi-agent routing of information flows at the request of external agents (users of global telecommunication systems and computer networks) are described.

Relevância: 90.00%

Resumo:

This article analyses the role of drones in the emergence of new forms of political participation and of contestation of power by social collectives. It proposes a feminist reading of drones as cyborgs (human-machines) to explore the agencies distributed between human and non-human actors, with the aim of making power relations visible and analysing the configuration of counter-realities. Eight cases are presented of social collectives that, with the help of a drone, contest the power of governments and transnational corporations, as well as staging innovative public interventions.

Relevância: 90.00%

Resumo:

Robots are increasingly present in a variety of workplaces, providing an array of benefits such as alternative solutions to traditional human labour. While developing fully autonomous robots is the ultimate goal of many robotic applications, the reality is that there still exist many situations where robots require some level of teleoperation to achieve their assigned goals, especially when deployed in non-deterministic environments. For instance, teleoperation is commonly used in areas such as search and rescue, bomb disposal, and the exploration of inaccessible or harsh terrain. This is due to a range of factors, such as robots' inability to quickly and reliably navigate unknown environments or to provide high-level decision making, especially in time-critical tasks. To provide an adequate solution for such situations, human-in-the-loop control is required. When developing human-in-the-loop control it is important to take advantage of the complementary skill sets of humans and robots: robots can perform rapid calculations, provide accurate measurements through hardware such as sensors, and store large amounts of data, while humans provide experience, intuition, risk management, and complex decision-making capabilities. Shared autonomy is the concept of building robotic systems that take advantage of these complementary skill sets to provide a robust and efficient robotic solution. As long as the requirement for human-in-the-loop control exists, Human Machine Interaction (HMI) remains an important research topic, especially the area of User Interface (UI) design. To provide operators with an effective teleoperation system, the interface must be intuitive and dynamic while also achieving a high level of immersion. Recent advancements in virtual and augmented reality hardware are giving rise to innovative HMI systems.
Interactive hardware such as the Microsoft Kinect, Leap Motion, Oculus Rift, Samsung Gear VR, and even CAVE Automatic Virtual Environments [1] provide vast improvements over traditional user interface designs, as does the experimental web browser JanusVR [2]. Combined with the introduction of standardized robot frameworks such as ROS and Webots [3], which now support a large number of different robots, this provides an opportunity to develop a universal UI for teleoperation control that improves operator efficiency while reducing teleoperation training. This research introduces the concept of a dynamic virtual workspace for the teleoperation of heterogeneous robots in non-deterministic environments that require human-in-the-loop control. The system first identifies the connected robots through the use of kinematic information, then determines their network capabilities, such as latency and bandwidth. Given the robot type and network capabilities, the system can then offer the operator the available teleoperation modes, such as pick-and-place control or waypoint navigation, while also allowing the operator to adapt the virtual workspace layout to present information from on-board cameras or sensors.
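The step of deriving available teleoperation modes from robot type and link quality can be sketched as a simple capability check. The mode names and thresholds below are invented for illustration; the abstract does not state the system's actual policy.

```python
def available_modes(robot_type, latency_ms, bandwidth_mbps):
    """Return the teleoperation modes a connected robot can safely offer
    given its measured link quality (thresholds are illustrative)."""
    modes = ["waypoint_navigation"]  # high-level commands tolerate latency
    if robot_type == "manipulator" and latency_ms < 250:
        modes.append("pick_and_place")
    if latency_ms < 100 and bandwidth_mbps >= 5:
        modes.append("direct_control")  # low-level, joystick-style control
    if bandwidth_mbps >= 2:
        modes.append("camera_stream")   # live on-board video in the workspace
    return modes
```

The point of such gating is that low-level control degrades badly over a slow link, whereas supervisory modes such as waypoint navigation remain usable.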

Relevância: 80.00%

Resumo:

In this paper we discuss how a network of sensors and robots can cooperate to solve important robotics problems such as localization and navigation. We use a robot to localize sensor nodes, and we then use these localized nodes to navigate robots and humans through the sensorized space. We explore these novel ideas with results from two large-scale sensor network and robot experiments involving 50 motes, two types of flying robot: an autonomous helicopter and a large indoor cable array robot, and a human-network interface. We present the distributed algorithms for localization, geographic routing, path definition and incremental navigation. We also describe how a human can be guided using a simple hand-held device that interfaces to this same environmental infrastructure.
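Geographic routing over localized nodes can be sketched as greedy forwarding: each hop relays the message to the neighbour closest to the destination's coordinates. The radio range and node layout below are illustrative assumptions, not the paper's actual protocol.

```python
import math

def greedy_route(nodes, src, dst, radio_range=1.5):
    """Greedy geographic forwarding over localized nodes
    (`nodes`: id -> (x, y)). Returns the hop sequence from src to dst,
    or None when greedy forwarding hits a local minimum or a gap."""
    def dist(a, b):
        return math.hypot(nodes[a][0] - nodes[b][0], nodes[a][1] - nodes[b][1])

    path, cur = [src], src
    while cur != dst:
        neigh = [n for n in nodes if n != cur and dist(cur, n) <= radio_range]
        if not neigh:
            return None
        nxt = min(neigh, key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(cur, dst):
            return None  # no neighbour makes progress: local minimum
        path.append(nxt)
        cur = nxt
    return path
```

The same position information that routes packets can guide a human: the hand-held device queries the next hop toward the goal and displays it as a direction.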

Relevância: 80.00%

Resumo:

Visual detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most research in visual voice activity detection (VVAD) has neglected variabilities in the visual domain, such as viewpoint variation. In this paper we investigate the effectiveness of visual information from the speaker's frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful for the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker's face may not always be frontal.
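A GMM-based VVAD decision amounts to comparing the likelihood of a visual feature under a "speech" mixture and a "silence" mixture. The sketch below uses a one-dimensional feature and made-up mixture parameters; in practice the models are multivariate and trained (e.g. on CUAVE) via EM.

```python
import math

def gauss(x, mean, var):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_loglik(x, gmm):
    """Log-likelihood of x under a 1-D mixture [(weight, mean, var), ...]."""
    return math.log(sum(w * gauss(x, m, v) for w, m, v in gmm))

# Toy models over a lip-motion energy feature (parameters are illustrative).
SPEECH = [(0.5, 2.0, 0.5), (0.5, 3.5, 1.0)]
SILENCE = [(0.7, 0.2, 0.1), (0.3, 0.8, 0.3)]

def is_speaking(feature, threshold=0.0):
    """Frame-level VVAD decision from the log-likelihood ratio."""
    return gmm_loglik(feature, SPEECH) - gmm_loglik(feature, SILENCE) > threshold
```

Frontal and profile views would each get their own feature extractor and trained mixtures, but the decision rule stays the same.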

Relevância: 80.00%

Resumo:

There is increased interest in Uninhabited Aerial Vehicle (UAV) operations and in advanced methods for commanding and controlling multiple heterogeneous UAVs, and research into supervisory control has grown rapidly. Past research has investigated various approaches to autonomous control and operator limitation to improve mission commanders' Situation Awareness (SA) and cognitive workload. The aim of this paper is to address this challenge through a visualisation framework of UAV information constructed from Information Abstraction (IA). This paper presents the concept and process of IA, the visualisation framework constructed using IA, the concept of the associated Level Of Detail (LOD) indexing method, and the visualisation of an example of the framework. Experiments will test the hypothesis that the operator can achieve increased SA and reduced cognitive load with the proposed framework.
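An LOD index of this kind can be pictured as a table mapping each abstraction level to the state fields the display exposes. The field names and level structure below are assumptions for illustration, not taken from the paper.

```python
# Illustrative LOD index: each level adds detail to the previous one.
LOD_FIELDS = {
    0: ["id", "position"],                      # fleet overview
    1: ["id", "position", "heading", "fuel"],   # mission monitoring
    2: ["id", "position", "heading", "fuel",
        "sensor_feed", "waypoint_queue"],       # single-vehicle focus
}

def abstract_view(uav_state, lod):
    """Return only the fields the chosen Level Of Detail exposes,
    reducing what the operator must attend to at low LODs."""
    return {k: uav_state[k] for k in LOD_FIELDS[lod] if k in uav_state}
```

The operator (or the framework) raises the LOD only for the vehicle in focus, keeping the rest of the fleet abstracted.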

Relevância: 80.00%

Resumo:

Phenomenography is a research approach devised to allow the investigation of the varying ways in which people experience aspects of their world. Whilst growing attention is being paid to interpretative research in LIS, it is not always clear how the outcomes of such research can be used in practice. This article explores the potential contribution of phenomenography in advancing the application of phenomenological and hermeneutic frameworks to LIS theory, research and practice. In phenomenography we find a research tool which, in revealing variation, uncovers everyday understandings of phenomena and provides outcomes that are readily applicable to professional practice. The outcomes may be used in human-computer interface design, enhancement, implementation and training; in the design and evaluation of services; and in education and training for both end users and information professionals. A proposed research territory for phenomenography in LIS includes investigating qualitative variation in the experienced meaning of: 1) information and its role in society; 2) LIS concepts and principles; 3) LIS processes; and 4) LIS elements.

Relevância: 80.00%

Resumo:

Various tools have been developed to assist designers in making interfaces easier to use, although none yet offers a complete solution. Through previous work we have established that intuitive interaction is based on past experience. From this we have developed a theory of intuitive interaction, a continuum, and a conceptual tool for intuitive use. We then trialled our tool. First, one designer used the tool to design a camera. Second, seven groups of postgraduate students redesigned various products using our tool. We then chose one of these, a microwave, and prototyped the new and original microwave interfaces on a touchscreen, testing them on three different age groups. We found that the new design was more intuitive and was rated by participants as more familiar. Design interventions based on our intuitive interaction theory can therefore work. Work is ongoing to develop the tool further.