49 results for application monitoring
Abstract:
This document presents an innovative, formal educational initiative aimed at enhancing the development of engineering students' specific competences. Project management provides the common theoretical and practical framework that articulates an experience carried out by multidisciplinary groups. The applied methodology combines full use of Web 2.0 platforms with Project-Based Learning. More specifically, this study focuses on monitoring communication competence when working in virtual environments, providing an ad-hoc rubric as a final result.
Abstract:
In this study, the first geochemical and isotopic data on surface and spring waters and dissolved gases in the area of Hontomín-Huermeces (Burgos, Spain) are presented and discussed. Hontomín-Huermeces was selected as a pilot site for the injection of pure (>99%) CO2. Injection and monitoring wells are planned to be drilled close to 6 oil wells completed in the 1980s. Stratigraphic logs indicate the presence of a confined saline aquifer at a depth of about 1,500 m, into which less than 100,000 tons of liquid CO2 will be injected, possibly starting in 2013. The chemical and isotopic features of the spring waters suggest the occurrence of a shallow aquifer with a Ca2+(Mg2+)-HCO3- composition, relatively low salinity (Total Dissolved Solids of roughly 800 mg/L) and a meteoric isotopic signature. Some spring waters close to the oil wells are characterized by relatively high concentrations of NO3- (up to 123 mg/L), unequivocally indicating anthropogenic contamination that adds to the main water-rock interaction processes. The latter can be ascribed to Ca-Mg-carbonate and, to a minor extent, Al-silicate dissolution, as the outcropping sedimentary rocks range from Palaeozoic to Quaternary in age. Anomalous concentrations of Cl-, SO42-, As, B and Ba were measured in two springs discharging a few hundred meters from the oil wells and in the Rio Ubierna, possibly indicative of mixing processes, although to a very limited extent, between deep and shallow aquifers. Gases dissolved in spring waters show relatively high concentrations of atmospheric species, such as N2, O2 and Ar, and isotopically negative CO2 (< -17.7 ‰ V-PDB), likely related to a biogenic source, possibly masking any contribution from a deep source. The geochemical and isotopic data of this study will be of particular importance when a monitoring program is established to verify whether CO2 leakages, induced by the injection of this greenhouse gas, may affect the quality of the waters of the shallow Hontomín-Huermeces hydrological circuit. In this respect, carbonate chemistry, the carbon isotopic composition of dissolved CO2 and TDIC (Total Dissolved Inorganic Carbon), and selected trace elements can be considered useful parameters to trace the migration of the injected CO2 into near-surface environments.
Abstract:
A notable advantage of wireless transmission is a significant reduction and simplification in wiring and harness. Wireless systems have many applications, but on many occasions sensor nodes require specific housing to protect the electronics from harsh environmental conditions. At present, information on the dynamic behaviour of WSN and RFID nodes is scarce and nonspecific. The purpose of this study is therefore to evaluate the dynamic behaviour of such sensors. A series of trials was designed and performed covering temperature steps between a cold room (5 °C), room temperature (23 °C) and a heated environment (35 °C). The sensor nodes were: three Crossbow motes, a surface-mounted Nlaza module (with the Sensirion sensor located on the motherboard), an aerial-mounted Nlaza module (with the Sensirion sensor at the end of a cable), and four Turbo Tag RFID tags (model T700, with and without housing, and model 702-B, with and without housing). To assess the dynamic behaviour, a first-order response approach is used and fitted with dedicated optimization tools programmed in Matlab, which extract the time constant (τ) and the corresponding coefficient of determination (r2) with respect to the experimental data. The shortest response time (20.9 s) is found for the uncoated T700 tag, whose encapsulated version gives a significantly longer response (107.2 s). Higher τ values correspond to the Crossbow modules (144.4 s) and the surface-mounted Nlaza module (288.1 s), while the module with the aerial-mounted sensor gives a response time (42.8 s) close to, though above, that of the uncoated T700. In conclusion, the dynamic response of temperature sensors within wireless and RFID nodes is dramatically influenced by the way they are housed (to protect them from the environment) as well as by the heat released by the node electronics itself; characterizing this response is essential for monitoring high-rate temperature changes and for certifying the cold chain. In addition, the times to rise and to recover differ significantly, the latter generally being longer than the former.
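The fitting described above is a standard first-order identification problem. The following is a minimal Python sketch of that idea (not the authors' Matlab tooling): it fits T(t) = Tinf + (T0 - Tinf)·exp(-t/τ) to temperature readings and reports τ and r². All data, values and variable names are illustrative.

```python
# Hedged sketch: first-order step-response fit to extract tau and r^2.
import numpy as np
from scipy.optimize import curve_fit

def first_order_step(t, T0, Tinf, tau):
    """First-order response to a step change in ambient temperature."""
    return Tinf + (T0 - Tinf) * np.exp(-t / tau)

# Hypothetical readings from a node moved from a 5 degC cold room to a 23 degC room.
t = np.linspace(0, 600, 121)                      # seconds
rng = np.random.default_rng(0)
T_meas = first_order_step(t, 5.0, 23.0, 107.2) + rng.normal(0, 0.2, t.size)

# Fit the model and compute the coefficient of determination.
popt, _ = curve_fit(first_order_step, t, T_meas, p0=(T_meas[0], T_meas[-1], 60.0))
T_fit = first_order_step(t, *popt)
ss_res = np.sum((T_meas - T_fit) ** 2)
ss_tot = np.sum((T_meas - T_meas.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"tau = {popt[2]:.1f} s, r^2 = {r2:.4f}")
```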
Abstract:
There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One approach to modeling complex processes is to obtain a global model that captures the basic or general behavior of the system, by means of a linear or quadratic regression, and then to superimpose on it a local model that captures the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied for tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using a least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
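The following is a minimal Python sketch of the general idea, not the paper's exact formulation: a global least-squares fit captures the overall trend, and a fuzzy-weighted k-nearest-neighbors smoother over the residuals supplies the local correction. The feature and target names are hypothetical.

```python
# Hedged sketch: hybrid global (least squares) + local (fuzzy k-NN on residuals) model.
import numpy as np

def fit_global(X, y):
    """Global model: ordinary least squares with an intercept term."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_global(w, X):
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    return A @ w

def fuzzy_knn_residual(Xq, X, residuals, k=5, m=2.0, eps=1e-12):
    """Local model: fuzzy-weighted average of the k nearest residuals.
    Weights follow the fuzzy k-NN scheme w_i ~ 1 / d_i^(2/(m-1))."""
    out = np.empty(Xq.shape[0])
    for i, xq in enumerate(Xq):
        d = np.linalg.norm(X - xq, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)
        out[i] = np.sum(w * residuals[idx]) / np.sum(w)
    return out

# Hypothetical training data: sensor-derived features -> tool wear indicator.
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = 2.0 * X_train[:, 0] - X_train[:, 1] + 0.3 * np.sin(6 * X_train[:, 2])

w = fit_global(X_train, y_train)                   # step 1: global model
residuals = y_train - predict_global(w, X_train)   # step 2: local model on residuals

X_new = rng.uniform(0, 1, size=(5, 3))
y_hat = predict_global(w, X_new) + fuzzy_knn_residual(X_new, X_train, residuals)
print(y_hat)
```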
Abstract:
Tool wear detection is a key issue for tool condition monitoring. The maximization of useful tool life is frequently related to the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring on the basis of neuro-fuzzy techniques. The use of a neuro-fuzzy hybridization to design a tool wear monitoring system aims to exploit the synergy of neural networks and fuzzy logic, combining human-like reasoning with learning and a connectionist structure. The turning process, a well-known machining process, is selected for this case study. A four-input (time, cutting forces, vibrations and acoustic emission signals), single-output (tool wear rate) model is designed and implemented on the basis of three neuro-fuzzy approaches (inductive, transductive and evolving neuro-fuzzy systems). The tool wear model is then used for monitoring the turning process. The comparative study demonstrates that the transductive neuro-fuzzy model provides better error-based performance indices for detecting tool wear than either the inductive or the evolving neuro-fuzzy model.
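As a rough illustration of the kind of four-input, single-output fuzzy model described above, the sketch below builds a simple Takagi-Sugeno-style model with Gaussian premises and linear consequents fitted by least squares. It is not the inductive, transductive or evolving systems evaluated in the paper; the data and the class name are invented for the example.

```python
# Hedged sketch: Takagi-Sugeno-style fuzzy model, four inputs -> tool wear rate.
import numpy as np

class TSKModel:
    def __init__(self, n_rules=4, spread=0.5, seed=0):
        self.n_rules, self.spread = n_rules, spread
        self.rng = np.random.default_rng(seed)

    def _firing(self, X):
        # Gaussian membership of each sample to each rule centre, then normalised.
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(axis=2)
        f = np.exp(-d2 / (2 * self.spread ** 2))
        return f / (f.sum(axis=1, keepdims=True) + 1e-12)

    def fit(self, X, y):
        # Rule centres: random training samples (a clustering step could be used instead).
        self.centres = X[self.rng.choice(len(X), self.n_rules, replace=False)]
        W = self._firing(X)
        Xa = np.hstack([X, np.ones((len(X), 1))])
        # One linear consequent per rule, all fitted jointly by least squares.
        Phi = (W[:, :, None] * Xa[:, None, :]).reshape(len(X), -1)
        self.theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        W = self._firing(X)
        Xa = np.hstack([X, np.ones((len(X), 1))])
        Phi = (W[:, :, None] * Xa[:, None, :]).reshape(len(X), -1)
        return Phi @ self.theta

# Hypothetical data: columns are time, cutting force, vibration, acoustic emission.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 4))
wear_rate = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2] * X[:, 3]
model = TSKModel(n_rules=6).fit(X, wear_rate)
print(model.predict(X[:5]), wear_rate[:5])
```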
Abstract:
In 1998 the EXPORT team monitored microlensing event light curves using a charge-coupled device (CCD) camera on the IAC 0.8-m telescope on Tenerife to evaluate the prospect of using northern telescopes to find microlens anomalies that reveal planets orbiting the lens stars. The high airmass and more limited time available for observations of Galactic bulge sources make a northern site less favourable for microlensing planet searches. However, there are potentially a large number of northern 1-m class telescopes that could devote a few hours per night to monitor ongoing microlensing events. Our IAC observations indicate that accuracies sufficient to detect planets can be achieved despite the higher airmass.
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, which incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular accidents, that are considered plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of meeting the safety goals. The probabilistic approach, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) comes in, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamics analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamics analysis support through simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code-coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation, and thus of complex nuclear plant models, into the risk assessment process is what makes the approach so powerful, yet this comes at the cost of an enormous increase in complexity. As the complexity of the process is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, and this is the focus of the present work. This document presents the work done on the investigation of more efficient techniques for the risk assessment process within the ISA methodology. The primary goal of these techniques is to decrease the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced.
Therefore, some assumptions were made to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
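To make the computational issue concrete, the toy sketch below shows why the number of simulations matters when estimating a small damage probability by sampling, and how a variance-reduction idea such as importance sampling can reach a comparable error with far fewer runs. This is not the ISA/TSD formulation or the techniques developed in the work; the "simulation code", threshold and parameters are invented for the example.

```python
# Hedged toy example: crude Monte Carlo vs importance sampling for a rare "damage" event.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
LIMIT = 3.0                        # hypothetical damage threshold

def peak_temperature(u):
    """Stand-in for an expensive accident-simulation code: maps an uncertain
    standard-normal input to a peak-temperature-like response."""
    return u                       # identity keeps the exact answer known: P = 1 - Phi(3)

p_exact = 1.0 - norm.cdf(LIMIT)

# Crude Monte Carlo: many runs are needed because exceedances are rare.
n = 10_000
u = rng.standard_normal(n)
p_mc = np.mean(peak_temperature(u) > LIMIT)

# Importance sampling: sample from a distribution shifted towards the failure
# region and reweight each run by the likelihood ratio.
m = 1_000
v = rng.standard_normal(m) + LIMIT
weights = norm.pdf(v) / norm.pdf(v, loc=LIMIT)
p_is = np.mean((peak_temperature(v) > LIMIT) * weights)

print(f"exact {p_exact:.2e}  crude MC (n={n}) {p_mc:.2e}  importance sampling (n={m}) {p_is:.2e}")
```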
Abstract:
This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations, under the constraints of a service provider: the solution must scale to a wide IP network delivering individual media streams to thousands of users. The solution proposed for the monitoring is called QuEM (Qualitative Experience Monitoring). It is based on the detection of degradations in the network Quality of Service (packet losses, bandwidth drops...) and the mapping of each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping is based on the analysis of the transport and Network Abstraction Layer information of the coded stream, and allows a good characterization of the most relevant defects that occur in this kind of service: screen freezes, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated by subjective quality assessment tests. The methodology used for those tests was also designed to mimic as closely as possible the conditions of a real user of these services: the impairments to be evaluated are introduced at random points in the middle of a continuous video stream.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system which provides higher protection to the parts of the stream which are more critical for the QoE, a solution which applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming, and a selective scrambling algorithm which ciphers only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion about how to apply the previous results and concepts in a 3D video scenario.
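The sketch below gives a rough flavour of the qualitative mapping idea: detect loss events from transport-level sequence numbers and classify each event by the kind of coded data that was lost. It is an illustrative approximation, not the QuEM implementation; the packet fields, NAL/frame classification and mapping rules are invented for the example.

```python
# Hedged sketch: map packet-loss events to qualitative QoE impairments.
from dataclasses import dataclass
from typing import List

@dataclass
class Packet:
    seq: int          # transport-level sequence number
    media: str        # "video" or "audio"
    frame_type: str   # "IDR", "P", "B", or "audio"

def qualitative_impairments(packets: List[Packet]) -> List[str]:
    events = []
    packets = sorted(packets, key=lambda p: p.seq)
    for prev, cur in zip(packets, packets[1:]):
        gap = cur.seq - prev.seq - 1
        if gap <= 0:
            continue
        # Classify the loss by the data surrounding the gap (illustrative rules only).
        if prev.media == "audio":
            events.append(f"{gap}-packet loss -> audio mute")
        elif prev.frame_type == "IDR":
            events.append(f"{gap}-packet loss in IDR frame -> freeze until next random access point")
        else:
            events.append(f"{gap}-packet loss in {prev.frame_type} frame -> transient macroblocking")
    return events

stream = [Packet(1, "video", "IDR"), Packet(2, "video", "P"), Packet(5, "video", "P"),
          Packet(6, "audio", "audio"), Packet(9, "audio", "audio")]
for event in qualitative_impairments(stream):
    print(event)
```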
Wireless measurement system for structural health monitoring with high time synchronization accuracy
Abstract:
Structural health monitoring (SHM) systems have excellent potential to improve the regular operation and maintenance of structures. Wireless networks (WNs) have been used to avoid the high cost of traditional generic wired systems. The most important limitations of wireless SHM systems are time-synchronization accuracy, scalability, and reliability. A complete wireless system for structural identification under environmental load is designed, implemented, deployed, and tested on three different real bridges. Our contribution ranges from the hardware to the graphical front end. The system's goal is to avoid the main limitations of WNs for SHM, particularly with regard to reliability, scalability, and synchronization. We reduce spatial jitter to 125 ns, far below the 120 μs required for high-precision acquisition systems and much better than the 10 μs of current solutions, without adding complexity. The system is scalable to a large number of nodes, allowing dense sensor coverage of real-world structures, limited only by a compromise between measurement length and the mandatory time to obtain the final result. The system addresses a myriad of problems encountered in a real deployment under difficult conditions, rather than a simulation or laboratory test bed.
Abstract:
This paper presents a multi-stage algorithm for the dynamic condition monitoring of a gear. The algorithm provides information on the gear status (faulty or normal condition) and estimates the mesh stiffness per shaft revolution if any abnormality is detected. In the first stage, the analysis of coefficients generated through the discrete wavelet transform (DWT) is proposed as a fault detection and localization tool. The second stage consists of establishing the mesh stiffness reduction associated with local failures by applying a supervised learning mode coupled with analytical models. To do this, a multi-layer perceptron neural network has been configured using as input features statistical parameters that are sensitive to torsional stiffness decrease and are derived from wavelet transforms of the response signal. The proposed method is applied to gear condition monitoring, and the results show that it can update the mesh dynamic properties of the gear online.
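The pipeline described above (wavelet decomposition, statistical features, neural network regression) can be sketched in a few lines of Python. This is not the paper's exact configuration: the wavelet, features, network size, and the synthetic signal and target are all placeholders.

```python
# Hedged sketch: DWT statistical features -> MLP estimate of mesh-stiffness reduction.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def dwt_features(signal, wavelet="db4", level=4):
    """RMS, peak and crest-factor statistics per DWT detail band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for d in coeffs[1:]:                            # skip the approximation band
        rms = np.sqrt(np.mean(d ** 2))
        peak = np.max(np.abs(d))
        feats += [rms, peak, peak / (rms + 1e-12)]
    return np.array(feats)

# Synthetic training set: the stiffness reduction scales a tone buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
X, y = [], []
for _ in range(200):
    reduction = rng.uniform(0.0, 0.5)               # 0 = healthy, 0.5 = severe
    sig = np.sin(2 * np.pi * 50 * t) + 3 * reduction * np.sin(2 * np.pi * 400 * t)
    sig += 0.2 * rng.standard_normal(t.size)
    X.append(dwt_features(sig))
    y.append(reduction)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(np.array(X), np.array(y))
print(model.predict(np.array(X[:3])), y[:3])
```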
Abstract:
The semiconductor laser diodes typically used in optical communication applications, when working as amplifiers, present under certain conditions optical bistability, which is characterized by abrupt switching between two different output states and an associated hysteresis cycle. This bistable behavior is strongly dependent on the frequency detuning between the frequency of the external optical signal injected into the semiconductor laser amplifier and its own emission frequency. This means that small changes in the wavelength of an optical signal applied to a laser amplifier cause relevant changes in the characteristics of its transfer function, in terms of the power required to achieve bistability and the width of the hysteresis. This strong dependence of the working characteristics of semiconductor laser amplifiers on frequency detuning suggests the use of this kind of device in optical sensing applications for optical communications, such as detecting shifts in the emission wavelength of a laser or detecting possible interference between adjacent channels in DWDM (Dense Wavelength Division Multiplexing) optical communication networks.
Abstract:
Installers and owners show a growing interest in following up the performance of their photovoltaic (PV) systems. Owners request reliable sources of information to ensure that their system is functioning properly, and installers are actively looking for efficient ways of providing them with the most useful information possible from the available data. Policy makers are becoming increasingly interested in knowledge of the real performance of PV systems and of the most frequent sources of problems they suffer, so as to be able to target the identified challenges properly. The scientific and industrial PV community also requires access to massive operational data to pursue further technological improvements.
Abstract:
The electrical power distribution and commercialization scenario is evolving worldwide, and electricity companies, faced with the challenge of new information requirements, are demanding IT solutions to deal with the smart monitoring of power networks. Two main challenges arise from the data management and smart monitoring of power networks: real-time data acquisition and big data processing over short time periods. We present a solution in the form of a system architecture that addresses real-time issues and has the capacity for big data management.
Abstract:
The increase in CPU power and screen quality of today's smartphones, as well as the availability of high-bandwidth wireless networks, has enabled high-quality mobile videoconferencing never seen before. However, adapting to the resulting variety of devices and network conditions is still not a trivial issue. In this paper, we present a multi-participant videoconferencing service that adapts to different kinds of devices and access networks while providing stable communication. By combining network quality detection with the use of a multipoint control unit for video mixing and transcoding, desktop, tablet and mobile clients can participate seamlessly. We also describe the cost, in terms of bandwidth and CPU usage, of this approach in a variety of scenarios.
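As a rough illustration of the adaptation decision such a service has to make, the sketch below picks a per-participant transcoding profile from a measured downlink estimate and the device class. It is not the service's implementation; the profiles, thresholds and field names are invented for the example.

```python
# Hedged sketch: choose a per-participant video profile for MCU transcoding.
PROFILES = [  # (name, width, height, kbps)
    ("mobile-low",  320, 180,  300),
    ("mobile-high", 640, 360,  800),
    ("desktop-sd",  854, 480, 1200),
    ("desktop-hd", 1280, 720, 2500),
]

def pick_profile(downlink_kbps: float, device: str) -> tuple:
    """Return the highest profile the client can sustain with ~20% headroom."""
    max_idx = 1 if device == "mobile" else len(PROFILES) - 1   # cap mobiles at 360p
    for name, w, h, kbps in reversed(PROFILES[: max_idx + 1]):
        if downlink_kbps >= kbps * 1.2:
            return (name, w, h, kbps)
    return PROFILES[0]

print(pick_profile(1000, "mobile"))    # -> mobile-high
print(pick_profile(4000, "desktop"))   # -> desktop-hd
```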
Abstract:
The deployment of home-based smart health services requires effective and reliable systems for personal and environmental data management. Cooperation between Home Area Networks (HAN) and Body Area Networks (BAN) can provide smart systems with ad hoc reasoning information to support health care. This paper details the implementation of an architecture that integrates BAN, HAN and intelligent agents to manage physiological and environmental data and proactively detect risk situations at the digital home. The system monitors dynamic situations and adjusts its behavior in a timely manner to detect health-related user risks. Thus, this work provides a reasoning framework to infer appropriate solutions in cases of health risk episodes. The proposed smart health monitoring approach integrates complex reasoning over the home environment, user profile and physiological parameters defined by a scalable ontology. As a result, health care demands can be detected in order to activate adequate internal mechanisms and notify public health services of the requested actions.
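To give a flavour of how BAN (physiological) and HAN (environmental) readings can be combined to flag a risk episode and a suggested action, the sketch below uses a few hand-written rules. It is illustrative only, not the paper's agent/ontology-based implementation; all thresholds, field names and actions are invented for the example.

```python
# Hedged sketch: rule-based fusion of BAN and HAN readings into risk alerts.
from dataclasses import dataclass

@dataclass
class Readings:
    heart_rate: int        # beats per minute (BAN)
    spo2: float            # blood oxygen saturation, % (BAN)
    room_temp: float       # degrees Celsius (HAN)
    user_at_home: bool     # presence inferred from HAN sensors

def assess_risk(r: Readings, profile_max_hr: int = 110):
    """Return (severity, suggested action) pairs for the current readings."""
    alerts = []
    if r.spo2 < 90:
        alerts.append(("high", "low SpO2 -> notify caregiver and health service"))
    if r.heart_rate > profile_max_hr and r.room_temp > 30:
        alerts.append(("medium", "tachycardia in a hot room -> lower thermostat, prompt user to rest"))
    if not r.user_at_home and r.heart_rate > profile_max_hr:
        alerts.append(("medium", "elevated heart rate away from home -> push mobile check-in"))
    return alerts or [("none", "no action")]

print(assess_risk(Readings(heart_rate=120, spo2=96.0, room_temp=32.0, user_at_home=True)))
```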