878 results for Condition Monitoring, Asset Management, Maintenance, Low Speed Machinery, Diagnostics
Abstract:
In recent years, there has been increasing interest in adopting emerging ubiquitous sensor network (USN) technologies for instrumentation within a variety of sustainability systems. USN is emerging as a sensing paradigm newly considered by the sustainability management field as an alternative to traditional tethered monitoring systems. Researchers have been discovering that USN is an exciting technology that should not be viewed simply as a substitute for tethered monitoring. In this study, we investigate how a movement-monitoring measurement system for a complex building was developed as a research environment for USN and related decision-support technologies. To address the apparent danger of building movement, agent-mediated communication concepts were designed to autonomously manage large volumes of exchanged information. We additionally detail the design of the proposed system, including its principles, data-processing algorithms, system architecture, and user-interface specifics. Results of the test and case study demonstrate the effectiveness of the USN-based data acquisition system for real-time monitoring of movement operations.
Abstract:
The cost of a road over its service life is a function of its design, the quality of construction, and maintenance strategies and operations. An optimal life-cycle cost for a road requires evaluation of all of these components. Unfortunately, road designers often neglect a very important aspect, namely the possibility to perform future maintenance activities; focus is mainly directed towards other aspects such as investment costs, traffic safety, aesthetic appearance, regional development and environmental effects. This doctoral thesis presents the results of a research project aimed at increasing consideration of road maintenance aspects in the planning and design process. The following subgoals were established: identify the obstacles that prevent adequate consideration of future maintenance during the road planning and design process; and examine optimisation of life-cycle costs as an approach towards increased efficiency during the road planning and design process. The research project started with a literature review evaluating the extent to which maintenance aspects are considered during road planning and design, as a potential means of improving maintenance efficiency. Efforts made by road authorities to increase efficiency, especially maintenance efficiency, were evaluated. The results indicated that all the evaluated efforts had one thing in common: they ignored the interrelationship between geometrical road design and maintenance as an effective tool to increase maintenance efficiency, and focused mainly on improving operating practices and maintenance procedures. This might also explain why some efforts to increase maintenance efficiency have been less successful. An investigation was conducted to identify the problems and difficulties that obstruct due consideration of maintainability during the road planning and design process.
A method called “Change Analysis” was used to analyse data collected during interviews with experts in road design and maintenance. The study indicated a complex combination of problems resulting in inadequate consideration of maintenance aspects when planning and designing roads. The identified problems were classified into six categories: insufficient consulting, insufficient knowledge, regulations and specifications that disregard maintenance aspects, insufficient planning and design activities, inadequate organisation, and demands from other authorities. Several urgent needs for change to eliminate these problems were identified. One of the problems identified in that study as an obstacle to due consideration of maintenance aspects during road design was the absence of a model for calculating life-cycle costs for roads. Because of this gap, the research project focused on implementing a new approach for calculating and analysing life-cycle costs for roads, with emphasis on the relationship between road design and road maintainability. Road barriers were chosen as an example, with the ambition of extending the approach to other road components at a later stage. A study was conducted to quantify repair rates for barriers and the associated repair costs, one of the major maintenance costs for road barriers. A method called “Case Study Research Method” was used to analyse the effect of several factors on barrier repair costs, such as barrier type, road type, posted speed and seasonal effects. The analyses were based on documented data from 1625 repairs conducted in four different geographical regions in Sweden during 2006. A model for calculating the average repair cost per vehicle kilometre was created. Significant differences in repair costs were found between the studied barrier types.
In another study, the injuries associated with road barrier collisions and the corresponding influencing factors were analysed, based on documented data from actual barrier collisions in Sweden between 2005 and 2008. The result was used to calculate the cost of injuries associated with barrier collisions as part of the socio-economic cost of road barriers. The results showed significant differences in the number of injuries associated with collisions with different barrier types. To calculate and analyse life-cycle costs for road barriers, a new approach was developed based on a method called “Activity-Based Life-Cycle Costing”. By modelling uncertainties, the presented approach makes it possible to identify and analyse the factors crucial for optimising life-cycle costs. The study showed great potential to increase road maintenance efficiency through road design. It also showed that road components with low investment costs might not be the best choice once maintenance and socio-economic aspects are included. The difficulties faced when collecting data for calculating life-cycle costs for road barriers indicated a great need to improve current data collection and archiving procedures. The research focused on Swedish road planning and design; however, the conclusions can be applied to other Nordic countries where weather conditions and road design practices are similar, and the general methodological approaches may also be applied to other studies.
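The activity-based life-cycle costing idea, with uncertainty modelled explicitly, can be sketched as a small Monte Carlo simulation. All numbers below (investment, repair rate, repair costs, traffic, discount rate) are invented for illustration and are not the thesis's data:

```python
import random

def barrier_lcc(investment, repair_rate_per_vkm, repair_cost_mean,
                repair_cost_sd, traffic_vkm_per_year, years,
                discount_rate, rng):
    """Discounted life-cycle cost of one barrier type: the investment plus
    yearly repair costs, where the cost per repair is uncertain and drawn
    from a normal distribution (truncated at zero)."""
    total = investment
    expected_repairs = repair_rate_per_vkm * traffic_vkm_per_year
    for year in range(1, years + 1):
        cost_per_repair = max(0.0, rng.gauss(repair_cost_mean, repair_cost_sd))
        total += expected_repairs * cost_per_repair / (1 + discount_rate) ** year
    return total

rng = random.Random(42)
# 1000 Monte Carlo draws for a hypothetical barrier on a 50 Mvkm/year road
samples = [barrier_lcc(100_000, 2e-7, 8_000, 2_000, 5e7, 40, 0.04, rng)
           for _ in range(1000)]
mean_lcc = sum(samples) / len(samples)
```

Comparing the resulting cost distributions across barrier types is what allows a low-investment option to lose to a costlier but more repair-friendly one, which is the point the abstract makes.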
Abstract:
A challenge for the clinical management of Parkinson's disease (PD) is the large within- and between-patient variability in symptom profiles as well as the emergence of motor complications which represent a significant source of disability in patients. This thesis deals with the development and evaluation of methods and systems for supporting the management of PD by using repeated measures, consisting of subjective assessments of symptoms and objective assessments of motor function through fine motor tests (spirography and tapping), collected by means of a telemetry touch screen device. One aim of the thesis was to develop methods for objective quantification and analysis of the severity of motor impairments being represented in spiral drawings and tapping results. This was accomplished by first quantifying the digitized movement data with time series analysis and then using them in data-driven modelling for automating the process of assessment of symptom severity. The objective measures were then analysed with respect to subjective assessments of motor conditions. Another aim was to develop a method for providing comparable information content as clinical rating scales by combining subjective and objective measures into composite scores, using time series analysis and data-driven methods. The scores represent six symptom dimensions and an overall test score for reflecting the global health condition of the patient. In addition, the thesis presents the development of a web-based system for providing a visual representation of symptoms over time allowing clinicians to remotely monitor the symptom profiles of their patients. The quality of the methods was assessed by reporting different metrics of validity, reliability and sensitivity to treatment interventions and natural PD progression over time. 
Results from two studies demonstrated that the methods developed for the fine motor tests performed well on these metrics, indicating that they are appropriate for quantitative and objective assessment of the severity of motor impairments in PD patients. The fine motor tests captured different symptoms: spiral drawing impairment and tapping accuracy related to dyskinesias (involuntary movements), whereas tapping speed related to bradykinesia (slowness of movement). A longitudinal data analysis indicated that the six symptom dimensions and the overall test score contained important elements of the information in the clinical scales and can be used to measure the effects of PD treatment interventions and disease progression. A usability evaluation of the web-based system showed that the information presented was comparable to qualitative clinical observations, and the system was recognized as a tool that will assist in the management of patients.
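As a sketch of the time-series quantification step, simple features of a tapping test can be derived from raw tap timestamps. The feature names and the synthetic data are illustrative only, not the thesis's actual measures:

```python
import statistics

def tapping_features(tap_times_s):
    """Derive simple time-series features from tap timestamps (seconds).
    Tap rate is a plausible proxy for bradykinesia (slowness), while
    interval variability can reflect irregular movement; both are
    illustrative stand-ins for the thesis's data-driven measures."""
    intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    tap_rate_hz = len(intervals) / (tap_times_s[-1] - tap_times_s[0])
    # coefficient of variation of inter-tap intervals (0 = perfectly regular)
    interval_cv = statistics.stdev(intervals) / statistics.mean(intervals)
    return {"tap_rate_hz": tap_rate_hz, "interval_cv": interval_cv}

steady = [i * 0.25 for i in range(21)]   # perfectly regular 4 Hz tapping
feat = tapping_features(steady)
```

Features like these, computed per test session, would then feed the data-driven severity models the abstract describes.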
Abstract:
In this paper we present a versatile and easy-to-assemble measurement system for structural health monitoring (SHM) based on the electromechanical impedance (EMI) technique. The hardware of the proposed system consists only of a common data acquisition (DAQ) device with external resistors and allows real-time data acquisition from multiple sensors. Besides its low cost compared to conventional impedance analyzers, the hardware and software are simple and easier to implement than other recently proposed measurement systems.
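The measurement principle behind such a DAQ-plus-resistor setup can be sketched as a voltage divider: a known series resistor sits between the excitation output and the piezoelectric transducer, and the transducer impedance is recovered from the two measured voltage phasors. This is a generic illustration of the EMI idea under assumed values, not the paper's exact circuit:

```python
def electromechanical_impedance(v_excitation, v_sensor, r_series):
    """Recover the transducer impedance from a series-resistor divider.
    v_excitation and v_sensor are complex phasors at a single frequency,
    e.g. taken from the FFT bins of the two sampled DAQ channels."""
    current = (v_excitation - v_sensor) / r_series  # same current through both
    return v_sensor / current

# Illustrative values: purely resistive 5 kOhm transducer, 10 kOhm series resistor
z = electromechanical_impedance(complex(3.0, 0.0), complex(1.0, 0.0), 10_000.0)
```

Sweeping the excitation frequency and tracking changes in the impedance signature (commonly its real part) is how the EMI technique detects structural damage.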
Abstract:
This edition of the Bulletin deals with road maintenance, funds and fund management. Among other things, it emphasizes the need to manage road funds in accordance with clear performance rules that seek to minimize maintenance costs and ensure that the road network is maintained in an appropriate condition.
Abstract:
The Italian territory is extraordinarily rich in cultural heritage, both movable and immovable. This patrimony of great importance must be managed and protected in the best possible way and with adequate tools, also in relation to the problems tied to it in terms of maintenance and of safeguarding against the risk factors to which it may be exposed. For a good knowledge of the cultural heritage, a preliminary acquisition of information conducted in a systematic and unified manner is fundamental: widespread and organic, but also useful for a preventive evaluation and for the subsequent planning of conservative restoration interventions. In this context, the use of geomatic techniques and technologies in the cultural heritage field can provide a valid contribution, ranging from the cataloguing and documentation of a cultural asset to its control and monitoring. Nowadays, the growing development of new digital technologies, accompanied by the remarkable advances of the geomatics disciplines (above all surveying and photogrammetry), makes an effective integration of various techniques possible, favoured also by the spread of solutions for data interchange and for communication between different devices. The study presented in this thesis aims to explore the aspects related to the use of geomatics techniques and technologies in order to highlight the condition of an asset and its state of decay. For the management and safeguarding of a cultural asset, the SIT Carta del Rischio (Risk Map GIS) is presented, which highlights the hazards affecting the heritage and shows how these, combined with the vulnerability of an individual asset, contribute to determining its degree of risk.
Abstract:
Therapeutic drug monitoring (TDM) comprises measuring drug concentrations in blood and relating the results to the patient's clinical presentation, on the assumption that blood concentrations correlate better with effect than dose does. This also applies to antidepressants. Prerequisites for guiding therapy with TDM are the availability of valid laboratory assays and the correct application of the procedure in the clinic. The aim of this work was to analyse and improve the use of TDM in the treatment of depression. In a first step, a high-performance liquid chromatography (HPLC) method with column switching and spectrophotometric detection was established for the newly approved antidepressant duloxetine and applied to patients for TDM. Analysis of 280 patient samples showed that duloxetine concentrations of 60 to 120 ng/ml were associated with good clinical response and a low risk of side effects. Regarding its interaction potential, duloxetine proved to be a weak inhibitor of the cytochrome P450 (CYP) isoenzyme 2D6 compared with other antidepressants; there was no indication of clinical relevance. In a second step, a method was to be developed capable of measuring as many different antidepressants, including their metabolites, as possible. To this end, an HPLC method with ultraviolet (UV) detection was developed that allowed the quantitative analysis of ten antidepressant and two antipsychotic substances within 25 minutes, with sufficient precision and accuracy (both above 85%) and sensitivity. Column switching enabled automated analysis of blood plasma or serum; interfering matrix components could be separated on a pre-column without prior sample preparation.
This cost- and time-effective procedure was a clear improvement for handling samples in routine laboratory work, and thus for the TDM of antidepressants. An analysis of the clinical use of TDM identified a number of application errors. An attempt was therefore made to improve the clinical application of TDM of antidepressants by switching from largely manual documentation to electronic processing, and the work examined what effect this intervention achieved. A laboratory information system was introduced through which the entire process, from sample receipt to reporting of results to the wards, was handled electronically, and the use of TDM before and after the changeover was examined. The changeover was well accepted by the treating physicians. The laboratory system allowed cumulative retrieval of findings and a display of each patient's treatment course, including previous hospital stays. However, implementing the system had only a small effect on the quality of TDM use. Many requests remained erroneous before and after the introduction of the system; for example, measurements were frequently requested before steady state had been reached. The speed of sample processing was unchanged compared with the previous manual workflow, as was the analytical quality in terms of accuracy and precision. Dosing recommendations issued for the requested substances were frequently ignored. However, the mean latency between reporting of laboratory results and a subsequent dose adjustment was shortened. Overall, this work contributed to improving the therapeutic drug monitoring of antidepressants. In clinical practice, however, interventions are needed to minimise application errors in the TDM of antidepressants.
Abstract:
Wireless Sensor Networks (WSNs) offer a new solution for distributed monitoring, processing and communication. A first challenge is the stringent energy constraint to which sensing nodes are typically subjected: WSNs are often battery powered and placed where it is not possible to recharge or replace batteries. Energy can be harvested from the external environment, but it is a limited resource that must be used efficiently, and energy efficiency is a key requirement of a credible WSN design. From the power source's perspective, aggressive energy management techniques remain the most effective way to prolong the lifetime of a WSN. A new adaptive algorithm is presented that minimizes the consumption of wireless sensor nodes in sleep mode when the power source has to be regulated using DC-DC converters. Another important aspect addressed is time synchronisation in WSNs, which are used for real-world applications where physical time plays an important role. An innovative low-overhead synchronisation approach is presented, based on a Temperature Compensation Algorithm (TCA). The last aspect addressed relates to self-powered WSNs with Energy Harvesting (EH) solutions. Wireless sensor nodes with EH require some form of energy storage, which enables systems to continue operating during periods of insufficient environmental energy; however, the size of the energy storage strongly restricts the use of WSNs with EH in real-world applications. A new approach is presented that enables computation to be sustained during intermittent power supply. The discussed approaches are applied to real-world WSN applications. The first scenario draws on experience gathered during a European project (the 3ENCULT Project) on the design and implementation of an innovative network for monitoring heritage buildings.
The second scenario draws on experience with Telecom Italia in the design of smart energy meters for monitoring the usage of household appliances.
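The idea behind temperature-compensated timekeeping can be illustrated with the well-known parabolic drift model of 32.768 kHz tuning-fork crystals. The constants below are typical datasheet values and the whole sketch is an assumption about the general technique, not the thesis's TCA:

```python
def compensated_ticks(raw_ticks, temp_c, k_ppm_per_c2=-0.034, turnover_c=25.0):
    """Correct a node's raw tick count using its measured temperature.
    A tuning-fork crystal drifts roughly as ppm = k * (T - T0)^2, with
    k about -0.034 ppm/degC^2 and turnover T0 about 25 degC (typical
    datasheet figures, assumed here for illustration)."""
    drift_ppm = k_ppm_per_c2 * (temp_c - turnover_c) ** 2
    # Negative drift means the crystal runs slow, so scale the count up.
    return raw_ticks * (1.0 - drift_ppm * 1e-6)

ticks_25 = compensated_ticks(1_000_000, 25.0)  # at turnover: no correction
ticks_0 = compensated_ticks(1_000_000, 0.0)    # cold node: count scaled up
```

Applying such a correction locally reduces how often nodes must exchange synchronisation messages, which is the low-overhead property the abstract highlights.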
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This has led to the rapid spread of just-in-time logistic concepts aimed at minimizing stock while maintaining high availability of products. These competing goals call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In recent decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models limited to representing and optimizing maintenance strategies in the light of availability fail, and a novel approach incorporating all financially relevant processes of and around a production system is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extendability.
Within these modules, all aspects of the production process are modelled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modelled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model; the production model was therefore reduced to a black box with a lower degree of detail.
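The shift from optimizing availability to optimizing profitability, including the over-maintaining effect, can be illustrated with a toy simulation. Every parameter here is invented for the sketch; it is not the thesis's model:

```python
import random

def simulate_profit(pm_interval_h, hours=10_000.0, seed=0):
    """Toy Monte Carlo of one machine: preventive maintenance (PM) resets
    Weibull wear-out but costs time and money, and each PM can itself
    induce a failure (the over-maintaining effect). Returns (profit,
    availability) so the two objectives can be compared directly."""
    rng = random.Random(seed)
    scale, shape = 400.0, 2.0                # Weibull wear-out (assumed)
    repair_h, pm_h = 24.0, 4.0               # downtime per repair / per PM
    revenue_per_h, repair_cost, pm_cost = 100.0, 5_000.0, 200.0
    induced_fail_p = 0.05                    # chance a PM causes a failure
    t = uptime = profit = 0.0
    while t < hours:
        ttf = rng.weibullvariate(scale, shape)  # time to failure since service
        if ttf < pm_interval_h:                 # machine fails before next PM
            t += ttf + repair_h
            uptime += ttf
            profit += ttf * revenue_per_h - repair_cost
        else:                                   # PM happens first, resets wear
            t += pm_interval_h + pm_h
            uptime += pm_interval_h
            profit += pm_interval_h * revenue_per_h - pm_cost
            if rng.random() < induced_fail_p:   # maintenance-induced failure
                t += repair_h
                profit -= repair_cost
    return profit, uptime / t

profit_freq, avail_freq = simulate_profit(50.0)   # very frequent PM
profit_mod, avail_mod = simulate_profit(300.0)    # moderate PM interval
```

Sweeping the PM interval and ranking policies by profit rather than by availability illustrates why the two objectives can favour different maintenance strategies.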
Abstract:
OBJECTIVE: Maintenance of good walking speed is essential to independent living, and people with musculoskeletal disease often have reduced walking speed. We investigated determinants of slower walking, other than musculoskeletal disease, that might provide valuable additional targets for therapy. METHODS: We analyzed data from the Somerset and Avon Survey of Health, a community-based survey of people aged over 35 years. A total of 2703 participants who reported hip or knee pain at baseline (1994/1995) were studied and reassessed in 2002-2003; 1696 were available for followup, and walking speed was tested in 1074. Walking speed (m/s) was used as the outcome measure. Baseline characteristics, including comorbidities and socioeconomic factors, were tested for their ability to predict reduced walking speed using multiple linear regression analysis. RESULTS: Age, female sex, and immobility at baseline were predictive of slower walking speed. Other independent risk factors included the presence of cataract, low socioeconomic status, intermittent claudication, and other cardiovascular conditions. Having a cataract was associated with a decrease of 0.10 m/s (95% CI 0.03, 0.16). Those in social class V had a walking speed 0.22 m/s (95% CI 0.126, 0.31) slower than those in social class I. CONCLUSION: Comorbidities, age, female sex, and lower socioeconomic position determine walking speed in people with joint pain. Issues such as poor vision and socioeconomic disadvantage may add to the effect of musculoskeletal disease, suggesting the need for a holistic approach to the management of these patients.
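The multiple linear regression step can be sketched in plain Python via the normal equations. The predictors and coefficients below are invented purely to generate synthetic data; they are not the study's estimates:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    cols = list(zip(*X))                                   # design-matrix columns
    A = [[sum(p * q for p, q in zip(c1, c2)) for c2 in cols] for c1 in cols]
    v = [sum(p * q for p, q in zip(c, y)) for c in cols]
    n = len(A)
    for i in range(n):                                     # forward elimination
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            v[r] -= f * v[i]
    beta = [0.0] * n
    for i in reversed(range(n)):                           # back substitution
        beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# Synthetic data: speed = 1.5 - 0.005*age - 0.10*cataract (illustrative only)
rows = [(1.0, age, cat) for age in (40, 55, 70, 85) for cat in (0, 1)]
speeds = [1.5 - 0.005 * a - 0.10 * c for _, a, c in rows]
beta = ols(rows, speeds)
```

Fitting all predictors jointly is what makes each coefficient an "independent" effect, e.g. the cataract decrement after adjusting for age.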
Abstract:
Water springs are the principal source of water for many localities in Central America, including the municipality of Concepción Chiquirichapa in the Western Highlands of Guatemala. Long-term monitoring records are critical for informed water management as well as resource forecasting, though data are scarce and monitoring in low-resource settings presents special challenges. Spring discharge was monitored monthly in six municipal springs during the author’s Peace Corps assignment, from May 2011 to March 2012, and water level height was monitored in two spring boxes over the same time period using automated water-level loggers. The intention of this approach was to circumvent the need for frequent and time-intensive manual measurement by identifying a fixed relationship between discharge and water level. No such relationship was identified, but the water level record reveals that spring yield increased for four months following Tropical Depression 12E in October 2011. This suggests that the relationship between extreme precipitation events and long-term water spring yields in Concepción should be examined further. These limited discharge data also indicate that aquifer baseflow recession and catchment water balance could be successfully characterized if a long-term discharge record were established. This study also presents technical and social considerations for selecting a methodology for spring discharge measurement and highlights the importance of local interest in conducting successful community-based research in intercultural low-resource settings.
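The search for a fixed discharge-water-level relationship can be sketched as fitting a power-law rating curve Q = a·h^b by regression on log-transformed data; a high r² would let logged water level stand in for manual discharge measurement. The data below are synthetic, chosen only to show the mechanics (the study itself found no stable relationship):

```python
import math

def fit_power_rating_curve(stages_m, discharges_lps):
    """Fit Q = a * h**b by least squares in log-log space.
    Returns (a, b, r_squared); r_squared near 1 would indicate a
    usable stage-discharge relation."""
    xs = [math.log(h) for h in stages_m]
    ys = [math.log(q) for q in discharges_lps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    a = math.exp(my - b * mx)
    return a, b, (sxy * sxy) / (sxx * syy)

stages = [0.10, 0.15, 0.20, 0.30]                    # water levels, m
flows = [2000.0 * h ** 1.5 for h in stages]          # synthetic Q = 2000*h^1.5
a, b, r2 = fit_power_rating_curve(stages, flows)
```

With real spring-box data, a poor r² (as the study observed) means the logger record cannot replace manual discharge measurement, though it can still reveal events such as the post-storm yield increase.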
Abstract:
INTRODUCTION: Out-migration from mountain areas is leaving behind half-families and the elderly to manage the land alongside daily life challenges. A potential reduction of the labour force, as well as of expertise in cropping practices, maintenance of terraces and irrigation canals, slope stabilization, grazing, forest and other land management practices, is further challenged by changing climate conditions and increased environmental threats. An understanding of the resilience of managed land resources, needed to enhance adaptation to environmental and socio-economic variability, and evidence of the impact of Sustainable Land Management (SLM) on the mitigation of environmental threats have so far not been sufficiently addressed. The study presented here aims to find out how land management in mountains is being affected by migration in the context of natural hazards and climate change at two study sites, namely Quillacollo District of Bolivia and the Panchase area of Western Nepal, and which measures are needed to increase the resilience of livelihoods and land management practices. The presentation includes draft results from the first fieldwork periods at both sites. A context of high vulnerability: according to UNISDR, vulnerability is defined as “the characteristics and circumstances of a community, system or asset that make it susceptible to the damaging effects of a hazard”. Hazards are a further threat to people's livelihoods in mountainous areas; they can be either natural or human induced, and landslides, debris flows and floods all affect people. Good land management can significantly reduce the occurrence of hazards; conversely, poor land management or land abandonment can have negative consequences for the land and thus again increase the vulnerability of people's livelihoods. METHODS: The study integrates bio-physical and socio-economic data through a case study as well as a mapping approach.
From the social sciences, well-tested participatory qualitative methodologies typically used in Vulnerability and Capacity Analyses are applied, such as semi-structured interviews with so-called ‘key informants’, transect walks, and participatory risk and social resource mapping. The bio-physical analysis of the current environmental conditions determining hazards and structural vulnerability is obtained from remote sensing analysis, fieldwork studies, and GIS analysis. The assessment of the consequences of migration in the area of origin is linked with a mapping and appraisal of land management practices (www.wocat.net, Schwilch et al., 2011). The WOCAT mapping tool (WOCAT/LADA/DESIRE 2008) captures the major land management practices and technologies, their spread, effectiveness and impact within a selected area. Data drawn from a variety of sources are compiled and harmonised by a team of experts, consisting of land degradation and conservation specialists working in consultation with land users from various backgrounds. The specialists' and land users' knowledge is combined with existing datasets and documents (maps, GIS layers, high-resolution satellite images, etc.) in workshops designed to build consensus regarding the variables used to assess land degradation and SLM. This process is also referred to as participatory expert assessment or consensus mapping. The WOCAT mapping and SLM documentation methodologies are used together with participatory mapping and other socio-economic data collection (interviews, questionnaires, focus group discussions, expert consultation) to combine information about migration types and land management issues. GIS and other spatial visualization tools (e.g. Google Maps) help to represent and understand these links. FIRST RESULTS: Nepal. In Nepal, migration is a common strategy to improve livelihoods.
Migrants are mostly men, and they migrate to other Asian countries, first to India and then to the Gulf countries. Only a few women migrate abroad; women migrate mainly to the main Nepali cities when they can afford it. Remittances are used primarily for food and education, and hardly at all for agricultural purposes. While traditional agriculture is being maintained, only a few new practices are emerging, such as vegetable farming or agroforestry. Land abandonment is a growing consequence of out-migration, resulting in the spread of invasive species; however, most impacts of migration on land management are not yet clear. Education is a major concern for the respondents: they want their children to have a better education and better opportunities. Linked to this, unemployment is another major concern, which in turn is “solved” through out-migration. Bolivia. Migration is a common livelihood strategy in Bolivia. In the study area, whole families are migrating down to the cities of the valleys or to other departments of Bolivia, especially to Chapare (tropics) for coca production and to Santa Cruz. Some young people migrate abroad, mostly to Argentina. There are few remittances, and when they are sent to families in the mountain areas they are mainly used for agricultural purposes. The impacts of migration on land management practices are not clear, although some important aspects can be underlined. People who move down are still using their land and come back during part of the week to work on it. As a consequence of this multi-residency, there is a tendency to reduce land management work or to change the way the land is used. As in Nepal, education is a very important issue in this area. There is no secondary school, and only one community has a primary school, so after the 6th grade students have to go down to the valley towns to study.
The lack of basic education is pushing more and more people to move down and leave the mountains. CONCLUSIONS: This study is ongoing; more data have to be collected to clearly assess the impacts of out-migration on land management in mountain areas. The first results nevertheless allow a few interesting findings to be presented. The two case studies are very different, yet in both areas young people no longer stay in the mountains, leaving behind half-families and the elderly to manage the land. In both cases education is a major reason for moving out, even though the causes are not always the same. More specifically, in the case of Nepal, the use of remittances underlines the fact that investment in agriculture is not a family's first choice. In the case of Bolivia, people were found to continue working their land even after moving down. The further steps of the study will explore these issues in more detail. REFERENCES: Schwilch G., Bestelmeyer B., Bunning S., Critchley W., Herrick J., Kellner K., Liniger H.P., Nachtergaele F., Ritsema C.J., Schuster B., Tabo R., van Lynden G., Winslow M. 2011. Experiences in Monitoring and Assessment of Sustainable Land Management. Land Degradation & Development 22 (2), 214-225. doi:10.1002/ldr.1040. WOCAT/LADA/DESIRE 2008. A Questionnaire for Mapping Land Degradation and Sustainable Land Management. Liniger H.P., van Lynden G., Nachtergaele F., Schwilch G. (eds), Centre for Development and Environment, Institute of Geography, University of Berne, Berne.
Abstract:
Integrated pest management is a viable alternative to traditional pest control methods. A paired sample design was utilized to measure the effect of IPM education on the number of cockroaches in a 200 unit, seven story public housing building for the elderly in Houston, TX. Glue traps were placed in 71 randomly selected apartments (5traps/unit) and left in place for two nights. Baseline cockroach counts were shared with the property manager, maintenance/janitorial staff, service coordinator, pest control professional and tenant representatives at the end of a one day “Integrated Pest Management in Multi-Family Housing” training course.^ There was a significant decrease in the average number of cockroaches after IPM education and implementation of IPM principles (P < 0.0003). Positive changes in behavior by members of the IPM team and changes in the housing authority operational plan were also found. Paired t-tests comparing the difference between mean cockroach counts at baseline and follow-up by location within the apartment all demonstrated a significant decrease in the number of cockroaches.^ Results supported the premise that IPM education and the implementation of IPM principles are effective measures to change pest control behaviors and control cockroaches. Cockroach infestations in multi-story housing are not solely determined by the actions of individual tenants. The actions of other residents, property managers and pest control professionals are also important factors in pest control.^ Findings support the implementation of IPM education and the adoption of IPM practices by public housing authorities. 
This study adds to existing evidence that clear communication of policies, a team approach, and a commitment to ongoing inspection and monitoring of pests, combined with corrective action to eliminate food, water and harborage and the judicious use of low-risk pesticides, have the potential to improve the living conditions of elderly residents in public housing.
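The study's before/after comparison rests on the paired t-test: the same apartments are trapped at baseline and at follow-up, and the test is run on the per-apartment differences. A minimal sketch with hypothetical counts (these numbers are illustrative, not the study's data):

```python
import math
from statistics import mean, stdev

# Hypothetical before/after cockroach counts for ten apartments
# (illustrative numbers only, not the study's data).
baseline = [12, 30, 7, 45, 22, 9, 18, 27, 33, 14]
followup = [3, 11, 2, 20, 8, 1, 6, 10, 15, 5]

# Paired t-test: work on the per-apartment differences so each unit
# serves as its own control.
diffs = [b - f for b, f in zip(baseline, followup)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# With df = 9, the two-sided 5% critical value is about 2.262;
# a t statistic above it indicates a significant drop in counts.
print(round(t_stat, 2))
```

Pairing matters here because infestation levels vary enormously between apartments; differencing removes that between-unit variation before testing.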
Resumo:
The ever-growing interest in wireless sensor networks is easily understood by considering what they essentially are: a large number of small, self-powered sensor nodes that gather information or detect special events and communicate wirelessly, with the final goal of delivering their processed data to a base station. The sensor nodes are densely deployed within the area of interest, can be deployed at random and have cooperative capabilities. These devices are usually small and inexpensive, so they can be produced and deployed in large numbers, although their resources in terms of energy, memory, computational speed and bandwidth are severely constrained. Sensing, processing and communication are three key elements whose combination in one tiny device enables a vast number of applications. Sensor networks provide endless opportunities but at the same time pose formidable challenges, such as extracting the maximum performance from energy that is scarce and usually non-renewable. However, recent advances in large-scale integration, embedded computing and communication hardware, and in general the convergence of computing and communications, are making this emerging technology a reality. Likewise, advances in nanotechnology are beginning to push everything toward networks of tiny distributed sensors and actuators. There are different types of sensors, such as pressure sensors, accelerometers, cameras, thermal sensors or a simple microphone.
They monitor conditions at different locations, such as temperature, humidity, movement, lighting, pressure, soil composition, noise levels, the presence or absence of certain kinds of objects, mechanical stress levels on attached objects, and momentary characteristics such as the speed, direction and size of an object. The state of wireless sensor networks will be reviewed, along with the best-known protocols. Radio frequency identification (RFID) will also be examined, since it is becoming increasingly present and important. RFID has a crucial role to play in the future for businesses and individuals alike. The worldwide impact of wireless identification is exerting strong pressure on RFID technology, research and development services, standards development, security and privacy compliance, and much more. Its economic potential has been demonstrated in some countries, while others are merely at the planning or pilot stage; it has yet to take hold or develop through the modernisation of business models and applications in order to have a greater impact on society. The possible applications of sensor networks are of interest to most fields. Environmental monitoring, warfare, child education, surveillance, micro-surgery and agriculture are only a few examples of the many fields in which these networks have a place. The United States of America is probably the country that has researched this area the most, so many of the proposed solutions we will see come from that country. Universities such as Berkeley, UCLA (University of California, Los Angeles) and Harvard, and companies such as Intel, lead this research. But the USA is not alone in using and researching wireless sensor networks.
The University of Southampton, for example, is developing technology to monitor glacier behaviour using sensor networks, contributing to fundamental research in both glaciology and wireless sensor networks. Likewise, Coalesenses GmbH (Germany) and ETH Zurich are working on diverse applications of wireless sensor networks in numerous areas. A Spanish solution has been chosen for closer examination because it is innovative, adaptable and versatile. This study of the sensor has focused mainly on traffic applications, but one should not forget the list of more than 50 different applications published by the firm that created this specific sensor. There are currently many vehicle surveillance technologies, including loop sensors, video cameras, image sensors, infrared sensors, microwave radar, GPS, etc. Their performance is acceptable but not sufficient, because of their limited coverage and the high costs of implementation and, especially, maintenance. They have shortcomings such as line-of-sight requirements, low accuracy, strong dependence on environment and weather, the impossibility of performing maintenance without interrupting measurements, degraded night-time operation for many of them, and high installation and maintenance costs. Consequently, in real traffic applications the received data are insufficient or poor in real-time terms because of the small number of detectors and their cost. With the increase of vehicles on urban road networks, vehicle detection technologies face new demands. Wireless sensor networks are currently one of the most advanced technologies and a revolution in remote information sensing and collection applications. Their application prospects in intelligent transportation systems are very broad.
To this end, a target localization and counting program using a network of binary sensors has been developed. This allows the sensor to need much less energy when transmitting information and makes the devices more independent, with the aim of better traffic control. The application focuses on the efficacy of sensor collaboration in tracking rather than on the communication protocols used by the sensor nodes. Holiday departure and return traffic is a good example of why it is necessary to keep count of the cars on the roads. A Matlab simulation has therefore been developed with the goal of localizing and counting targets with a network of binary sensors. This program could be implemented on the sensor developed by Libelium, the company behind the sensor that will be examined in depth. The promising results obtained indicate that binary proximity sensors can form the basis of a robust architecture for wide-area surveillance and target tracking. When the targets' movement is sufficiently smooth, without abrupt changes of trajectory, the ClusterTrack algorithm provides excellent performance in identifying and tracking the trajectories of the designated targets. This algorithm could, of course, be used for numerous applications, and this line of work could be pursued in future research. It is not surprising that binary proximity sensor networks have attracted a lot of attention lately, since despite the minimal information a binary proximity sensor provides, networks of this type can track all kinds of targets with sufficient accuracy.
Abstract The increasing interest in wireless sensor networks can be promptly understood simply by thinking about what they essentially are: a large number of small, self-powered sensing nodes which gather information or detect special events and communicate in a wireless fashion, with the end goal of handing their processed data to a base station. The sensor nodes are densely deployed inside the phenomenon, can be deployed at random and have cooperative capabilities. Usually these devices are small and inexpensive, so that they can be produced and deployed in large numbers, although their resources in terms of energy, memory, computational speed and bandwidth are severely constrained. Sensing, processing and communication are three key elements whose combination in one tiny device gives rise to a vast number of applications. Sensor networks provide endless opportunities, but at the same time pose formidable challenges, such as the fact that energy is a scarce and usually non-renewable resource. However, recent advances in low-power Very Large Scale Integration, embedded computing, communication hardware and, in general, the convergence of computing and communications, are making this emerging technology a reality. Likewise, advances in nanotechnology and Micro-Electro-Mechanical Systems are pushing toward networks of tiny distributed sensors and actuators. There are different sensors such as pressure, accelerometer, camera, thermal, and microphone. They monitor conditions at different locations, such as temperature, humidity, vehicular movement, lighting conditions, pressure, soil makeup, noise levels, the presence or absence of certain kinds of objects, mechanical stress levels on attached objects, and momentary characteristics such as the speed, direction and size of an object. The state of Wireless Sensor Networks will be reviewed, along with the best-known protocols.
As Radio Frequency Identification (RFID) is becoming extremely widespread and important nowadays, it will be examined as well. RFID has a crucial role to play going forward, for businesses and individuals alike. The impact of wireless identification is exerting strong pressure on RFID technology and services, research and development, standards development, security and privacy compliance, and much more. Its economic value has been proven in some countries, while others are just at the planning or pilot stage, but wider usage has yet to take hold through the modernisation of business models and applications. Possible applications of sensor networks are of interest to the most diverse fields. Environmental monitoring, warfare, child education, surveillance, micro-surgery and agriculture are only a few examples. Some real hardware applications in the United States of America will be reviewed, as it is probably the country that has researched this area the most. Universities like Berkeley, UCLA (University of California, Los Angeles) and Harvard, and enterprises such as Intel, are leading those investigations. But the USA is not alone in using and investigating wireless sensor networks. The University of Southampton, for example, is developing technology to monitor glacier behaviour using sensor networks, contributing to fundamental research in both glaciology and wireless sensor networks. Coalesenses GmbH (Germany) and ETH Zurich are also applying wireless sensor networks in many different areas. A Spanish solution will be examined more thoroughly for being innovative, adaptable and multipurpose. This study of the sensor has focused mainly on traffic applications, but one should not forget the compilation of more than 50 different applications published by this specific sensor's firm.
Currently there are many vehicle surveillance technologies, including loop sensors, video cameras, image sensors, infrared sensors, microwave radar, GPS, etc. Their performance is acceptable but not sufficient, because of their limited coverage and the expensive costs of implementation and, especially, maintenance. They have defects such as line-of-sight requirements, low accuracy, strong dependence on environment and weather, the inability to operate non-stop through day and night, and high installation and maintenance costs. Consequently, in actual traffic applications the received data are insufficient or poor in real-time terms, owing to the limited number of detectors and their cost. With the increase of vehicles in urban road networks, vehicle detection technologies are confronted with new requirements. Wireless sensor networks are a state-of-the-art technology and a revolution in remote information sensing and collection applications, with broad application prospects in intelligent transportation systems. An application for target tracking and counting using a network of binary sensors has been developed. This allows the appliance to spend much less energy when transmitting information and makes the devices more independent, in order to achieve better traffic control. The application is focused on the efficacy of collaborative tracking rather than on the communication protocols used by the sensor nodes. Holiday crowds are a good case in which it is necessary to keep count of the cars on the roads. To this end a Matlab simulation has been produced for target tracking and counting using a network of binary sensors that could, for example, be implemented in Libelium's solution. Libelium is the enterprise that developed the sensor that will be examined in depth.
The promising results obtained indicate that binary proximity sensors can form the basis of a robust architecture for wide-area surveillance and tracking. When the target paths are smooth enough, the ClusterTrack particle filter algorithm gives excellent performance in identifying and tracking different target trajectories. This algorithm could, of course, be used for other applications, and this line of work could be pursued in future research. It is not surprising that binary proximity sensor networks have attracted a lot of attention lately: despite the minimal information a binary proximity sensor provides, networks of these sensing modalities can track many different classes of targets with sufficient accuracy.
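The core idea behind binary proximity tracking can be sketched in a few lines. This is not the ClusterTrack particle filter of the thesis but a much simpler stand-in under stated assumptions (a hypothetical 4x4 grid of sensors with a made-up sensing radius): each sensor reports only whether the target is within range, and the position estimate is the centroid of the triggered sensors.

```python
import math

# Binary proximity localization sketch: sensors give 1-bit readings
# (in range / not in range); the target estimate is the centroid of
# the triggered sensors. Grid layout and radius are hypothetical.
SENSING_RADIUS = 1.5
sensors = [(x, y) for x in range(4) for y in range(4)]  # 4x4 unit grid

def triggered(target):
    """Sensors whose binary reading is 1 for this target position."""
    return [s for s in sensors if math.dist(s, target) <= SENSING_RADIUS]

def estimate(target):
    """Centroid of triggered sensors as a coarse position estimate."""
    hits = triggered(target)
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

true_pos = (1.3, 2.1)
est = estimate(true_pos)
err = math.dist(est, true_pos)
print(est, err)
```

Even though each sensor contributes a single bit, the overlap pattern of several sensing disks pins the target down to a fraction of the inter-sensor spacing, which is why such minimal hardware suffices for coarse tracking and counting.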
Resumo:
Temperature is a concern that plays a leading role in the design of modern integrated circuits. The significant increase in power densities brought by the latest technology generations has produced thermal gradients and hot spots during the normal operation of chips. Temperature has a negative impact on several integrated-circuit parameters, such as gate delay, heat-dissipation budgets, reliability, power consumption, etc. To fight these harmful effects, dynamic thermal management (DTM) techniques adapt the chip's behaviour according to the information provided by a monitoring system that measures, at run time, the thermal state of the die surface. The field of on-chip temperature monitoring has attracted the attention of the scientific community in recent years and is the subject of this thesis. This thesis addresses on-chip temperature monitoring from different perspectives and levels, offering solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two new temperature sensors specifically designed for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the relationship between leakage currents and temperature. In short, a circuit node is charged and then left floating so that it discharges through the leakage currents of a transistor; the node's discharge time is the pulse width. Since the pulse width shows an exponential dependence on temperature, the conversion to a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output.
The structure resulting from this combination of elements is implemented in a 0.35 µm technology. The sensor occupies a very small area, 10,250 nm², and consumes very little power, 1.05-65.5 nW at 5 samples/s; these figures surpassed all previous works at the time of first publication and, at the time of publication of this thesis, surpass all earlier implementations fabricated in the same technology node. As for accuracy, the sensor offers good linearity even without calibration, achieving a 3σ error of 1.97 °C, adequate for DTM applications. As explained, the sensor is fully compatible with CMOS fabrication processes; this fact, together with its small area and low consumption, makes it especially suitable for integration into a DTM monitoring system with a set of embedded monitors distributed across the chip. The growing process uncertainties associated with the latest technology nodes compromise the linearity of our first sensor proposal. To overcome these problems, we propose a new technique for obtaining the temperature. The new technique is also based on the thermal dependence of the leakage currents used to discharge a floating node. The novelty is that the measurement is now given by the ratio of two different measures, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio turns out to be very robust against process variations and, moreover, the linearity obtained amply meets the requirements imposed by DTM policies: a 3σ error of 1.17 °C considering process variations and two-point calibration.
Implementing the sensing part of this new technique involves several design considerations, such as the generation of a process-variation-independent voltage reference, which are analysed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is used. For the physical implementation of the digitization part, a completely new standard-cell library oriented toward area and power reduction has been built. The sensor resulting from the union of all these blocks is characterized by ultra-low energy per sample (48-640 pJ) and a tiny area of 0.0016 mm²; this figure improves on all previous works. To support this claim, an exhaustive comparison with more than 40 sensor proposals from the scientific literature is performed. Moving up to the system level of abstraction, the third contribution focuses on modelling a monitoring system consisting of a set of sensors distributed across the chip surface. All previous works in the literature aim to maximize the system's accuracy with the minimum number of monitors. As a novelty, our proposal introduces quality parameters beyond the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing different monitor types are also considered. The model is fed into a simulated-annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions and the optimal sampling rate. To validate the algorithm, several case studies for the Alpha 21364 processor under different constraints are presented.
Compared with previous works in the literature, the model presented here is the most complete. Finally, the last contribution addresses the network level: starting from a set of temperature monitors at known positions, we concentrate on solving the problem of connecting the sensors in an area- and power-efficient way. Our first proposal in this field is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses. At this new level, data selectivity is applied to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most of the data are useless, because from the controller's point of view only a small amount of data (normally only the extreme values) is of interest. To cover the new level, we propose a single-wire monitoring network based on a time-domain signaling scheme. This scheme significantly reduces both the switching activity over the wire and the network's power consumption. Another advantage of this scheme is that the monitors' data arrive at the controller already ordered. If this kind of signaling is applied to sensors that perform time-to-digital conversion, digitization resources can be shared in both time and space, yielding important area and power savings. Finally, two prototypes of complete monitoring systems are presented that significantly surpass previous works in terms of area and, especially, power consumption. Abstract Temperature is a first-class design concern in modern integrated circuits.
The significant increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and still, by the time of the publication of this thesis, outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage). This ratio proves to be very robust against process variations and displays a more than sufficient linearity on temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique implies several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor.
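The pairing of an exponential pulse width with a logarithmic counter can be illustrated numerically. This is a toy model under stated assumptions (the scale factor, thermal coefficient and LSB are made-up constants, not the thesis's device values): the discharge time shrinks exponentially as temperature rises, and taking the base-2 logarithm of it yields a code that steps linearly with temperature.

```python
import math

# Toy model of the first sensor's principle (illustrative constants):
# the floating node's discharge time, i.e. the pulse width, falls off
# exponentially with temperature.
A = 10.0   # hypothetical width scale factor, seconds
C = 0.05   # hypothetical thermal coefficient, 1/K

def pulse_width(t_kelvin):
    return A * math.exp(-C * t_kelvin)

def log_counter(width, t_lsb=1e-9):
    """Time-to-digital conversion plus linearization in one step:
    the counter output grows with log2 of the pulse width."""
    return math.log2(width / t_lsb)

# Equal temperature steps produce equal code steps after the counter:
codes = [log_counter(pulse_width(t)) for t in (300.0, 320.0, 340.0)]
steps = [codes[0] - codes[1], codes[1] - codes[2]]
print(steps)
```

The point of the construction is that a single counter replaces both a conventional time-to-digital converter and a separate linearization stage, which is where much of the area and power saving comes from.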
A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we consider the power consumption, the sampling frequency, the possibility of using different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their position and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in an efficient way from the area and power perspectives.
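The shape of such a simulated annealing search can be sketched compactly. This is a minimal stand-in under stated assumptions (the 8x8 grid, hotspot list, monitor count and cooling schedule are all hypothetical), not the thesis's model: the real cost function also weighs power, sampling rate and interconnection, whereas here the cost is just the worst hotspot-to-nearest-monitor distance.

```python
import math
import random

# Simulated annealing sketch for monitor placement on an 8x8 die grid.
# Hotspots and all constants are hypothetical stand-ins.
random.seed(1)
GRID = 8
HOTSPOTS = [(1, 1), (6, 2), (3, 6), (7, 7)]

def cost(monitors):
    # Worst-case distance from any hotspot to its nearest monitor.
    return max(min(math.dist(h, m) for m in monitors) for h in HOTSPOTS)

def neighbour(monitors):
    # Move one randomly chosen monitor to a random grid position.
    new = list(monitors)
    new[random.randrange(len(new))] = (random.randrange(GRID),
                                       random.randrange(GRID))
    return new

state = [(0, 0)] * 3   # three monitors, all starting in one corner
best = state
temp = 5.0
for _ in range(2000):
    cand = neighbour(state)
    delta = cost(cand) - cost(state)
    # Accept improvements always, and worsenings with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
    if cost(state) < cost(best):
        best = state
    temp *= 0.995      # geometric cooling schedule

print(best, round(cost(best), 2))
```

Extending this skeleton to the multi-objective setting of the thesis amounts to enlarging the state (monitor type, count, sampling rate) and folding the extra terms into `cost`, which is precisely why annealing is a convenient fit for this problem.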
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint just a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows the straightforward extraction of a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resource sharing is achieved, producing important savings in area and power consumption. Two prototypes of complete monitoring systems are presented that significantly surpass previous works in terms of area and, especially, power consumption.
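The ordering property of time-domain signaling can be shown with a small simulation. This sketch uses hypothetical monitor IDs, codes and timing constants (not the thesis's prototypes): after a shared start event, each monitor pulses the single wire after a delay that decreases with its reading, so pulses reach the controller already sorted from the maximum value down.

```python
# Time-domain signaling on a single shared wire: the value is encoded
# as a delay, so arrival order equals value order. All constants and
# readings below are hypothetical.
T_LSB = 1e-6       # assumed seconds per code step
MAX_CODE = 255     # assumed 8-bit monitor readings

def wire_arrivals(readings):
    """Map monitor readings to (arrival_time, monitor_id) pulse events;
    hotter monitors (larger codes) fire earlier."""
    events = [((MAX_CODE - code) * T_LSB, mon)
              for mon, code in readings.items()]
    return sorted(events)   # the wire delivers pulses in time order

readings = {"m0": 87, "m1": 42, "m2": 93, "m3": 10}
order = [mon for _, mon in wire_arrivals(readings)]
print(order)
```

Because the controller only needs the extreme values, it can stop listening after the first few pulses, which is where the switching-activity and power savings of the scheme come from; the ordered arrival is a free by-product of the encoding.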