862 results for Data processing.
Abstract:
Using Wireless Sensor Networks (WSNs) in healthcare systems has attracted considerable attention in recent years. In much of this research, tasks such as sensor data processing, health-state decision making and emergency message sending are performed by a remote server. Many patients, each generating large volumes of sensor data, consume a great deal of communication resources, place a burden on the remote server and delay decision and notification times. This paper simulates a healthcare application for elderly people using a WSN. A WSN designed for the proposed healthcare application needs efficient MAC and routing protocols to guarantee the reliability of the data delivered from the patients to the medical centre. Based on these requirements, a cross-layer protocol built on modified versions of APTEEN and GinMAC has been designed and implemented, and new features such as a mobility module and route discovery algorithms have been added. Simulation results show that the proposed cross-layer protocol conserves node energy and provides the required performance, in terms of network lifetime, delay and reliability, for the proposed healthcare application.
Abstract:
Using Wireless Sensor Networks (WSNs) in healthcare systems has attracted considerable attention in recent years. In much of this research, tasks such as sensor data processing, health-state decision making and emergency message sending are performed by a remote server. Many patients, each generating large volumes of sensor data, consume a great deal of communication resources, place a burden on the remote server and delay decision and notification times. This paper simulates a healthcare application for elderly people using a WSN. A WSN designed for the proposed healthcare application needs efficient Medium Access Control (MAC) and routing protocols to guarantee the reliability of the data delivered from the patients to the medical centre. Based on these requirements, the GinMAC protocol, extended with a mobility module, has been chosen to provide the required performance in terms of data-delivery reliability and energy saving. Simulation results show that this modification to GinMAC can offer the required performance for the proposed healthcare application.
Abstract:
This special issue is focused on the assessment of algorithms for the observation of Earth’s climate from environmental satellites. Climate data records derived by remote sensing are increasingly a key source of insight into the workings of and changes in Earth’s climate system. Producers of data sets must devote considerable effort and expertise to maximise the true climate signals in their products and minimise effects of data processing choices and changing sensors. A key choice is the selection of algorithm(s) for classification and/or retrieval of the climate variable. Within the European Space Agency Climate Change Initiative, science teams undertook systematic assessment of algorithms for a range of essential climate variables. The papers in the special issue report some of these exercises (for ocean colour, aerosol, ozone, greenhouse gases, clouds, soil moisture, sea surface temperature and glaciers). The contributions show that assessment exercises must be designed with care, considering issues such as the relative importance of different aspects of data quality (accuracy, precision, stability, sensitivity, coverage, etc.), the availability and degree of independence of validation data and the limitations of validation in characterising some important aspects of data (such as long-term stability or spatial coherence). As well as requiring a significant investment of expertise and effort, systematic comparisons are found to be highly valuable. They reveal the relative strengths and weaknesses of different algorithmic approaches under different observational contexts, and help ensure that scientific conclusions drawn from climate data records are not influenced by observational artifacts, but are robust.
Abstract:
Objectives. To study mortality trends related to Chagas disease, taking into account all mentions of this cause listed on any line or part of the death certificate. Methods. Mortality data for 1985-2006 were obtained from the multiple cause-of-death database maintained by the Sao Paulo State Data Analysis System (SEADE). Chagas disease was classified as the underlying cause-of-death or as an associated cause-of-death (non-underlying). The total number of times Chagas disease was mentioned on the death certificates was also considered. Results. During this 22-year period, there were 40 002 deaths related to Chagas disease: 34 917 (87.29%) classified as the underlying cause-of-death and 5 085 (12.71%) as an associated cause-of-death. The results show a 56.07% decline in the death rate due to Chagas disease as the underlying cause and a stable rate as associated cause. The number of deaths was 44.5% higher among men. The fact that 83.5% of the deaths occurred after 45 years of age reflects a cohort effect. The main causes associated with Chagas disease as the underlying cause-of-death were direct complications of cardiac involvement, such as conduction disorders, arrhythmias and heart failure. Ischemic heart disease, cerebrovascular disorders and neoplasms were the main underlying causes when Chagas disease was an associated cause-of-death. Conclusions. Considering all mentions of Chagas disease, a 51.34% decline in the death rate was observed, whereas the decline in the number of deaths was only 5.91%, being lower among women and showing a shift of deaths to older age brackets. Using the multiple cause-of-death method contributed to the understanding of the natural history of Chagas disease.
Abstract:
A class of students learning electrical work is shown in an electrical classroom at the New York Trade School. In the upper left-hand corner of the room, a sign reminding the students to think about safety can be seen. It reads, "Beware of Live Wires: Students are warned not to make, or disconnect hookups, without first opening the switch that controls the flow of current. Observe Safety First at All Times." Black and white photograph.
Abstract:
OBJECTIVES: To develop a method for objective assessment of fine motor timing variability in Parkinson’s disease (PD) patients, using digital spiral data gathered by a touch screen device. BACKGROUND: A retrospective analysis was conducted on data from 105 subjects: 65 patients with advanced PD (group A), 15 intermediate patients experiencing motor fluctuations (group I), 15 early-stage patients (group S), and 10 healthy elderly subjects (HE). The subjects were asked to perform repeated upper limb motor tasks by tracing a pre-drawn Archimedes spiral shown on the screen of the device. The spiral tracing test was performed with an ergonomic pen stylus, using the dominant hand. The test was repeated three times per test occasion and the subjects were instructed to complete it within 10 seconds. Digital spiral data, including stylus position (x-y coordinates) and timestamps (milliseconds), were collected and used in the subsequent analysis. The total numbers of observations with the test battery were as follows: Swedish group (n=10079), Italian I group (n=822), Italian S group (n=811), and HE (n=299). METHODS: The raw spiral data were processed with three data processing methods. To quantify motor timing variability during spiral drawing tasks, the Approximate Entropy (APEN) method was applied to the digitized spiral data. APEN is designed to capture the amount of irregularity or complexity in a time series. APEN requires two parameters: the window size and the similarity measure. In our work, after experimentation, the window size was set to 4 and the similarity measure to 0.2 (20% of the standard deviation of the time series). The final score obtained by APEN was normalized by total drawing completion time and used in the subsequent analysis; the score generated by this method is henceforth denoted APEN. In addition, two more methods were applied to the digital spiral data and their scores were used in the subsequent analysis.
The first method was based on the Discrete Wavelet Transform and Principal Component Analysis and generated a score representing spiral drawing impairment, henceforth denoted WAV. The second method was based on the standard deviation of frequency-filtered drawing velocity, henceforth denoted SDDV. Linear mixed-effects (LME) models were used to evaluate mean differences of the spiral scores of the three methods across the four subject groups. Test-retest reliability of the three scores was assessed by taking the mean of the three possible correlations (Spearman’s rank coefficients) between the three test trials. Internal consistency of the methods was assessed by calculating correlations between their scores. RESULTS: When comparing mean spiral scores between the four subject groups, the APEN scores differed between HE subjects and the three patient groups (P=0.626 for the S group with a 9.9% mean value difference, P=0.089 for the I group with 30.2%, and P=0.0019 for the A group with 44.1%). However, there were no significant differences in the mean scores of the other two methods, except for WAV between the HE and A groups (P<0.001). WAV and SDDV were highly and significantly correlated with each other, with a coefficient of 0.69. However, APEN was correlated with neither WAV nor SDDV, with coefficients of 0.11 and 0.12, respectively. Test-retest reliability coefficients of the three scores were as follows: APEN (0.9), WAV (0.83) and SDDV (0.55). CONCLUSIONS: The results show that the digital spiral analysis-based objective APEN measure is able to significantly differentiate healthy subjects from patients at an advanced stage. In contrast to the other two methods (WAV and SDDV), which are designed to quantify dyskinesias (over-medication), this method can be useful for characterizing Off symptoms in PD.
APEN was correlated with neither of the other two methods, indicating that it measures a different construct of upper limb motor function in PD patients than WAV and SDDV. APEN also had better test-retest reliability, indicating that it is more stable and consistent over time than WAV and SDDV.
Abstract:
Drinking water utilities in urban areas are focused on finding smart solutions to new challenges in their real-time operation, arising from limited water resources, intensive energy requirements, a growing population, a costly and ageing infrastructure, increasingly stringent regulations, and increased attention towards the environmental impact of water use. Such challenges force water managers to monitor and control not only water supply and distribution, but also consumer demand. This paper presents and discusses novel methodologies and procedures towards an integrated water resource management system based on advanced ICT technologies of automation and telecommunications, to substantially improve the efficiency of drinking water networks (DWN) in terms of water use, energy consumption, water loss minimization, and water quality guarantees. In particular, the paper addresses the first results of the European project EFFINET (FP7-ICT2011-8-318556) devoted to the monitoring and control of the DWN in Barcelona (Spain). Results are split in two levels according to different management objectives: (i) the monitoring level is concerned with all the aspects involved in the observation of the current state of a system and the detection/diagnosis of abnormal situations. It is achieved through sensors and communications technology, together with mathematical models; (ii) the control level is concerned with computing the best suitable and admissible control strategies for network actuators so as to optimize a given set of operational goals related to the performance of the overall system. This level covers the network control (optimal management of water and energy) and the demand management (smart metering, efficient supply).
Taking the Barcelona DWN as the case study will make it possible to demonstrate the general applicability of the proposed integrated ICT solutions and their effectiveness in the management of DWNs, with considerable savings in electricity costs and reduced water loss while ensuring the high European standards of water quality for citizens.
Abstract:
Smart water metering technologies for residential buildings offer, in principle, great opportunities for sustainable urban water management. However, much of this potential is as yet unrealized. Although several ICT solutions have already been deployed aiming at optimum operations on the water utilities' side (e.g. real-time control for water networks, dynamic pump scheduling, etc.), little work has been done to date on the consumer side. This paper presents a web-based platform targeting primarily the household end user. The platform enables consumers to monitor, on a real-time basis, the water demand of their household, providing feedback not only on the total water consumption and relevant costs but also on the efficiency (or otherwise) of specific indoor and outdoor uses. Targeting the reduction of consumption, the provided feedback is combined with notifications about possible leakages/bursts, and customised suggestions to improve the efficiency of existing household uses. It also enables various comparisons, with past consumption or even with that of similar households, aiming to further motivate the householder to become an active player in the water efficiency challenge. The issue of enhancing the platform’s functionality with energy time series is also discussed in view of recent advances in smart metering and the concept of “smart cities”. The paper presents a prototype of this web-based application and critically discusses first testing results and insights. It also presents the way in which the platform communicates with central databases, at the water utility level. It is suggested that such developments are closing the gap between technology availability and usefulness to end users, and could help both the uptake of smart metering and awareness raising, leading, potentially, to significant reductions of urban water consumption.
The work has received funding from the European Union FP7 Programme through the iWIDGET Project, under grant agreement no. 318272.
Abstract:
Drinking water distribution networks are exposed to the risk of malicious or accidental contamination. Several levels of response are conceivable. One of them consists of installing a sensor network to monitor the system in real time. Once a contamination has been detected, it is also important to take appropriate counter-measures. In the SMaRT-OnlineWDN project, this relies on modelling to predict both hydraulics and water quality. Using the model online makes it possible to identify the contaminant source and to simulate the contaminated area. The objective of this paper is to present the SMaRT-OnlineWDN experience and research results for hydraulic state estimation with a sampling frequency of a few minutes. A least squares problem with bound constraints is formulated to adjust demand class coefficients to best fit the observed values at a given time. The criterion is a Huber function, to limit the influence of outliers. A Tikhonov regularization is introduced to account for prior information on the parameter vector. The Levenberg-Marquardt algorithm is then applied, using derivative information to limit the number of iterations. Confidence intervals for the state prediction are also given. The results are presented and discussed on real networks in France and Germany.
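The estimation step described above (bounded least squares with a Huber criterion and a Tikhonov prior) can be sketched as follows. This is a rough illustration, not the project's formulation: `model`, the bounds and the weight `lam` are hypothetical placeholders, and SciPy's bound-constrained `trf` solver stands in for the Levenberg-Marquardt variant mentioned in the abstract (plain LM in SciPy supports neither bounds nor robust losses).

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_demand_coeffs(model, observed, theta_prior, lam=0.1,
                           lower=0.0, upper=5.0):
    """Fit demand class coefficients theta to sensor observations.

    model(theta)  -- hypothetical forward hydraulic model returning predicted
                     sensor values (pressures/flows) for coefficients theta
    observed      -- measured sensor values at the current time step
    theta_prior   -- prior coefficient values (Tikhonov regularisation target)
    lam           -- regularisation weight (illustrative)
    """
    def residuals(theta):
        fit = model(theta) - observed            # data-fit residuals
        reg = np.sqrt(lam) * (theta - theta_prior)  # Tikhonov residuals
        return np.concatenate([fit, reg])

    # Huber loss limits the influence of outlying measurements;
    # bounds keep the demand coefficients physically plausible.
    res = least_squares(residuals, theta_prior, bounds=(lower, upper),
                        loss='huber', f_scale=1.0, method='trf')
    return res.x
```

With a linear toy model, a single grossly corrupted observation barely moves the estimate, which is the point of the Huber criterion.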
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. 
We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
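The adaptive-sampling policy described above can be sketched as a simple interpolation between an energy-saving rate and an event-capture rate. The thresholds, interval bounds and the urgency rule below are illustrative inventions, not the logic of the deployed Michigan network.

```python
def next_sampling_interval(rain_prob, flow_rate_of_change,
                           base_s=900, min_s=60):
    """Return the next sampling interval in seconds.

    rain_prob           -- forecast probability of precipitation (0..1),
                           e.g. from a queried public weather server
    flow_rate_of_change -- |d(flow)/dt| from the node's embedded flow model
    base_s, min_s       -- slowest and fastest allowed intervals (illustrative)
    """
    # Urgency grows with forecast rain and with observed flow dynamics;
    # either signal alone is enough to speed up sampling.
    urgency = max(rain_prob, min(flow_rate_of_change / 10.0, 1.0))
    # Interpolate between the energy-saving and the event-capture rates.
    return int(base_s - urgency * (base_s - min_s))
```

Under calm conditions the node samples slowly to conserve energy and reagents; when a storm is forecast or the hydrograph starts to rise, it shifts toward the fastest rate so abrupt state changes are not missed.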
Abstract:
Guides for mineral exploration are usually based on conceptual deposit models. Such guides normally draw on geologists' experience, on descriptive data and on genetic data. Numerical modelling, both probabilistic and non-probabilistic, to estimate the occurrence of mineral deposits is a newer procedure whose use and acceptance by the geological community keep growing. This thesis applies recent methodologies for generating mineral favourability maps. The so-called Ilha Cristalina de Rivera, an erosional window of the Paraná Basin located in northern Uruguay, was chosen as the case study for applying the methodologies. The mineral favourability maps were built from the following types of data, information and prospecting results: 1) orbital imagery; 2) geochemical prospecting; 3) airborne geophysical prospecting; 4) geo-structural mapping; and 5) altimetry. This information was selected and processed on the basis of a mineral deposit model (conceptual model) developed from the San Gregorio Gold Mine. The conceptual model (San Gregorio model) includes descriptive and genetic characteristics of the San Gregorio Mine, which encompasses the significant characteristic elements of the other known mineral occurrences on the Ilha Cristalina de Rivera. Generating the mineral favourability maps involved building a database, processing the data, and integrating the data. The construction and processing stages comprised collecting, selecting and treating the data so as to constitute the so-called Information Layers. These Information Layers were generated and processed in organized groupings, so as to constitute the Integration Factors for favourability mapping on the Ilha Cristalina de Rivera.
The data were integrated using two different methodologies: 1) Weights of Evidence (data-driven) and 2) Fuzzy Logic (knowledge-driven). The mineral favourability maps resulting from the two integration methodologies were first analysed and interpreted individually, and the results were then compared. Both methodologies succeeded in identifying, as areas of high favourability, the known mineralized areas as well as other areas not yet worked. The favourability maps from the two methodologies coincided with respect to the areas of highest favourability. The Weights of Evidence methodology produced the more conservative favourability map in terms of areal extent, but the more optimistic one in terms of favourability values, compared with the maps produced by the Fuzzy Logic methodology. New targets for mineral exploration were identified and should be investigated in detail.
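The data-driven Weights of Evidence integration mentioned above can be sketched for a single binary predictor layer. The weights contrast how often known deposits fall inside versus outside the predictor pattern; the counts in the usage example are invented for illustration, not the thesis data.

```python
import math

def weights_of_evidence(n_dep_in, n_dep_out, n_cells_in, n_cells_out):
    """Weights of evidence for one binary predictor map.

    n_dep_in / n_dep_out     -- known deposits inside / outside the pattern
    n_cells_in / n_cells_out -- total unit cells inside / outside the pattern
    Returns (W_plus, W_minus, contrast).
    """
    n_dep = n_dep_in + n_dep_out
    n_cells = n_cells_in + n_cells_out
    # Conditional probabilities of the pattern given deposit / no deposit.
    p_b_d = n_dep_in / n_dep
    p_b_nd = (n_cells_in - n_dep_in) / (n_cells - n_dep)
    w_plus = math.log(p_b_d / p_b_nd)            # weight inside the pattern
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))  # weight outside it
    return w_plus, w_minus, w_plus - w_minus     # contrast = W+ - W-
```

A positive W+ (and positive contrast) means the pattern is spatially associated with mineralization; summing the weights of several conditionally independent layers gives the log-odds favourability used to rank target areas.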
Abstract:
The dissertation presents the workflow of the police inquiry as the key process of the police information system of a Civil Police precinct of the State of Rio de Janeiro. It offers considerations on management and systems analysis, comparing several specialists, in order to understand the organization. It also presents a view of the Civil Police today, together with its history. It discusses the importance of language in the organization's culture and in its discourse-based logic of work, and proposes some possibilities for organizational change. The methodology adopted was action research, participant and ethnographic, with data treatment supported by information theory, discourse content analysis and systems theory.
Abstract:
In recent years, a growing number of enterprises have perceived the importance of the strategic management of intellectual capital in their strategic planning. Factors such as the globalization of the economy and the consequent awareness of the value of the specialized work embedded in organizational processes and routines, the awareness of knowledge as a distinctive factor of production, and the low cost of data processing networks point to a growing replacement of physical labour by mental labour in our organizations and in our social lives. This work aims to analyze how information technology, at its present stage, may contribute to the creation and development of knowledge, or intellectual capital, in business organizations. For this purpose we use the methodology proposed by Nonaka and Takeuchi for the creation of knowledge in organizations. The model rests on two basic points: 1) the existence of two types of knowledge, tacit and explicit, and their several processes of interaction, which generate operational knowledge (internalization), systemic knowledge (combination), shared knowledge (socialization) and conceptual knowledge (externalization); 2) the view that knowledge is in principle individual, belongs to each member of the organization, and must be enlarged 'in an organizational way'. Considering the characteristics of the methodology used, the proposal is the construction of a knowledge portal in the business organization, to be used as a tool to help create and develop organizational knowledge.
Abstract:
The processing of orbital images using remote sensing techniques generated qualitative information of a textural nature (morpho-structures). This made it possible to (1) recognize areas with different structural patterns and different potential for fluorite prospecting, (2) identify new structural lineaments potentially favourable to mineralization, and (3) reveal extensive prolongations of the main mineralized structures, (4) with which a large number of previously unknown structures of great prospective potential are associated. The refinement of digital classification techniques applied to band-ratio products and principal component analysis made it possible to identify the hydrothermal alteration associated with the structures, adding new criteria for fluorite prospecting. To quantify the hydrothermal alteration data, a spectroradiometric analysis of the rocks of the fluorite district was carried out. Integrating this information with TM LANDSAT 5 data at reflectance level yielded a spectral classification of the orbital images, which allowed the identification of minor structures in unprecedented detail. The processing of airborne geophysical data produced results on structures (magnetometry) and on granitic bodies affected by hydrothermal alteration (airborne gamma-ray spectrometry). These products were integrated with TM LANDSAT 5 data, associating the textural attribute of the orbital image with the radiometric behaviour of the rocks. The Grão-Pará lineament was diagnosed as the main prospect of the district, and a body of data was assembled on the tectonic compartmentation of the region, the facies zoning of the granitic rocks (the fluorine source rock) and the hydrothermal alterations associated with granitic magmatism.
This made it possible to understand the regional distribution of the fluorite deposits, adding a new criterion to fluorite prospecting: the spatial relationship between the mineralization and the F source rock. The latter corresponds to the granitic facies of the border of the Pedras Grandes Massif.