996 results for Distributed monitoring
Abstract:
This paper describes an urban traffic control system that aims to contribute to more efficient traffic management in Brazilian cities. It uses fuzzy sets, case-based reasoning, and genetic algorithms to handle dynamic and unpredictable traffic scenarios, as well as uncertain, incomplete, and inconsistent information. The system is composed of one supervisor and several controller agents, which cooperate with each other to improve the system's results through artificial intelligence techniques.
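As a hedged illustration of the fuzzy-set component mentioned above (not the paper's actual rule base), the sketch below maps a measured queue length to a green-time extension via triangular membership functions and centroid defuzzification. All thresholds and rule outputs are made-up values.

```python
# Hypothetical fuzzy-control sketch: queue length -> green-time extension.
# Membership breakpoints and outputs are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_extension(queue_len):
    """Weighted-average (centroid) defuzzification over three rules."""
    rules = [
        (tri(queue_len, -1, 0, 10), 0.0),    # short queue  -> no extension
        (tri(queue_len, 5, 15, 25), 10.0),   # medium queue -> +10 s
        (tri(queue_len, 20, 30, 999), 20.0), # long queue   -> +20 s
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Overlapping memberships make the controller's output vary smoothly between the rule outputs, which is what lets such systems cope with the noisy, uncertain measurements the abstract describes.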
Abstract:
2-Methylisoborneol (MIB) and geosmin (GSM) are by-products of algae decomposition and, depending on their concentration, can be toxic; otherwise, they give an unpleasant taste and odor to water. For water treatment companies it is important to constantly monitor their presence in the distributed water and thus avoid customer complaints. Lower-cost and easy-to-read instrumentation would be very promising in this regard. In this study, we evaluate the potential of an electronic tongue (ET) system based on non-specific polymeric sensors and impedance measurements for monitoring MIB and GSM in water samples. Principal component analysis (PCA) applied to the generated data matrix indicated that this ET was capable of discriminating these two contaminants, with remarkable reproducibility, in either distilled or tap water, at concentrations as low as 25 ng L-1. Nonetheless, this analysis methodology was rather qualitative and laborious, and the outputs it provided were largely subjective. Moreover, data analysis based on PCA severely restricts automation of the measuring system or its use by non-specialized operators. To circumvent these drawbacks, a fuzzy controller was designed to perform sample classification quantitatively while providing outputs in simpler data charts. For instance, the ET along with this fuzzy controller quantified MIB and GSM samples in distilled and tap water with a 100% hit rate, which could be read directly from the plot. The low cost of these polymeric sensors, combined with the special features of the fuzzy controller (ease of programming and numerical outputs), provided the initial requirements for developing an automated ET system to monitor odorant species in water production and distribution. (C) 2012 Elsevier B.V. All rights reserved.
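A minimal sketch of the fuzzy classification step described above, under stated assumptions: a scalar feature extracted from the impedance data is mapped to concentration classes through trapezoidal membership functions, and the sample is assigned to the class of maximal membership. The class names and feature ranges are invented for illustration; they are not the paper's calibration.

```python
# Hypothetical fuzzy sample classifier for electronic-tongue features.
# Feature ranges per class are made-up illustrative values.

def trap(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

CLASSES = {                       # hypothetical feature ranges per class
    "blank":       (-1, 0, 10, 20),
    "GSM 25 ng/L": (10, 20, 30, 40),
    "MIB 25 ng/L": (30, 40, 50, 61),
}

def classify(feature):
    """Return the class whose membership is maximal for this feature value."""
    return max(CLASSES, key=lambda name: trap(feature, *CLASSES[name]))
```

Unlike PCA score plots, such a rule produces a direct numerical/class output, which is the property the abstract highlights for automation.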
Abstract:
This work has been supported by the Brazilian agencies FAPESP, CNPq and CAPES, by grants MICINN BFU2009-08473 and TIN2010-19607, by the Spanish-Brazilian Cooperation grant PHB2007-0008, and by the 7ª Convocatoria de Proyectos de Cooperación Interuniversitaria UAM-Santander con América Latina.
Abstract:
Abstract Introduction Several studies have shown that maximizing stroke volume (or increasing it until a plateau is reached) by volume loading during high-risk surgery may improve post-operative outcome. This goal could be achieved simply by minimizing the variation in arterial pulse pressure (ΔPP) induced by mechanical ventilation. We tested this hypothesis in a prospective, randomized, single-centre study. The primary endpoint was the length of postoperative stay in hospital. Methods Thirty-three patients undergoing high-risk surgery were randomized either to a control group (group C, n = 16) or to an intervention group (group I, n = 17). In group I, ΔPP was continuously monitored during surgery by a multiparameter bedside monitor and minimized to 10% or less by volume loading. Results Both groups were comparable in terms of demographic data, American Society of Anesthesiology score, and type and duration of surgery. During surgery, group I received more fluid than group C (4,618 ± 1,557 versus 1,694 ± 705 ml (mean ± SD), P < 0.0001), and ΔPP decreased from 22 ± 7.5% to 9 ± 1% (P < 0.05) in group I. The median duration of postoperative stay in hospital (7 versus 17 days, P < 0.01) was lower in group I than in group C. The number of postoperative complications per patient (1.4 ± 2.1 versus 3.9 ± 2.8, P < 0.05), as well as the median duration of mechanical ventilation (1 versus 5 days, P < 0.05) and of stay in the intensive care unit (3 versus 9 days, P < 0.01), was also lower in group I. Conclusion Monitoring and minimizing ΔPP by volume loading during high-risk surgery improves postoperative outcome and decreases the length of stay in hospital. Trial registration NCT00479011
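The ΔPP index minimized in the protocol above is conventionally computed from the maximal and minimal pulse pressures over one respiratory cycle (the bedside monitor does this automatically); a minimal sketch of the arithmetic, with illustrative values:

```python
# Respiratory variation in arterial pulse pressure, as a percentage of
# the mean of the two extreme values over one mechanical breath.

def delta_pp(pp_max, pp_min):
    """ΔPP (%) = 100 * (PPmax - PPmin) / ((PPmax + PPmin) / 2)."""
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Example: pulse pressure swinging between 60 and 48 mmHg gives
# ΔPP of about 22%, so under the trial's protocol the patient would
# receive volume loading until ΔPP fell to 10% or less.
dpp = delta_pp(60, 48)
```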
Abstract:
Background Duchenne muscular dystrophy (DMD) is a sex-linked inherited muscle disease characterized by a progressive loss of muscle strength and by respiratory muscle involvement. After 12 years of age, lung function declines at a rate of 6% to 10.7% per year in patients with DMD. Steroid therapy has been proposed to delay the loss of motor function and also the respiratory involvement. Method In 21 patients with DMD aged between seven and 16 years, the forced vital capacity (FVC) and the forced expiratory volume in one second (FEV1) were evaluated at three different times over a period of two years. Results Over this evaluation period we observed maintenance of the FVC and the FEV1 in this group of patients, independently of chronological age, age at onset of steroid therapy, and walking capacity. Conclusion Steroid therapy has the potential to stabilize or delay the loss of lung function in DMD patients even if they are non-ambulant or older than 10 years, and in those in whom the medication was started after 7 years of age.
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for collection of environmental data) and as a transceiver (for transmission of information to the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. This thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems while respecting this trade-off. The first scenario considers a network with a high number of nodes deployed in a given geographical area without detailed planning, which have to transmit data toward a coordinator node, named the sink, assumed to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message by all nodes and Rake reception at the fusion center. The proposed solution is motivated mainly by two goals: the need for simple nodes (to this end, the computational complexity is moved to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are the theoretical limit on the maximum amount of data that can be collected by the receiver, as well as the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires, using nodes that serve a double function as sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on whether the target (fire) is absent or present. The second is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision to be transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using one of these commercial platforms, an agricultural application was realized and tested in a six-month on-field experiment.
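The serial decision chain described above can be sketched as follows. The OR fusion rule used here is an assumption chosen only for simplicity (it maximizes detection at the cost of false alarms); the thesis designs per-node fusion rules to minimize the end-of-chain error probability, which this toy model does not do.

```python
import random

# Toy serial distributed-detection chain: each node takes a noisy local
# binary decision about the target (fire) and OR-fuses it with the
# decision received from the previous node. Detection and false-alarm
# probabilities are illustrative assumptions.

def local_decision(target_present, p_detect=0.8, p_false_alarm=0.05):
    """Noisy binary sensor: fires with p_detect if target present, else p_false_alarm."""
    p = p_detect if target_present else p_false_alarm
    return random.random() < p

def chain_decision(n_nodes, target_present):
    """Pass a decision down the chain, OR-fusing at each node."""
    decision = False
    for _ in range(n_nodes):
        decision = decision or local_decision(target_present)
    return decision

random.seed(1)
trials = 2000
detections = sum(chain_decision(10, True) for _ in range(trials))
# Missing the fire requires every one of the 10 nodes to miss it
# (probability 0.2**10), so the chain's detection rate is near 1;
# a real fusion rule must also keep the false-alarm rate in check.
```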
Abstract:
Wireless Sensor Networks (WSNs) have been getting widespread attention since they became easily accessible thanks to their low cost. One of the key elements of WSNs is distributed sensing. When the precise location of a signal of interest is unknown across the monitored region, distributing many sensors randomly/uniformly may yield a better representation of the monitored random process than a traditional sensor deployment. In a typical WSN application the data sensed by the nodes is sent to one (or more) central device, denoted as the sink, which collects the information and can either act as a gateway towards other networks (e.g. the Internet), where the data can be stored, or process it in order to command actuators to perform special tasks. In such a scenario, a dense sensor deployment may create bottlenecks when many nodes compete to access the channel. Even though there are mitigation methods for channel access, concurrent (parallel) transmissions may still occur. In this study, always within the scope of monitoring applications, the development of two industrial projects with dense sensor deployments (the eDIANA Project, funded by the European Commission, and the Centrale Adriatica Project, funded by Coop Italy) and the measurement results from several different test-beds evoked the necessity of a mathematical analysis of concurrent transmissions. To the best of our knowledge, there is no mathematical analysis in the literature of concurrent transmission in the 2.4 GHz PHY of IEEE 802.15.4. In this thesis, experiences from the eDIANA and Centrale Adriatica Projects are presented, together with a mathematical analysis of concurrent transmissions, from O-QPSK chip demodulation to the packet reception rate, with several different types of theoretical demodulators. There is very good agreement between the measurements reported so far in the literature and the mathematical analysis.
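As a back-of-envelope companion to the chip-to-packet analysis mentioned above, the sketch below computes a packet reception rate (PRR) from the textbook AWGN bit-error rate of coherent O-QPSK, BER = Q(sqrt(2 Eb/N0)), assuming independent bit errors. This deliberately ignores the DSSS spreading gain, interference and chip-level demodulation that the thesis models in detail; it only shows the final BER-to-PRR step.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def prr(eb_n0_db, payload_bytes=20):
    """Probability that all bits of a packet survive, for coherent O-QPSK
    over AWGN with independent bit errors (a simplifying assumption)."""
    eb_n0 = 10 ** (eb_n0_db / 10)          # dB -> linear
    ber = q_func(math.sqrt(2 * eb_n0))     # per-bit error probability
    return (1 - ber) ** (8 * payload_bytes)
```

Sweeping `prr` over Eb/N0 reproduces the familiar waterfall shape: short packets at 10 dB are received almost surely, while at 0 dB almost none survive.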
Abstract:
INTRODUCTION: The incidence of bloodstream infection (BSI) in extracorporeal life support (ECLS) is reported to be between 0.9 and 19.5%. In January 2006, the Extracorporeal Life Support Organization (ELSO) reported an overall incidence of 8.78%, distributed as follows: respiratory: 6.5% (neonatal), 20.8% (pediatric); cardiac: 8.2% (neonatal) and 12.6% (pediatric). METHOD: At BC Children's Hospital (BCCH), daily surveillance blood cultures (BC) are performed and antibiotic prophylaxis is not routinely recommended. Positive BC (BC+) were reviewed, including resistance profiles, collection time of BC+, time to positivity and mortality. White blood cell count, absolute neutrophil count, immature/total ratio, platelet count, fibrinogen and lactate were analyzed 48, 24 and 0 h prior to BSI. A univariate linear regression analysis was performed. RESULTS: From 1999 to 2005, 89 patients underwent ECLS. After exclusions, 84 patients were reviewed. The attack rate was 22.6% (19 BSI), and 13.1% after exclusion of coagulase-negative staphylococci (n = 8). BSI patients were on ECLS significantly longer (157 h) than the no-BSI group (127 h, 95% CI: 106-148). Six BSI patients died on ECLS (35%; 4 congenital diaphragmatic hernias, 1 hypoplastic left heart syndrome and 1 after a tetralogy repair). BCCH survival was 71% on ECLS and 58% at discharge, which is comparable to previous reports. No patient died primarily because of BSI. No BSI predictor was identified, although lactate may show a decreasing trend before BSI (P = 0.102). CONCLUSION: Compared with the ELSO figures, the studied BSI incidence was higher, with comparable mortality. We speculate that our BSI rate is explained by underreporting of "contaminants" in the literature, the use of broad-spectrum antibiotic prophylaxis, and a higher yield with daily surveillance BC. We support daily surveillance blood cultures as an alternative to antibiotic prophylaxis in the management of patients on ECLS.
Abstract:
Meat and meat products can be contaminated with different species of bacteria resistant to various antimicrobials. The human health risk carried by emerging antimicrobial resistance in a type of meat or meat product depends on (i) the prevalence of contamination with resistant bacteria, (ii) the human health consequences of an infection with a specific bacterium resistant to a specific antimicrobial and (iii) the consumption volume of the specific product. The objective of this study was to compare the risk for consumers arising from their exposure to antibiotic-resistant bacteria from meat of four different types (chicken, pork, beef and veal), distributed across four different product categories (fresh meat, frozen meat, dried raw meat products and heat-treated meat products). A semi-quantitative risk assessment model, evaluating each food chain step, was built in order to obtain an estimated score for the prevalence of Campylobacter spp., Enterococcus spp. and Escherichia coli in each product category. To assess human health impact, nine combinations of bacterial species and antimicrobial agents were considered, based on a published risk profile. The combination of the prevalence at retail, the human health impact and the amount of meat or product consumed provided the relative proportion of total risk attributed to each category of product, resulting in a high, medium or low human health risk. According to the results of the model, chicken (mostly fresh and frozen meat) contributed 6.7% of the overall risk in the highest category and pork (mostly fresh meat and dried raw meat products) contributed 4.0%. The contributions of beef and veal were 0.4% and 0.1%, respectively. The results were tested and discussed for single-parameter changes of the model. This risk assessment was a useful tool for targeting antimicrobial resistance monitoring to those meat product categories where the expected risk for public health was greater.
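The core arithmetic of such a semi-quantitative model can be sketched as below: a prevalence score, a health-impact score and a consumption volume are multiplied per category, then normalized so the shares sum to 100%. All numbers here are invented for illustration and are not the study's data or its actual scoring scheme.

```python
# Hypothetical relative-risk ranking across product categories.
# Inputs per category: (prevalence score, impact score, consumption volume).

def relative_risks(categories):
    """Return each category's percentage share of the total risk score."""
    raw = {name: p * i * c for name, (p, i, c) in categories.items()}
    total = sum(raw.values())
    return {name: 100.0 * r / total for name, r in raw.items()}

shares = relative_risks({
    "fresh chicken": (0.8, 3, 40),  # high prevalence, high volume (made up)
    "fresh pork":    (0.4, 3, 35),
    "fresh beef":    (0.1, 2, 20),
    "fresh veal":    (0.1, 2, 5),
})
```

Even with crude ordinal scores, such a model yields a ranking of categories, which is all that is needed to target monitoring where the expected public-health risk is greatest.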
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators for optimizing the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can be used as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
Abstract:
Increasing commercial pressures on land are provoking fundamental and far-reaching changes in the relationships between people and land. Much knowledge on land-oriented investment projects currently comes from the media. Although this provides a good starting point, lack of transparency and rapidly changing contexts mean that it is often unreliable. The International Land Coalition, in partnership with Oxfam Novib, the Centre de coopération internationale en recherche agronomique pour le développement (CIRAD), the University of Pretoria, the Centre for Development and Environment of the University of Bern (CDE), and GIZ, started to compile an inventory of land-related investments. This project aims to better understand the extent, trends and impacts of land-related investments by supporting an ongoing and systematic stocktaking of the various investment projects currently taking place worldwide. It involves a large number of organizations and individuals working in areas where land transactions are being made and able to provide details of such investments. The project monitors land transactions in rural areas that imply a transformation of land use rights from communities and smallholders to commercial use, made by both domestic and foreign investors (private actors, governments, government-backed private investors). The focus is on investments for food or agrofuel production, timber extraction, carbon trading, mineral extraction, conservation and tourism. A novel way of using ICT to document land acquisitions in a spatially explicit way, through an approach called "crowdsourcing", is being developed. This approach will allow actors to share information and knowledge directly, and at any time, on a public platform, where it will be scrutinized for reliability and cross-checked against other sources. Up to now, over 1200 deals have been recorded across 96 countries.
Details of such transactions have been classified in a matrix and distributed to over 350 contacts worldwide for verification. The verified information has been geo-referenced and represented in two global maps. This is an open database enabling a continued monitoring exercise and the improvement of data accuracy; more information will be released over time. The opportunity lies in overcoming the constraints of incomplete information by proposing a new way of collecting, enhancing and sharing information and knowledge in a more democratic and transparent manner. The intention is to develop an interactive knowledge platform where any interested person can share and access information on land deals, their links to the stakeholders involved, and their embedding in a geographical context. By making use of new ICT technologies that are increasingly within the reach of local stakeholders, as well as open-access and web-based spatial information systems, it will become possible to create a dynamic database containing spatially explicit data. Feeding in data from a large number of stakeholders, increasingly also by means of new mobile ICT technologies, will open up new opportunities to analyse, monitor and assess highly dynamic trends of land acquisition and rural transformation.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource-management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all of his or her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized, while maximizing the performance of tenants' applications.
Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources, using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) bound to application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
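A minimal sketch of an SLA-driven scaling rule of the kind discussed above: a monitored performance indicator (here, mean response time) is compared against a service-level objective and the VM count is adapted each control period. The thresholds, hysteresis band and single-VM step size are illustrative assumptions, not the thesis's algorithms.

```python
# Hypothetical SLO-based horizontal scaling rule for one service tier.

def scale_decision(vm_count, mean_response_ms, slo_ms,
                   headroom=0.7, vm_min=1, vm_max=20):
    """Return the VM count for the next control period."""
    if mean_response_ms > slo_ms:             # SLO violated: scale out
        return min(vm_count + 1, vm_max)
    if mean_response_ms < headroom * slo_ms:  # ample slack: scale in
        return max(vm_count - 1, vm_min)
    return vm_count                           # within band: hold steady
```

The hysteresis band between `headroom * slo_ms` and `slo_ms` is what keeps such a reactive rule from oscillating; the thesis argues that predictive, correlation-informed rules can outperform exactly this kind of purely reactive policy.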
Abstract:
The dataset is based on samples collected in the summer of 1998 in the Western Black Sea off the Bulgarian coast. The whole dataset is composed of 69 samples (from 22 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Samples were collected in discrete layers of 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface, at depths depending on water column stratification and the thermocline depth. Zooplankton samples were collected with a vertically closing Juday net (diameter 36 cm, mesh size 150 µm). Tows were performed from the surface down to the bottom in discrete layers. Samples were preserved in a 4% formaldehyde sea-water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. Mesozooplankton abundance: The collected material was analysed using the method of Domov (1959). Samples were brought to a volume of 25-30 ml, depending upon zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the counting chamber, which has a rectangular form, for taxonomic identification and counting. Large (> 1 mm body length) and non-abundant species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope, to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Lyudmila Kamburska, using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Taxon-specific abundance: The same procedure was applied; Copepoda and Cladocera were identified and enumerated to species level, while the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups).
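The counting arithmetic described above can be worked through as follows: organisms counted in the 5 ml aliquot are scaled up to the concentrated sample volume, then divided by the filtered water volume (net mouth area times wire length) to give abundance per cubic metre. The example numbers are illustrative, not values from the dataset.

```python
import math

# Aliquot count -> abundance (individuals per cubic metre of sea water).

def abundance_per_m3(count_in_aliquot, aliquot_ml, sample_ml,
                     net_diameter_m, tow_length_m):
    total_in_sample = count_in_aliquot * sample_ml / aliquot_ml  # scale aliquot up
    mouth_area_m2 = math.pi * (net_diameter_m / 2) ** 2          # net mouth area
    filtered_m3 = mouth_area_m2 * tow_length_m                   # filtered volume
    return total_in_sample / filtered_m3

# Example: 40 organisms in a 5 ml aliquot of a 25 ml concentrated sample,
# with a 0.36 m diameter Juday net towed vertically through 50 m of water.
a = abundance_per_m3(40, 5, 25, 0.36, 50)
```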
Abstract:
The dataset is based on samples collected in the summer of 2001 in the Western Black Sea off the Bulgarian coast (transects at Cape Kaliakra and Cape Galata). The whole dataset is composed of 26 samples (from 10 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Samples were collected in discrete layers of 0-10, 10-20, 10-25, 25-50, 50-75 and 75-90 m. Zooplankton samples were collected with a vertically closing Juday net (diameter 36 cm, mesh size 150 µm). Tows were performed from the surface down to the bottom in discrete layers. Samples were preserved in a 4% formaldehyde sea-water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. Mesozooplankton abundance: The collected material was analysed using the method of Domov (1959). Samples were brought to a volume of 25-30 ml, depending upon zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the counting chamber, which has a rectangular form, for taxonomic identification and counting. Large (> 1 mm body length) and non-abundant species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope, to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Lyudmila Kamburska and Kremena Stefanova, using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Taxon-specific abundance: The same procedure was applied; Copepoda and Cladocera were identified and enumerated to species level, while the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups).
Abstract:
The dataset is based on samples collected in the summer of 2000 in the Western Black Sea off the Bulgarian coast. The whole dataset is composed of 84 samples (from 31 stations of the National Monitoring Grid) with data on mesozooplankton species composition, abundance and biomass. Samples were collected in discrete layers of 0-10, 0-20, 0-50, 10-25, 25-50 and 50-100 m, and from the bottom up to the surface, at depths depending on water column stratification and the thermocline depth. The collected material was analysed using the method of Domov (1959). Samples were brought to a volume of 25-30 ml, depending upon zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the counting chamber, which has a rectangular form, for taxonomic identification and counting. Copepoda and Cladocera were identified and enumerated to species level; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Large (> 1 mm body length) and non-abundant species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope, to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Lyudmila Kamburska, using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972).