943 results for In-sensor-experiment
Abstract:
Measurement of nitrifiable nitrogen contained in wastewater by combining the existing respirometric and titrimetric principles is reported. During an in-sensor-experiment using nitrifying activated sludge, both the dissolved oxygen (DO) and pH in the mixed liquor were measured, and the pH was controlled at a set-point through titration of base or acid. A combination of the oxygen uptake rate (OUR), which was obtained from the measured DO signal, and the titration data allowed calculation of the nitrifiable nitrogen and the short-term biological oxygen demand (BOD) of the wastewater sample that was initially added to the sludge. The calculation was based solely on stoichiometric relationships. The approach was preliminarily tested with two types of wastewaters using a prototype sensor, and good correlation was obtained.
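As a rough illustration of the stoichiometric bookkeeping the abstract alludes to, the sketch below assumes the textbook nitrification stoichiometry NH4+ + 2 O2 -> NO3- + H2O + 2 H+ (per mol of nitrified N: 2 mol O2 consumed and 2 mol H+ released, hence 2 mol base titrated); all names and numbers are illustrative, not taken from the paper.

```python
# Minimal sketch (not the paper's code) of the stoichiometric calculation.
# Assumed stoichiometry: NH4+ + 2 O2 -> NO3- + H2O + 2 H+,
# i.e. per mol of nitrified N: 2 mol O2 consumed, 2 mol base titrated.

M_N = 14.0   # g/mol, nitrogen
M_O2 = 32.0  # g/mol, molecular oxygen

def nitrifiable_nitrogen_g(base_added_mol):
    """Nitrifiable N from the titration data: 2 mol base per mol N."""
    return (base_added_mol / 2.0) * M_N

def short_term_bod_g(total_o2_uptake_g, base_added_mol):
    """Carbonaceous short-term BOD: total O2 uptake (the integral of the
    OUR) minus the nitrogenous oxygen demand (2 mol O2 per mol N)."""
    nitrogenous_o2_g = (base_added_mol / 2.0) * 2.0 * M_O2
    return total_o2_uptake_g - nitrogenous_o2_g

# Example: 0.004 mol base dosed and 0.35 g O2 consumed overall
print(nitrifiable_nitrogen_g(0.004))   # 0.028 g N
print(short_term_bod_g(0.35, 0.004))   # 0.35 - 0.128 = 0.222 g O2
```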
Abstract:
This letter describes a biomedical data-telemetry experiment. An implant, consisting of a biometric data sensor, electronics, an antenna, and a biocompatible capsule, is described. All the elements were co-designed to maximize the transmission distance. The device was implanted in a pig for an in vivo temperature-monitoring experiment.
Abstract:
Most current-generation Wireless Sensor Network (WSN) nodes are equipped with multiple sensors of various types, and therefore support for multi-tasking and multiple concurrent applications is becoming increasingly common. This trend has been fostering the design of WSNs allowing several concurrent users to deploy applications with dissimilar requirements. In this paper, we extend the advantages of a holistic programming scheme by designing a novel compiler-assisted scheduling approach (called REIS) able to identify and eliminate redundancies across applications. To achieve this useful high-level optimization, we model each user application as a linear sequence of executable instructions. We show how well-known string-matching algorithms such as the Longest Common Subsequence (LCS) and the Shortest Common Super-sequence (SCS) can be used to produce an optimal merged monolithic sequence of the deployed applications that takes into account embedded scheduling information. We show that our approach can help in achieving about 60% average energy savings in processor usage compared to the normal execution of concurrent applications.
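The abstract does not include the REIS implementation itself; as a self-contained illustration of the underlying idea, the sketch below merges two hypothetical instruction sequences via the Shortest Common Supersequence built from a classic LCS dynamic program, so instructions shared by both applications occupy a single slot in the merged schedule.

```python
# Illustration (not the REIS implementation): merging two application
# instruction sequences via the Shortest Common Supersequence (SCS),
# which is built here from the Longest Common Subsequence (LCS).

def lcs(a, b):
    """Classic O(len(a)*len(b)) dynamic program; returns one LCS as a list."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

def scs(a, b):
    """Interleave a and b around their LCS: shared instructions appear once."""
    merged, i, j = [], 0, 0
    for op in lcs(a, b):
        while a[i] != op:
            merged.append(a[i]); i += 1
        while b[j] != op:
            merged.append(b[j]); j += 1
        merged.append(op); i += 1; j += 1
    return merged + a[i:] + b[j:]

app1 = ["READ_TEMP", "FILTER", "LOG", "SLEEP"]
app2 = ["READ_TEMP", "FILTER", "SEND", "SLEEP"]
print(scs(app1, app2))
# ['READ_TEMP', 'FILTER', 'LOG', 'SEND', 'SLEEP'] -- 5 slots instead of 8
```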
Abstract:
We present results from 50-round market experiments in which firms repeatedly decide both the price and the quantity of a completely perishable good. Each firm has the capacity to serve the whole market. The stage game does not have an equilibrium in pure strategies. We run experiments for markets with two and with three identical firms. Firms tend to cooperate to avoid fights, but when they fight, bankruptcies are rather frequent. On average, pricing behavior is closer to that under pure quantity competition than under pure price competition, and price and efficiency levels are higher with two firms than with three. Consumer surplus increases with the number of firms, but unsold production leads to higher efficiency losses with more firms. Over time, prices tend toward the highest possible price in markets with both two and three firms.
Abstract:
In the present study, to shed light on the roles of the positional error-correction mechanism and the prediction mechanism in the proactive control discovered earlier, we carried out a visual tracking experiment in which the region where the target was visible was restricted along a circular orbit. The main results are as follows. Recognition of a time step, obtained from the environmental stimuli, is required for the predictive function. The period of the rhythm in the brain obtained from environmental stimuli shortens by about 10% when the visual information is cut off. This shortening accelerates the motion as soon as the visual information is cut off and lets the hand motion precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand on average precedes the target when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
We report the J/psi -> e+e- and psi' -> e+e- production cross sections measured in the PHENIX experiment at RHIC. The first measurements of the psi' production cross section, and of its ratio to the J/psi cross section, will contribute to clarifying the theoretical understanding of J/psi meson production. The inclusive J/psi polarization through the same decay channel is also presented, showing a trend of slightly longitudinal polarization for p_T < 5 GeV/c.
Abstract:
Reactive transport modelling was used to simulate solute transport, thermodynamic reactions, ion exchange and biodegradation in the Porewater Chemistry (PC) experiment at the Mont Terri Rock Laboratory. Simulations show that the most important chemical processes controlling the fluid composition within the borehole and the surrounding formation during the experiment are ion exchange, biodegradation and dissolution/precipitation reactions involving pyrite and carbonate minerals. In contrast, thermodynamic mineral dissolution/precipitation reactions involving aluminosilicate minerals have little impact on the fluid composition on the time-scale of the experiment. With the accurate description of the initial chemical condition in the formation, in combination with kinetic formulations describing the different stages of bacterial activities, it has been possible to reproduce the evolution of important system parameters, such as the pH, redox potential, total organic C, dissolved inorganic C and SO4 concentrations. Leaching of glycerol from the pH-electrode may be the primary source of organic material that initiated bacterial growth, which caused the chemical perturbation in the borehole. Results from these simulations are consistent with data from the over-coring and demonstrate that the Opalinus Clay has a high buffering capacity in terms of chemical perturbations caused by bacterial activity. This buffering capacity can be attributed to the carbonate system as well as to the reactivity of clay surfaces.
Abstract:
OBJECTIVES: To evaluate the influence of flap tension on the tearing characteristics of mucosal tissue samples in relation to various suture and needle characteristics. MATERIAL AND METHODS: Lining and masticatory mucosal tissue samples obtained from pig jaws were prepared for in vitro testing. Tension tearing diagrams of 60 experiments were traced for 3-0, 5-0 and 7-0 sutures with applied forces up to 20 N. In the second part, the same experiments were repeated with 100 diagrams to test the influence of needle characteristics with 5-0 and 6-0 sutures using only gingival tissue samples. RESULTS: 3-0 sutures mainly led to tissue breakage at an average of 13.4 N. In contrast, 7-0 sutures only resulted in breakage of the thread, at a mean applied force of 3.7 N. With 5-0 sutures, both events occurred at random at a mean force of 14.6 N. Irrespective of the needle characteristics, the mean breaking force for gingival samples with 5-0 and 6-0 sutures was approximately 10 N. CONCLUSIONS: Tissue trauma may be reduced by choosing finer suture diameters, because thinner (6-0, 7-0) sutures lead to thread breakage rather than tissue breakage.
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors try to transmit at the same time, and individual sensors have a limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sqft apartment. The bumblebee radars were calibrated to detect the falling of a human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly occupants.
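A toy sketch of the two-tier strategy described above (illustrative only, not the thesis code; the function names, cluster radius, and inverse-range weighting are all assumptions): tier one turns cheap binary detections into a coarse global estimate, and tier two forms a small dynamic cluster around that estimate for a refined one.

```python
import math

def coarse_estimate(sensors, detections):
    """Tier 1: centroid of the sensors whose binary detection bit is 1."""
    hits = [p for p, d in zip(sensors, detections) if d]
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

def refine_in_cluster(sensors, ranges, center, radius=3.0):
    """Tier 2: only sensors within `radius` of the coarse estimate
    contribute their (more expensive) range readings; positions are
    weighted by inverse range as a crude proximity heuristic."""
    ex = ey = wsum = 0.0
    for (x, y), r in zip(sensors, ranges):
        if r is not None and math.hypot(x - center[0], y - center[1]) <= radius:
            w = 1.0 / max(r, 1e-6)
            ex += w * x; ey += w * y; wsum += w
    return (ex / wsum, ey / wsum) if wsum else center

sensors = [(0, 0), (4, 0), (0, 4), (4, 4), (8, 8)]
detections = [1, 1, 1, 1, 0]          # cheap binary bits, gathered globally
ranges = [2.8, 2.0, 2.0, 2.8, None]   # ranges, used only inside the cluster
center = coarse_estimate(sensors, detections)   # (2.0, 2.0)
print(refine_in_cluster(sensors, ranges, center))
```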
Abstract:
We investigate the problem of distributed sensors' failure detection in networks with a small number of defective sensors, whose measurements differ significantly from the neighbor measurements. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), where the latter has been used so far in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by modifying appropriately the message exchange protocol and the decoding procedure. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
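For the single-defective case, the core decoding idea can be illustrated in a few lines (a sketch under assumed names, not the paper's gossip-based protocol): each binary test outcome is the OR of the bits of the sensors participating in that test, so with one defective sensor the outcome vector equals that sensor's column of the test matrix, and a Hamming-distance decoder recovers it.

```python
# Sketch of the single-defective-sensor case (illustrative only): the
# outcome of each of m tests is the OR of the participating sensors'
# defect bits, so one defective sensor reproduces its own test-matrix
# column, which a minimum-Hamming-distance decoder then identifies.
import random

def run_tests(n, m, defective, rng=random):
    """Random m-by-n binary test matrix W and outcomes y (OR over defectives)."""
    W = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
    y = [max(W[t][j] for j in defective) for t in range(m)]
    return W, y

def distance_decode(W, y):
    """Return the sensor whose test column is closest (Hamming) to y."""
    n, m = len(W[0]), len(W)
    return min(range(n),
               key=lambda j: sum(W[t][j] != y[t] for t in range(m)))

W, y = run_tests(n=20, m=10, defective=[7])
print(distance_decode(W, y))  # 7 with high probability for m on the order of log n
```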
Abstract:
The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data, consisting of a velocity component, offsets, and white and flicker (1/f spectrum) noise combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5% to 95% percentile range of the velocity bias for automated approaches is 4.2 mm/yr (most commonly within ±0.4 mm/yr of the truth), whereas it is 1.8 mm/yr for the manual solutions (most commonly within 0.2 mm/yr of the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automated solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2-0.4 mm/yr is therefore certainly not robust, although a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
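As a minimal illustration of why an undetected offset biases the estimated velocity (white noise only for brevity; the experiment's series also contained flicker noise, and all numbers here are invented):

```python
# Toy illustration of the velocity bias induced by an undetected offset.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 365.25)                     # 10 years of daily epochs, in years
series = 2.0 * t + rng.normal(0, 1.0, t.size)        # true rate 2 mm/yr, 1 mm white noise
series[t >= 5.0] += 6.0                              # undetected 6 mm offset at year 5

# Fit velocity ignoring the offset...
A = np.vstack([t, np.ones_like(t)]).T
vel_ignored = np.linalg.lstsq(A, series, rcond=None)[0][0]

# ...and with the offset modeled as an extra step regressor.
A2 = np.column_stack([t, np.ones_like(t), (t >= 5.0).astype(float)])
vel_modeled = np.linalg.lstsq(A2, series, rcond=None)[0][0]

print(f"ignored offset: {vel_ignored:.2f} mm/yr, modeled: {vel_modeled:.2f} mm/yr")
# The unmodeled 6 mm step biases the 2 mm/yr rate by roughly +1 mm/yr here.
```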