945 results for Wireless inertial measurement unit (WIMU)


Relevance:

20.00%

Publisher:

Abstract:

We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length $d$ meters, where $d$ is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance $d$, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate ($\Theta_{\mathrm{opt}}$ bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$, with $d_{\mathrm{opt}}$ scaling as $\bar{P}_t^{1/\eta}$, where $\bar{P}_t$ is the available time-average transmit power and $\eta$ is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
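The scaling above is easy to exercise numerically. Below is a minimal Python sketch of the relation transport capacity $= d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$ with $d_{\mathrm{opt}} \propto \bar{P}_t^{1/\eta}$; the values of theta_opt and the proportionality constant c are illustrative placeholders, not quantities from the paper.

```python
# Hedged sketch of the transport-capacity scaling: d_opt grows as the
# (1/eta)-th power of the time-average transmit power, and capacity is
# d_opt * theta_opt. theta_opt and c are illustrative placeholders.

def optimal_hop_distance(p_bar_t: float, eta: float, c: float = 1.0) -> float:
    """Optimal hop distance, scaling as p_bar_t ** (1/eta)."""
    return c * p_bar_t ** (1.0 / eta)

def transport_capacity(p_bar_t: float, eta: float, theta_opt: float,
                       c: float = 1.0) -> float:
    """Transport capacity (bit-meters/s) = d_opt(p_bar_t) * theta_opt."""
    return optimal_hop_distance(p_bar_t, eta, c) * theta_opt

# Doubling power multiplies capacity by 2**(1/eta); for eta = 4 that is ~19%.
for p in (1.0, 2.0, 4.0):
    print(p, transport_capacity(p, eta=4.0, theta_opt=1e6))
```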

Relevance:

20.00%

Publisher:

Abstract:

Four hybrid algorithms have been developed for the solution of the unit commitment problem. They use simulated annealing as one of the constituent techniques and produce lower-cost schedules; two of them have less overhead than other soft computing techniques. They are also more robust to the choice of parameters. A special technique avoids generating infeasible schedules, and thus reduces computation time.
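As a concrete illustration of the simulated-annealing ingredient and of rejecting infeasible schedules inside the move generator, here is a toy Python sketch; the cost model, unit data, and single-period demand are invented for illustration and are not the paper's formulation.

```python
# Illustrative simulated annealing for a toy single-period unit
# commitment: the move generator only proposes schedules that keep
# committed capacity above demand, mirroring the idea of never
# evaluating infeasible schedules. Not the paper's algorithm.
import math
import random

CAPACITY = [100, 80, 60, 40]   # MW per unit (toy data)
COST     = [10, 12, 15, 20]    # $/MWh when committed (toy data)
DEMAND   = 150                 # MW to cover

def cost(schedule):
    return sum(c * cap for on, c, cap in zip(schedule, COST, CAPACITY) if on)

def feasible(schedule):
    return sum(cap for on, cap in zip(schedule, CAPACITY) if on) >= DEMAND

def neighbour(schedule):
    # Flip one unit, rejecting moves that violate the demand constraint.
    while True:
        s = schedule[:]
        i = random.randrange(len(s))
        s[i] = 1 - s[i]
        if feasible(s):
            return s

def anneal(t0=100.0, cooling=0.95, steps=2000):
    current = [1, 1, 1, 1]               # all units on: trivially feasible
    best, t = current, t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best, cost(best)

print(anneal())
```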

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained from the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of requiring 100%) further enhances the lifetime.
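The following Python sketch conveys the flavour of a low-complexity greedy construction of one full area information cover; the covers() predicate is a toy stand-in for the paper's estimation-based information-coverage criterion, and all positions are synthetic.

```python
# Greedy construction of a full area information cover (sketch): add
# the sensor that newly information-covers the most grid points until
# the whole area is covered. covers() is a toy accuracy criterion.
import math, random

random.seed(1)
GRID = [(x / 9, y / 9) for x in range(10) for y in range(10)]
SENSORS = [(random.random(), random.random()) for _ in range(30)]

def covers(sensor_ids, point, radius=0.35, threshold=0.05):
    # Toy criterion: aggregate proximity-weighted "signal" from the
    # chosen sensors must exceed a threshold, so collaboration can
    # cover points outside any single sensor's physical range.
    signal = sum(max(0.0, radius - math.dist(SENSORS[i], point))
                 for i in sensor_ids)
    return signal >= threshold

def greedy_full_area_cover(available):
    chosen, uncovered = set(), list(GRID)
    while uncovered:
        candidates = available - chosen
        if not candidates:
            return None                      # full cover impossible
        gain, best = max(
            (sum(covers(chosen | {i}, p) for p in uncovered), i)
            for i in candidates)
        if gain == 0:
            return None
        chosen.add(best)
        uncovered = [p for p in uncovered if not covers(chosen, p)]
    return chosen

print(greedy_full_area_cover(set(range(len(SENSORS)))))
```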

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which collectively can sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves longer network life compared to the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
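A disjoint-covers construction can be sketched as repeated cover extraction followed by removal of the used sensors, so that network life grows with the number of covers found; the accuracy test below is a toy stand-in, not the DSIC algorithm itself.

```python
# Sketch of disjoint information covers for point-targets: extract a
# cover, remove its sensors from the pool, repeat. Each cover is then
# activated in turn, so lifetime scales with the number of covers.
import math, random

random.seed(2)
TARGETS = [(0.2, 0.3), (0.7, 0.6), (0.5, 0.9)]
SENSORS = [(random.random(), random.random()) for _ in range(40)]

def senses(sensor_ids, target, radius=0.4, threshold=0.1):
    # Toy criterion: a target is "accurately sensed" if aggregate
    # proximity-weighted signal exceeds a threshold.
    return sum(max(0.0, radius - math.dist(SENSORS[i], target))
               for i in sensor_ids) >= threshold

def one_cover(pool):
    chosen = set()
    for t in TARGETS:
        while not senses(chosen, t):
            candidates = pool - chosen
            if not candidates:
                return None
            # Greedily add the sensor closest to the unsensed target.
            chosen.add(min(candidates, key=lambda i: math.dist(SENSORS[i], t)))
    return chosen

def disjoint_covers():
    pool, covers = set(range(len(SENSORS))), []
    while True:
        c = one_cover(pool)
        if c is None:
            return covers
        covers.append(c)
        pool -= c       # disjointness: each sensor serves one cover only

covers = disjoint_covers()
print(len(covers), "disjoint covers -> lifetime of", len(covers), "rounds")
```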

Relevance:

20.00%

Publisher:

Abstract:

Gastric motility disorders, including delayed gastric emptying (gastroparesis), impaired postprandial fundic relaxation, and gastric myoelectrical disorders, can occur in type 1 diabetes, chronic renal failure, and functional dyspepsia (FD). Symptoms such as upper abdominal pain, early satiation, bloating, nausea, and vomiting may be related to gastroparesis. Diabetic gastroparesis is related to autonomic neuropathy. Scintigraphy, the gold standard for measuring gastric emptying, also gives information about the intragastric distribution of the test meal, but it is expensive, requires specific equipment, and exposes patients to radiation. The 13C-octanoic acid breath test (OBT) is an alternative, indirect method of measuring gastric emptying with a stable isotope. Electrogastrography (EGG) registers the slow wave originating in the pacemaker area of the stomach and regulating the peristaltic contractions of the antrum. This study compares these three methods of measuring gastric motility in patients with type 1 diabetes, functional dyspepsia, and chronic renal failure. Currently no effective drugs for treating gastric motility disorders are available. We studied the effect of nizatidine on gastric emptying, because in preliminary studies this drug has proven to have a prokinetic effect due to its cholinergic properties. Of the type 1 diabetes patients, 26% had delayed gastric emptying of solids as measured by scintigraphy. Abnormal intragastric distribution of the test meal occurred in 37% of the patients, indicating impaired fundic relaxation. The autonomic neuropathy score correlated positively with the gastric emptying rate of solids (P = 0.006), but HbA1c, plasma glucose levels, and abdominal symptoms were unrelated to gastric emptying and to the intragastric distribution of the test meal. Gastric emptying of both solids and liquids was normal in all FD patients, but abnormal intragastric distribution occurred in 38% of them. Nizatidine improved symptom scores and quality of life in FD patients, but not significantly. Instead of enhancing gastric emptying, nizatidine slowed it in FD patients (P < 0.05). No significant difference appeared in the frequency of the gastric slow waves measured by EGG between the patients and controls. The correlation between gastric half-emptying times of solids measured by scintigraphy and by OBT was poor in both type 1 diabetes and FD patients. According to this study, dynamic dual-tracer scintigraphy is more accurate than OBT or EGG in measuring gastric emptying of solids. Additionally, it provides information about gastric emptying of liquids and the intragastric distribution of the ingested test meal.

Relevance:

20.00%

Publisher:

Abstract:

Based on the measurements of Alcock and Zador, Grundy et al. estimated an uncertainty of the order of $\pm 5$ kJ mol$^{-1}$ for the standard Gibbs energy of formation of MnO in a recent assessment. Since the evaluation of thermodynamic data for the higher oxides Mn$_3$O$_4$, Mn$_2$O$_3$, and MnO$_2$ depends on values for MnO, a redetermination of its Gibbs energy of formation was undertaken in the temperature range from 875 to 1300 K using a solid-state electrochemical cell incorporating yttria-doped thoria (YDT) as the solid electrolyte and Fe + Fe$_{1-\delta}$O as the reference electrode. The cell can be represented as: Pt, Mn + MnO / YDT / Fe + Fe$_{1-\delta}$O, Pt. Since the metals Fe and Mn undergo phase transitions in the temperature range of measurement, the reversible emf of the cell is represented by three linear segments. Combining the emf with the oxygen potential for the reference electrode, the standard Gibbs energy of formation of MnO from $\alpha$-Mn and gaseous diatomic oxygen in the temperature range from 875 to 980 K is obtained as: $\Delta G_f^{\circ}/\mathrm{J\,mol^{-1}}\,(\pm 250) = -385624 + 73.071\,T$. From 980 to 1300 K, the Gibbs energy of formation of MnO from $\beta$-Mn and oxygen gas is given by: $\Delta G_f^{\circ}/\mathrm{J\,mol^{-1}}\,(\pm 250) = -387850 + 75.36\,T$. The new data are in excellent agreement with the earlier measurements of Alcock and Zador. Grundy et al. incorrectly analyzed the data of Alcock and Zador, which showed a relatively large difference ($\pm 5$ kJ mol$^{-1}$) in the Gibbs energies of MnO obtained from their two cells with Fe + Fe$_{1-\delta}$O and Ni + NiO as reference electrodes. Thermodynamic data for MnO are reassessed in the light of the new measurements. A table of refined thermodynamic data for MnO from 298.15 to 2000 K is presented.
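The two fitted segments are straightforward to evaluate. The sketch below checks their near-continuity at 980 K and converts $\Delta G_f^{\circ}$ into the implied equilibrium oxygen partial pressure via $\Delta G_f^{\circ} = \tfrac{1}{2} R T \ln p_{\mathrm{O}_2}$ for Mn + $\tfrac{1}{2}$O$_2$ = MnO (pure solids at unit activity).

```python
# Quick numerical check of the fitted segments quoted above: evaluate
# Delta_G_f(MnO) on both branches and the implied equilibrium oxygen
# pressure, using dG = (1/2) R T ln pO2 for Mn + 1/2 O2 = MnO.
import math

R = 8.314  # J mol^-1 K^-1

def dG_MnO(T):  # J mol^-1, from the emf fits quoted in the abstract
    if 875 <= T < 980:     # alpha-Mn branch
        return -385624 + 73.071 * T
    if 980 <= T <= 1300:   # beta-Mn branch
        return -387850 + 75.36 * T
    raise ValueError("outside fitted range 875-1300 K")

def pO2(T):
    # dG = -RT ln K with K = pO2**(-1/2)  =>  pO2 = exp(2 dG / (R T))
    return math.exp(2 * dG_MnO(T) / (R * T))

for T in (900, 979.9, 980, 1100, 1300):
    print(f"T={T:7.1f} K  dG={dG_MnO(T)/1000:8.2f} kJ/mol  pO2={pO2(T):.3e} atm")
```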

Relevance:

20.00%

Publisher:

Abstract:

We have measured hyperfine structure in the first-excited P state (D lines) of all the naturally occurring alkali atoms. We use high-resolution laser spectroscopy to resolve hyperfine transitions, and measure intervals by locking the frequency shift produced by an acousto-optic modulator to the difference between two transitions. In most cases, the hyperfine coupling constants derived from our measurements improve previous values significantly.
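For reference, the hyperfine intervals that such measurements determine follow from the standard magnetic-dipole/electric-quadrupole formula in terms of the coupling constants A and B; the sketch below evaluates it with illustrative A and B values, not the paper's results.

```python
# Hyperfine level shifts from the standard formula:
#   E(F) = (A/2) K + B [ (3/2)K(K+1) - 2 I(I+1) J(J+1) ]
#                      / [ 4 I(2I-1) J(2J-1) ],
# with K = F(F+1) - I(I+1) - J(J+1). A and B below are placeholders.

def hyperfine_shift(F, I, J, A, B):
    """Energy shift (same units as A, B) of hyperfine level F."""
    K = F * (F + 1) - I * (I + 1) - J * (J + 1)
    shift = A * K / 2
    if B and I > 0.5 and J > 0.5:  # quadrupole term defined only then
        shift += B * (1.5 * K * (K + 1) - 2 * I * (I + 1) * J * (J + 1)) / (
            4 * I * (2 * I - 1) * J * (2 * J - 1))
    return shift

# Intervals between adjacent F levels for a J = 3/2 state of an
# I = 3/2 alkali isotope, with assumed A, B in MHz:
A_hfs, B_hfs = 25.0, 20.0   # MHz, illustrative
for F in range(0, 3):
    upper = hyperfine_shift(F + 1, 1.5, 1.5, A_hfs, B_hfs)
    lower = hyperfine_shift(F, 1.5, 1.5, A_hfs, B_hfs)
    print(f"F={F}->F'={F + 1}: interval {upper - lower:.2f} MHz")
```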

Relevance:

20.00%

Publisher:

Abstract:

With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next-generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
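A minimal example of what "channel-aware" allocation means at the MAC layer is the proportional-fair rule sketched below: each OFDM subcarrier goes to the user with the largest ratio of instantaneous achievable rate to running average throughput. The traffic model and rates are toy assumptions, not the paper's algorithms.

```python
# Channel-aware OFDM subcarrier allocation (proportional fairness):
# riding channel peaks raises capacity, while dividing by average
# throughput keeps starved users competitive. Toy model only.
import random

random.seed(0)
N_USERS, N_SUBCARRIERS, SLOTS = 4, 16, 1000
avg_tput = [1e-6] * N_USERS            # running average throughput
BETA = 0.01                            # averaging factor

for _ in range(SLOTS):
    # Rayleigh-like per-user, per-subcarrier achievable rates (toy).
    rate = [[random.expovariate(1.0) for _ in range(N_SUBCARRIERS)]
            for _ in range(N_USERS)]
    slot_rate = [0.0] * N_USERS
    for sc in range(N_SUBCARRIERS):
        # Proportional-fair metric: instantaneous rate / average rate.
        u = max(range(N_USERS), key=lambda k: rate[k][sc] / avg_tput[k])
        slot_rate[u] += rate[u][sc]
    for k in range(N_USERS):
        avg_tput[k] = (1 - BETA) * avg_tput[k] + BETA * slot_rate[k]

print("long-run per-user throughput:", [round(t, 2) for t in avg_tput])
```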

Relevance:

20.00%

Publisher:

Abstract:

The differential encoding/decoding setup introduced by Kiran et al., Oggier et al., and Jing et al. for wireless relay networks that use codebooks consisting of unitary matrices is extended to allow codebooks consisting of scaled unitary matrices. For such codebooks to be used in the Jing-Hassibi protocol for cooperative diversity, the conditions that need to be satisfied by the relay matrices and the codebook are identified. A class of previously known rate-one, full-diversity, four-group encodable and four-group decodable Differential Space-Time Codes (DSTCs) is proposed for use as Distributed DSTCs (DDSTCs) in the proposed setup. To the best of our knowledge, this is the first known low decoding complexity DDSTC scheme for cooperative wireless networks.
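The basic differential step for scaled unitary codebooks can be sketched as follows: each transmitted block is the previous block multiplied by the current codeword and renormalized by the previous codeword's scale, and the receiver decodes by comparing consecutive blocks without channel knowledge. The 2x2 real-rotation codebook below is only an illustrative stand-in for the DSTCs in the paper, shown over a noiseless channel for clarity.

```python
# Sketch of differential encoding/decoding with scaled unitary
# codewords: S[t] = C[t] @ S[t-1] / scale(C[t-1]), where each codeword
# is a scalar multiple of a unitary matrix. Illustrative only.
import numpy as np

def unitary(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

# Toy codebook: scaled 2x2 unitary matrices (the scale also carries bits).
CODEBOOK = [s * unitary(th) for s in (1.0, 0.5) for th in (0.0, np.pi / 2)]

def scale(C):
    return np.linalg.norm(C, 2)          # spectral norm = |scale| of C

def encode(symbols):
    S, prev_scale = np.eye(2, dtype=complex), 1.0   # reference block
    blocks = [S]
    for idx in symbols:
        C = CODEBOOK[idx]
        S = C @ S / prev_scale           # differential step
        prev_scale = scale(C)
        blocks.append(S)
    return blocks

def decode(received):
    out = []
    for prev, cur in zip(received, received[1:]):
        prev_scale = 1.0 if not out else scale(CODEBOOK[out[-1]])
        errs = [np.linalg.norm(cur - C @ prev / prev_scale) for C in CODEBOOK]
        out.append(int(np.argmin(errs)))
    return out

msg = [0, 3, 2, 1]
print(decode(encode(msg)) == msg)        # noiseless channel -> True
```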

Relevance:

20.00%

Publisher:

Abstract:

Optimizing the energy consumption of existing synchronization mechanisms can lead to substantial gains in network life in Wireless Sensor Networks (WSNs). In this paper, we analyze ERBS and TPSN, two existing synchronization algorithms for WSNs that use widely different approaches, and compare their performance in large-scale WSNs, each consisting of a different type of platform and having varying node density. We then propose a novel algorithm, PROBESYNC, which takes advantage of the differences in the power required to transmit and receive a message under ERBS and TPSN, and mitigates the shortcomings of each of these algorithms. This leads to considerable improvement in energy conservation and an enhanced lifetime of large-scale WSNs.
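The message-count and power asymmetry between the two schemes comes from how they obtain timestamps: TPSN-style sender-receiver synchronization uses the two-way exchange sketched below, while ERBS-style receiver-receiver schemes instead compare reception times of a common beacon. The timestamps here are toy values.

```python
# TPSN-style two-way handshake: a single round trip yields both the
# clock offset and the propagation delay. Toy timestamps only.

def tpsn_offset(t1, t2, t3, t4):
    """A sends at t1, B receives at t2 and replies at t3, A receives
    at t4 (t2, t3 on B's clock; t1, t4 on A's clock).
    Returns (offset of B relative to A, one-way delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: B's clock runs 5 ms ahead, true one-way delay is 2 ms.
t1 = 100.0
t2 = t1 + 2 + 5        # arrival at B, on B's (offset) clock
t3 = t2 + 1            # B's processing time before replying
t4 = t3 - 5 + 2        # arrival back at A, on A's clock
print(tpsn_offset(t1, t2, t3, t4))   # -> (5.0, 2.0)
```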

Relevance:

20.00%

Publisher:

Abstract:

Objective: To perform spectral analysis of noise generated by equipment and activities in a level III neonatal intensive care unit (NICU) and to measure real-time sequential hourly noise levels over a 15-day period. Methods: Noise generated in the NICU by individual items of equipment and by activities was recorded with a digital spectral sound analyzer to perform spectral analysis over 0.5-8 kHz. Sequential hourly noise level measurements in all rooms of the NICU were made for 15 days using a digital sound pressure level meter. The independent-sample t test and one-way ANOVA were used to examine the statistical significance of the results. The study has 90% power to detect differences of at least 4 dB from the recommended maximum of 50 dB with 95% confidence. Results: The mean noise levels in the ventilator room and the stable room were 19.99 dB(A) sound pressure level (SPL) and 11.81 dB(A) SPL above the recommended maximum of 50 dB(A), respectively (p < 0.001). Equipment generated 19.11 dB SPL above the recommended norms in the 1-8 kHz spectrum, and activities generated 21.49 dB SPL above the recommended norms in the same spectrum (p < 0.001). The ventilator and nebulisers produced excess noise of 8.5 dB SPL in the 0.5 kHz spectrum. Conclusion: Noise levels in the NICU are unacceptably high. Spectral analysis of equipment and activity noise has shown noise predominantly in the 1-8 kHz spectrum. These levels warrant immediate implementation of noise reduction protocols as a standard of care in the NICU.
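One methodological point worth making explicit: mean SPLs such as those reported above must be averaged on the energy scale, not as raw dB values. A short sketch, with illustrative readings rather than study data:

```python
# Energy-averaging of sound pressure levels: convert dB to power,
# average, convert back. Readings are illustrative, not study data.
import math

def mean_spl(levels_db):
    """Energy-average of SPL readings in dB."""
    mean_power = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

hourly = [68, 72, 65, 70, 75, 62]   # illustrative hourly dB(A) readings
print(f"arithmetic mean: {sum(hourly) / len(hourly):.2f} dB(A)")
print(f"energy mean:     {mean_spl(hourly):.2f} dB(A)")  # higher, correct
```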

Relevance:

20.00%

Publisher:

Abstract:

In recent years there has been growing interest in selecting suitable wood raw material to increase end product quality and to increase the efficiency of industrial processes. Genetic background and growing conditions are known to affect the properties of growing trees, but only a few parameters reflecting wood quality, such as volume and density, can be measured on an industrial scale. Therefore, research on cellular-level structures of trees grown in different conditions is needed to increase understanding of the growth process of trees leading to desired wood properties. In this work the cellular and cell wall structures of wood were studied. Parameters such as the mean microfibril angle (MFA), the spiral grain angle, the fibre length, the tracheid cell wall thickness, and the cross-sectional shape of the tracheid were determined as a function of distance from the pith towards the bark, and the mutual dependencies of these parameters were discussed. Samples from fast-grown trees belonging to the same clone, grown in fertile soil, and from fertilised trees were measured. It was found that in fast-grown trees the mean MFA decreased more gradually from the pith to the bark than in reference stems. In fast-grown samples cells were shorter, more thin-walled, and their cross-sections were rounder than in slower-grown reference trees. Increased growth rate was found to cause an increase in spiral grain variation both within and between annual rings. Furthermore, methods for determination of the mean MFA using X-ray diffraction were evaluated. Several experimental arrangements, including synchrotron radiation based microdiffraction, were compared. For evaluation of the data analysis procedures, a general form for the diffraction conditions in terms of angles describing the fibre orientation and the shape of the cell was derived. The effects of these parameters on the obtained microfibril angles were discussed. The use of symmetrical transmission geometry and tangentially cut samples gave the most reliable MFA values.
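A common way to obtain a mean MFA estimate from an X-ray diffraction pattern is to fit symmetric peaks to the azimuthal intensity profile of the cellulose 200 reflection; the sketch below does this on synthetic data and is only an illustration of the general approach, not the evaluation performed in this work.

```python
# Estimating the mean microfibril angle (MFA) from an azimuthal
# intensity profile: fit a pair of Gaussians at +/- MFA around the
# cell-axis direction and read off the peak position. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(phi, mfa, sigma, amp, base):
    # Symmetric peaks at +/- MFA (degrees) on a flat background.
    return (amp * np.exp(-0.5 * ((phi - mfa) / sigma) ** 2)
            + amp * np.exp(-0.5 * ((phi + mfa) / sigma) ** 2) + base)

phi = np.linspace(-60, 60, 241)                 # azimuth angle, degrees
rng = np.random.default_rng(3)
profile = two_gaussians(phi, 18.0, 8.0, 100.0, 5.0) + rng.normal(0, 2, phi.size)

popt, _ = curve_fit(two_gaussians, phi, profile, p0=[15, 10, 80, 0])
print(f"estimated mean MFA: {popt[0]:.1f} degrees")   # ~18
```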

Relevance:

20.00%

Publisher:

Abstract:

We consider a scenario in which a wireless sensor network is formed by randomly deploying $n$ sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as $\Theta(\sqrt{n/\log n})$ and $\Theta(n)$ asymptotically as the number of sensors $n \to \infty$. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as $n \to \infty$. In particular, we show that the computation time for these algorithms scales as $\Theta(\sqrt{n/\log n})$, $\Theta(n)$, and $\Theta(\sqrt{n \log n})$, respectively, whereas the energy expended scales as $\Theta(n)$, $\Theta(\sqrt{n/\log n})$, and $\Theta(\sqrt{n \log n})$, respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
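A small simulation in the spirit of these scaling laws: build a geometric random graph in the connectivity regime, compute the max by synchronous flooding, and count rounds until the operator station holds the true max. This toy model is not one of the paper's three algorithms; it just exercises the $\Theta(\sqrt{n/\log n})$-type growth of flooding time, which tracks the graph diameter $\sim 1/r(n)$.

```python
# Max computation by synchronous flooding on a geometric random graph:
# each round, every node adopts the largest value among itself and its
# neighbours. Rounds until node 0 (operator station) holds the true
# max grow roughly like the graph diameter. Toy model only.
import math, random

def rounds_to_max(n, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    r = math.sqrt(2 * math.log(n) / n)        # connectivity-regime radius
    nbrs = [[j for j in range(n) if j != i
             and math.dist(pts[i], pts[j]) <= r] for i in range(n)]
    vals = [rng.random() for _ in range(n)]
    true_max, rounds = max(vals), 0
    while vals[0] < true_max and rounds < n:  # cap guards a rare disconnect
        vals = [max([vals[i]] + [vals[j] for j in nbrs[i]]) for i in range(n)]
        rounds += 1
    return rounds

for n in (50, 100, 200, 400):
    print(n, rounds_to_max(n))
```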

Relevance:

20.00%

Publisher:

Abstract:

Our attention is focused on designing an optimal procurement mechanism which a buyer can use for procuring multiple units of a homogeneous item based on bids submitted by autonomous, rational, and intelligent suppliers. We design elegant optimal procurement mechanisms for two different situations. In the first situation, each supplier specifies the maximum quantity that can be supplied together with a per-unit price. For this situation, we design an optimal mechanism, S-OPT (Optimal with Simple bids). In the more general case, each supplier specifies discounts based on the volume of supply. In this case, we design an optimal mechanism, VD-OPT (Optimal with Volume Discount bids). The VD-OPT mechanism uses the S-OPT mechanism as a building block. The proposed mechanisms minimize the cost to the buyer while satisfying (a) Bayesian incentive compatibility and (b) interim individual rationality.
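For the simple-bids setting, the allocation part of a cost-minimizing procurement can be sketched as greedy filling by unit price; the incentive-compatible payment rule of S-OPT (which works with virtual costs) is deliberately omitted here, and the bid data are illustrative.

```python
# Greedy cost-minimizing allocation for simple bids (unit price plus
# maximum quantity): fill demand from the cheapest suppliers first.
# Payments of the actual optimal mechanism are not reproduced here.

def allocate(bids, demand):
    """bids: list of (unit_price, max_qty); returns per-bid quantities."""
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])
    alloc, remaining = [0] * len(bids), demand
    for i in order:
        price, cap = bids[i]
        take = min(cap, remaining)
        alloc[i] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("total supply below demand")
    return alloc

bids = [(12.0, 40), (10.0, 30), (15.0, 100)]   # illustrative (price, qty)
print(allocate(bids, demand=60))                # -> [30, 30, 0]
```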