906 results for Placement of router nodes


Relevance:

100.00%

Publisher:

Abstract:

By using near infrared spectroscopy (NIRS) and by modifying the current Somanetics® optodes used with the INVOS oximeter, the modified optodes are made to be functional not only across the forehead but also across the hairy regions of the scalp. A major problem arises in positioning these optodes on the patient's scalp and holding them in place while recording data. Another problem is the inconsistent repeatability of the trends displayed in the recorded data. A method was developed to facilitate easy placement of these optodes on the patient's scalp while keeping the patient's comfort in mind. The sensitivity of the optodes was also improved by incorporating better-refined techniques for manufacturing the fiber optic brushes and attaching them to the optode transmitting and receiving windows. The modified and improved optodes, in both the single and the multiplexed modes, were subjected to various tests on different areas of the brain to determine their efficiency and functionality.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to analyze network performance by observing the effect of varying network size and data link rate on one of the most commonly found network configurations. Computer networks have been growing explosively. Networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Communication takes place through networks that form the basis of transfer of information. The number and type of components may vary from network to network depending on several factors, such as requirements and the actual physical placement of the networks. Networks have no fixed size: they can be very small, consisting of, say, five to six nodes, or very large, consisting of over two thousand nodes. The varying network sizes make it very important to study network performance so as to be able to predict the functioning and the suitability of the network. The findings demonstrated that network performance parameters such as global delay, load, router processor utilization, and router processor delay are affected significantly by the increase in the size of the network, and that there exists a correlation between the various parameters and the size of the network. These variations depend not only on the magnitude of the change in the actual physical area of the network but also on the data link rate used to connect the various components of the network.
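As a back-of-the-envelope illustration of why delay depends on both network size and data link rate, the sketch below uses a simple M/M/1 estimate; it is not the simulation model used in the study, and the packet size, per-node traffic, and the sizes/rates swept over are assumed values.

```python
# Hypothetical sketch (not the study's simulation): an M/M/1-style estimate of
# queueing delay on a shared link, to illustrate how delay grows with the
# number of nodes and shrinks with the data link rate.

PACKET_BITS = 12_000          # assumed mean packet size (1500 bytes)
TRAFFIC_PER_NODE = 50.0       # assumed packets/s generated by each node

def mean_delay(num_nodes: int, link_rate_bps: float) -> float:
    """Mean M/M/1 delay (s) on a link carrying all nodes' aggregate traffic."""
    service_rate = link_rate_bps / PACKET_BITS      # packets/s the link can serve
    arrival_rate = num_nodes * TRAFFIC_PER_NODE     # offered load in packets/s
    if arrival_rate >= service_rate:
        return float("inf")                          # link saturated
    return 1.0 / (service_rate - arrival_rate)

for n in (6, 50, 500, 2000):
    for rate in (10e6, 100e6):
        print(f"{n:5d} nodes @ {rate/1e6:>5.0f} Mb/s: "
              f"delay = {mean_delay(n, rate)*1e3:.3f} ms")
```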

Relevance:

100.00%

Publisher:

Abstract:

The problem of intrusion detection and location identification in the presence of clutter is considered for a hexagonal sensor-node geometry. It is noted that in any practical application, for a given fixed intruder or clutter location, only a small number of neighboring sensor nodes will register a significant reading. Thus sensing may be regarded as a local phenomenon, and performance is strongly dependent on the local geometry of the sensor nodes. We focus on the case when the sensor nodes form a hexagonal lattice. The optimality of the hexagonal lattice with respect to packing density, covering density, and kissing number suggests that this is the best possible arrangement from a sensor network viewpoint. The results presented here are clearly relevant when the particular sensing application permits a deterministic placement of sensors. The results also serve as a performance benchmark for the case of a random deployment of sensors. A novel feature of our analysis of the hexagonal sensor grid is a signal-space viewpoint which sheds light on achievable performance. Under this viewpoint, the problem of intruder detection is reduced to one of determining, in a distributed manner, the optimal decision boundary that separates the signal spaces S_I and S_C associated with the intruder and clutter, respectively. Given the difficulty of implementing the optimal detector, we present a low-complexity distributed algorithm under which the surfaces S_I and S_C are separated by a well-chosen hyperplane. The algorithm is designed to be efficient in terms of communication cost by minimizing the expected number of bits transmitted by a sensor.
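As a loose illustration of separating the intruder and clutter signal classes with a hyperplane, the sketch below classifies one-bit readings from the six nearest nodes of a hexagonal cell; the weights, thresholds, and quantization are assumptions for illustration, not the paper's decision boundary or bit-allocation scheme.

```python
import numpy as np

# Illustrative hyperplane-based fusion of local sensor bits into an
# "intruder" vs "clutter" decision (all parameters are assumed values).

def local_bit(reading: float, threshold: float = 0.5) -> int:
    """Each node reports a single bit: did its reading exceed a local threshold?"""
    return int(reading > threshold)

def fusion_decision(bits: np.ndarray, weights: np.ndarray, offset: float) -> str:
    """Separate the two classes with the hyperplane w.x - b = 0."""
    return "intruder" if float(weights @ bits) - offset > 0 else "clutter"

# Readings from the 6 nearest nodes of a hexagonal cell around the event.
readings = np.array([0.9, 0.7, 0.6, 0.2, 0.1, 0.1])
bits = np.array([local_bit(r) for r in readings])
print(fusion_decision(bits, weights=np.ones(6), offset=2.5))
```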

Relevance:

100.00%

Publisher:

Abstract:

The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance with ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
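A minimal sketch of the timer mechanism described above: each node maps its metric to a timer through a decreasing function, and selection succeeds only if no other timer expires within Δ of the best node's timer. The inverse-metric mapping and the parameter values are illustrative assumptions, not the optimal mapping derived in the paper.

```python
import random

DELTA = 0.01      # assumed vulnerability window (s)
T_MAX = 1.0       # assumed maximum timer value (s)

def timer(metric: float) -> float:
    """Higher metric -> earlier expiry (a simple illustrative mapping)."""
    return T_MAX / (1.0 + metric)

def selection_succeeds(metrics: list[float]) -> bool:
    expiries = sorted(timer(m) for m in metrics)
    return expiries[1] - expiries[0] >= DELTA   # runner-up must expire >= DELTA later

trials = 10_000
wins = sum(selection_succeeds([random.random() for _ in range(10)]) for _ in range(trials))
print(f"empirical success probability: {wins / trials:.3f}")
```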

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we study a problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS), by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or were worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem as a function of the number of source nodes, and the hop count bound. To the best of our knowledge, the average case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met.
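As a rough illustration of the shortest-path-then-prune idea (not the paper's exact procedure), the sketch below keeps dropping candidate relays as long as every source still reaches the base station within the hop bound; the toy topology and the pruning order are assumptions.

```python
import networkx as nx

def prune_relays(g: nx.Graph, sources, relays, bs, hop_bound: int):
    """Drop relays one at a time as long as every source still reaches the
    base station within hop_bound hops."""
    kept = set(relays)

    def feasible(active):
        h = g.subgraph(list(sources) + list(active) + [bs])
        try:
            return all(nx.shortest_path_length(h, s, bs) <= hop_bound for s in sources)
        except nx.NetworkXNoPath:
            return False

    for r in list(relays):
        if feasible(kept - {r}):
            kept.remove(r)
    return kept

# Toy topology: two sources, three candidate relays, one base station.
g = nx.Graph([("s1", "r1"), ("s2", "r1"), ("r1", "r2"), ("r2", "bs"),
              ("s1", "r3"), ("r3", "bs")])
print(prune_relays(g, ["s1", "s2"], ["r1", "r2", "r3"], "bs", hop_bound=3))
```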

Relevance:

100.00%

Publisher:

Abstract:

We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and among all such feasible networks, the cost of the selected network is the minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard, and is hard to even approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
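The abstract mentions a modification of the greedy algorithm for weighted set cover; the sketch below shows only the standard greedy heuristic on a toy instance (the costs and coverage sets are invented for illustration), not the SmartSelect modification itself.

```python
# Standard greedy heuristic for weighted set cover: repeatedly pick the set
# with the lowest cost per newly covered element. Toy data only.

def greedy_weighted_set_cover(universe, sets, costs):
    uncovered, chosen = set(universe), []
    while uncovered:
        best = min(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: costs[name] / len(sets[name] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sensors = {"a", "b", "c", "d"}
coverage = {"BS1": {"a", "b"}, "BS2": {"b", "c", "d"}, "BS3": {"a", "c", "d"}}
cost = {"BS1": 3.0, "BS2": 4.0, "BS3": 5.0}
print(greedy_weighted_set_cover(sensors, coverage, cost))
```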

Relevance:

100.00%

Publisher:

Abstract:

AIMS: To compare the performance of ultrasound elastography with conventional ultrasound in the assessment of axillary lymph nodes in suspected breast cancer, and to determine whether ultrasound elastography as an adjunct to conventional ultrasound can increase the sensitivity of conventional ultrasound used alone. MATERIALS AND METHODS: Fifty symptomatic women with a sonographic suspicion of breast cancer underwent ultrasound elastography of the ipsilateral axilla concurrent with conventional ultrasound performed as part of triple assessment. Elastograms were visually scored, strain measurements were calculated, and node area and perimeter measurements were taken. Theoretical biopsy cut points were selected. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated, and receiver operating characteristic (ROC) analysis was performed and compared for elastograms and conventional ultrasound images, with surgical histology as the reference standard. RESULTS: The mean age of the women was 57 years. Twenty-nine of the 50 nodes were negative on surgical histology and 21 were positive. The sensitivity, specificity, PPV, and NPV for conventional ultrasound were 76, 78, 70, and 81%, respectively; 90, 86, 83, and 93%, respectively, for visual ultrasound elastography; and 100, 48, 58, and 100%, respectively, for strain scoring. There was no significant difference between any of the node measurements. CONCLUSIONS: Initial experience with ultrasound elastography of axillary lymph nodes showed that it is more sensitive than conventional ultrasound in detecting abnormal nodes in the axilla in cases of suspected breast cancer. The specificity remained acceptable, and ultrasound elastography used as an adjunct to conventional ultrasound has the potential to improve the performance of conventional ultrasound alone.

Relevance:

100.00%

Publisher:

Abstract:

Background. Thoracic epidural catheters provide the best quality postoperative pain relief for major abdominal and thoracic surgical procedures, but placement is one of the most challenging procedures in the repertoire of an anesthesiologist. Most patients presenting for a procedure that would benefit from a thoracic epidural catheter have already had high resolution imaging that may be useful to assist placement of a catheter. Methods. This retrospective study used data from 168 patients to examine the association and predictive power of epidural-skin distance (ESD) on computed tomography (CT) to determine loss of resistance depth acquired during epidural placement. Additionally, the ability of anesthesiologists to measure this distance was compared to a radiologist, who specializes in spine imaging. Results. There was a strong association between CT measurement and loss of resistance depth (P < 0.0001); the presence of morbid obesity (BMI > 35) changed this relationship (P = 0.007). The ability of anesthesiologists to make CT measurements was similar to a gold standard radiologist (all individual ICCs > 0.9). Conclusions. Overall, this study supports the examination of a recent CT scan to aid in the placement of a thoracic epidural catheter. Making use of these scans may lead to faster epidural placements, fewer accidental dural punctures, and better epidural blockade.

Relevance:

100.00%

Publisher:

Abstract:

Blast-induced Traumatic Brain Injury (bTBI) is the signature injury of the Iraq and Afghanistan wars; however, current understanding of bTBI is insufficient. In this study, novel analysis methods were developed to investigate correlations between external pressures and brain injury predictors. Experiments and simulations were performed to analyze the placement of helmet-mounted pressure sensors. A 2D Finite Element model of a helmeted head cross-section was loaded with a blast wave. Pressure time-histories for nodes on the inner and outer surfaces of the helmet were cross-correlated with those inside the brain. Parallel physical experiments were carried out with a helmeted headform, pressure sensors, and a pressure chamber. These analysis methods can potentially lead to better helmet designs and earlier detection and treatment of bTBI.
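A small sketch of cross-correlating a helmet-surface pressure trace with an intracranial trace, of the kind such an analysis might use; the synthetic signals, sampling rate, and lag are invented for illustration and are not the study's data or code.

```python
import numpy as np

fs = 100_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms window
brain = np.exp(-((t - 0.003) / 0.0005) ** 2)   # synthetic intracranial pulse
helmet = np.exp(-((t - 0.002) / 0.0005) ** 2)  # same pulse arriving 1 ms earlier

def normalized_peak_xcorr(x: np.ndarray, y: np.ndarray) -> float:
    """Peak of the normalized cross-correlation between two traces."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.max(np.correlate(x, y, mode="full")) / len(x))

print(f"peak correlation: {normalized_peak_xcorr(helmet, brain):.3f}")
```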

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To evaluate the placement of composite materials by new graduates using three alternative placement techniques. Methods: A cohort of 34 recently qualified graduates was asked to restore class II interproximal cavities in plastic teeth using three different techniques:

(i) A conventional incremental filling technique (Herculite XRV), using increments no larger than 2 mm with an initial 1-mm layer on the cervical floor of the box.
(ii) A flowable bulk-fill technique (Dentsply SDR): bulk-fill placement in a 3-mm layer followed by an incremental fill with a microhybrid resin.
(iii) A bulk-fill technique (Kerr Sonicfill), in which restorations were placed in a 5-mm layer.

The operators were instructed in each technique, didactically and with a hands-on demonstration, prior to restoration placement.
All restorations were cured according to manufacturer’s recommendations. Each participant restored 3 teeth, 1 tooth per treatment technique.
The restorations were evaluated using modified USPHS criteria to assess both the marginal adaptation and the surface texture of the restorations. Blind evaluations were carried out independently by two examiners with the aid of magnification (loupes X2.5). Examiners were standardized prior to evaluation.
Results: Gaps between the tooth margins and the restoration, or between the layers of the restoration, were found in 13 of Group (i), 3 of Group (ii), and 4 of Group (iii).
Statistical analysis revealed a significant difference between the incrementally filled group (i) and the flowable bulk-fill group (ii) (p=0.0043), and between the incrementally filled group (i) and the bulk-fill group (iii) (p=0.012), with no statistical difference (p=0.69) between the two bulk-filled groups. Conclusions: Bulk-fill techniques may result in a more satisfactory seal of the cavity margins when restoring with composite.

Relevance:

100.00%

Publisher:

Abstract:

Time-sensitive Wireless Sensor Network (WSN) applications require finite delay bounds in critical situations. This paper provides a methodology for the modeling and the worst-case dimensioning of cluster-tree WSNs. We provide a fine model of the worst-case cluster-tree topology characterized by its depth, the maximum number of child routers and the maximum number of child nodes for each parent router. Using Network Calculus, we derive “plug-and-play” expressions for the end-to-end delay bounds, buffering and bandwidth requirements as a function of the WSN cluster-tree characteristics and traffic specifications. The cluster-tree topology has been adopted by many cluster-based solutions for WSNs. We demonstrate how to apply our general results for dimensioning IEEE 802.15.4/Zigbee cluster-tree WSNs. We believe that this paper shows the fundamental performance limits of cluster-tree wireless sensor networks by the provision of a simple and effective methodology for the design of such WSNs.
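As a generic illustration of the kind of closed-form bound Network Calculus yields (not the paper's cluster-tree expressions), consider a token-bucket arrival curve served by a rate-latency server with R ≥ r; the standard delay and backlog bounds are:

```latex
% Standard Network Calculus bounds (generic example, not the paper's formulas):
% arrival curve \alpha, rate-latency service curve \beta, with R \ge r.
\[
  \alpha(t) = b + r\,t, \qquad
  \beta(t) = R\,(t - T)^{+}, \qquad
  D_{\max} \le T + \frac{b}{R}, \qquad
  B_{\max} \le b + r\,T .
\]
```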

Relevance:

100.00%

Publisher:

Abstract:

Phosphorus (P) deficiency is a major constraint to pearl millet (Pennisetum glaucum L.) growth on the acid sandy soils of the West African Sahel. To develop cost-effective fertilization strategies for cash-poor farmers, experiments with pearl millet were conducted in southwestern Niger. Treatments comprised single superphosphate hill-placed at rates of 1, 3, 5 or 7 kg P ha^−1, factorially combined with broadcast P at a rate of 13 kg ha^−1. Nitrogen was applied as calcium ammonium nitrate at rates of 30 and 45 kg ha^−1. At low soil moisture, placement of single superphosphate in immediate proximity to the seed reduced seedling emergence. Despite these negative effects on germination, P placement resulted in much faster growth of millet seedlings than did broadcast P. With P application, potassium nutrition of millet was improved and seedling nitrogen uptake increased two- to three-fold, indicating that nitrogen was not limiting early millet growth. Averaged over the 1995 and 1996 cropping seasons, placed applications of 3, 5 and 7 kg P ha^−1 led to 72%, 81%, and 88%, respectively, of the grain yield produced by broadcasting 13 kg P ha^−1. Nitrogen application did not show major effects on grain yield unless P requirements were met. A simple economic analysis revealed that the profitability of P application, defined as additional income per unit of fertilizer, was highest for P placement at 3 and 5 kg ha^−1.

Relevance:

100.00%

Publisher:

Abstract:

This paper reviews variables that influence the placement of a hearing-impaired child into a special education program rather than mainstreaming into a public school.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the results of the application of a parallel Genetic Algorithm (GA) to the design of a Fuzzy Proportional Integral (FPI) controller for active queue management on Internet routers. Active Queue Management (AQM) policies are router queue management policies that allow the detection of network congestion, the notification of such occurrences to the hosts on the network borders, and the adoption of a suitable control policy. Two different parallel implementations of the genetic algorithm are adopted to determine an optimal configuration of the FPI controller parameters. Finally, the results of several experiments carried out on a forty-node cluster of workstations are presented.
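A minimal sketch of a GA that evaluates candidate fuzzy-PI parameter sets in parallel; the gene layout (Kp, Ki, output scaling), the placeholder fitness function, and the GA settings are assumptions, not the paper's implementation or its parallelization scheme.

```python
import random
from multiprocessing import Pool

def fitness(genes):
    kp, ki, scale = genes
    # Placeholder objective: penalize distance from an assumed "good" region.
    return -((kp - 0.8) ** 2 + (ki - 0.2) ** 2 + (scale - 1.5) ** 2)

def evolve(pop, generations=20, workers=4):
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)              # evaluate in parallel
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: len(pop) // 2]            # keep the fitter half
            pop = parents + [
                tuple(g + random.gauss(0, 0.05) for g in random.choice(parents))
                for _ in range(len(pop) - len(parents))  # mutate to refill
            ]
    return max(pop, key=fitness)

if __name__ == "__main__":
    population = [tuple(random.uniform(0, 2) for _ in range(3)) for _ in range(40)]
    print("best (Kp, Ki, scale):", evolve(population))
```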

Relevance:

100.00%

Publisher:

Abstract:

An interconnection network with n nodes is four-pancyclic if it contains a cycle of length l for each integer l with 4 <= l <= n. An interconnection network is fault-tolerant four-pancyclic if the surviving network is four-pancyclic in the presence of faults. The fault-tolerant four-pancyclicity of interconnection networks is a desired property because many classical parallel algorithms can be mapped onto such networks in a communication-efficient fashion, even in the presence of failing nodes or edges. Due to some attractive properties as compared with its hypercube counterpart of the same size, the Mobius cube has been proposed as a promising candidate for interconnection topology. Hsieh and Chen [S.Y. Hsieh, C.H. Chen, Pancyclicity on Mobius cubes with maximal edge faults, Parallel Computing, 30(3) (2004) 407-421.] showed that an n-dimensional Mobius cube is four-pancyclic in the presence of up to n-2 faulty edges. In this paper, we show that an n-dimensional Mobius cube is four-pancyclic in the presence of up to n-2 faulty nodes. The obtained result is optimal in that, if n-1 nodes are removed, the surviving network may not be four-pancyclic.