918 results for Node-Depth Encoding
Abstract:
In wireless sensor networks, most routing algorithms currently available assume that the sensor nodes are stationary. Therefore, when node mobility is introduced into a wireless sensor network, most of these routing algorithms suffer from performance degradation. Path breaks in mobile wireless networks are caused by the movement of mobile nodes, node failure, channel fading and shadowing. It is desirable to handle such dynamic topology changes with minimal effort in terms of resource and channel utilization. Since nodes in a wireless sensor medium communicate by wireless broadcast, neighboring node information can be used to recover from path failure. Cooperation among neighboring nodes therefore plays an important role in routing among mobile nodes. This paper proposes an enhancement to an existing protocol that accommodates node mobility through neighboring node information while keeping resource utilization to a minimum.
Abstract:
Sensor networks are one of the fastest growing areas in the broad field of wireless ad hoc networking. A sensor node typically contains signal-processing circuits, micro-controllers and a wireless transmitter/receiver antenna. Energy saving is one of the critical issues for sensor networks, since most sensors are equipped with non-rechargeable batteries that have limited lifetime. Routing schemes are used to transfer data collected by sensor nodes to base stations. In the literature many routing protocols for wireless sensor networks have been suggested. In GBR, each node in the network can look at its neighbors' hop count (depth) and use this to decide which node to forward the packet on to; if a node's power level drops below a certain level, it will increase its depth to discourage traffic. In this work, four routing protocols for wireless sensor networks, viz. Flooding, Gossiping, GBR and LEACH, have been simulated using TinyOS and their power consumption studied using PowerTOSSIM. A realization of these protocols has been carried out using Mica2 Motes.
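The depth-based forwarding rule attributed to GBR in this abstract can be sketched as follows. This is an illustrative sketch only, not the simulation code used in the paper; the class, function and threshold names are hypothetical, and the depth-increase rule simply mirrors the behaviour described above (a low-battery node advertises a larger depth to discourage traffic).

```python
# Minimal sketch of GBR-style next-hop selection, assuming each node knows
# its neighbors' advertised depths (hop counts toward the sink).
# Names (Node, choose_next_hop, LOW_POWER_THRESHOLD) are illustrative only.

from dataclasses import dataclass, field
from typing import List, Optional

LOW_POWER_THRESHOLD = 0.2   # battery fraction below which depth is inflated
DEPTH_PENALTY = 1           # amount added to advertised depth when power is low

@dataclass
class Node:
    node_id: int
    depth: int                      # hop count to the base station
    battery: float = 1.0            # normalized remaining energy
    neighbors: List["Node"] = field(default_factory=list)

    def advertised_depth(self) -> int:
        # A low-power node advertises a larger depth to discourage traffic.
        if self.battery < LOW_POWER_THRESHOLD:
            return self.depth + DEPTH_PENALTY
        return self.depth

    def choose_next_hop(self) -> Optional["Node"]:
        # Forward to the neighbor with the smallest advertised depth,
        # but only if it is closer to the sink than this node.
        candidates = [n for n in self.neighbors
                      if n.advertised_depth() < self.advertised_depth()]
        if not candidates:
            return None  # no downhill neighbor; packet cannot progress
        return min(candidates, key=lambda n: n.advertised_depth())

# Example: a node at depth 3 with two neighbors at depth 2; the low-battery
# neighbor advertises depth 3 and is therefore avoided.
sink_side_a = Node(1, depth=2, battery=0.9)
sink_side_b = Node(2, depth=2, battery=0.1)
src = Node(3, depth=3, neighbors=[sink_side_a, sink_side_b])
assert src.choose_next_hop() is sink_side_a
```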
Abstract:
Mobile Ad-hoc Networks (MANETs) consist of a collection of mobile nodes without central coordination. In a MANET, node mobility and dynamic topology play an important role in performance. MANETs provide a solution for network connectivity anywhere and at any time. The major features of a MANET are quick setup, self-organization and self-maintenance. Routing is a major challenge in MANETs due to their dynamic topology and high mobility, and several routing algorithms have been developed for it. This paper studies the AODV protocol and how AODV performs under multiple connections in the network. Several issues have been identified, with bandwidth recognized as the most prominent factor reducing network performance. This paper presents an improvement of standard AODV for simultaneous multiple connections that takes the bandwidth of each node into consideration.
Abstract:
Soil microorganisms play a major role in organic matter decomposition and are consequently essential to the soil ecosystem processes that maintain the primary productivity of plants. In light of current concerns about the impact of cultivation and climate change on biodiversity and ecosystem performance, it is vital to develop a complete understanding of the microbial community ecology of our soils. In the present study we measured the depth-wise profile of microbial load in relation to important soil physicochemical characteristics (soil temperature, soil pH, moisture content, organic carbon and available NPK) of soil samples collected from the Mahatma Gandhi University Campus, Kottayam (midland region of Kerala). Soil cores (30 cm deep) were taken and separated into three 10-cm depths to examine the depth-wise distribution. Bacterial load ranged from 141×10⁵ to 271×10⁵ CFU/g (10 cm depth), from 80×10⁵ to 131×10⁵ CFU/g (20 cm depth) and from 260×10⁴ to 47×10⁵ CFU/g (30 cm depth). Fungal load varied from 124×10³ to 27×10⁴ CFU/g, from 61×10³ to 110×10³ CFU/g and from 16×10³ to 49×10³ CFU/g at 10, 20 and 30 cm respectively. Actinomycetes counts ranged from 129×10³ to 60×10⁴ CFU/g (10 cm), from 70×10³ to 31×10⁴ CFU/g (20 cm) and from 14×10³ to 66×10³ CFU/g (30 cm). The study revealed a significant difference in the depth-wise distribution of microbial load and soil physico-chemical properties. Bacterial, fungal and actinomycetes loads showed a decreasing trend with increasing depth at all sites. Except for pH, all other physicochemical properties showed a decreasing trend with increasing depth. The vertical profile of total microbial load matched the depth-wise profiles of soil nutrients and organic carbon well; that is, microbial load was highest at the soil surface, where organics and nutrients were highest.
Abstract:
One of the major applications of underwater acoustic sensor networks (UWASN) is ocean environment monitoring. Employing data mules is an energy-efficient way of collecting data from the underwater sensor nodes in such a network. A data mule node, such as an autonomous underwater vehicle (AUV), periodically visits the stationary nodes to download data. By conserving the power required for data transmission over long distances to a remote data sink, this approach extends the network lifetime. In this paper we propose a new MAC protocol to support a single mobile data mule node that collects the data sensed by the sensor nodes in periodic runs through the network. In this approach, the nodes need to perform only short-distance, single-hop transmission to the data mule. The protocol design discussed in this paper is motivated by such an application. The proposed protocol is a hybrid protocol, which employs schedule-based access among the stationary nodes along with handshake-based access to support mobile data mules. The new protocol, RMAC-M, is developed as an extension of the energy-efficient MAC protocol R-MAC by extending the slot time of R-MAC to include a contention part for handshake-based data transfer. The mobile node uses a beacon to signal its presence to all nearby nodes, which can then handshake with the mobile node for data transfer. Simulation results show that the new protocol provides efficient support for a mobile data mule node while preserving the advantages of R-MAC such as energy efficiency and fairness.
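The hybrid slot structure described above (scheduled R-MAC traffic plus a contention window opened by the data mule's beacon) can be illustrated with a toy sketch. This is not the RMAC-M implementation; the slot fractions, dictionary layout and backoff rule are assumptions for illustration only.

```python
# Hedged sketch of an extended slot: keep the scheduled R-MAC exchange among
# stationary nodes and append a contention window in which a data mule's
# beacon can trigger a handshake transfer. All names and durations here are
# illustrative, not taken from the RMAC-M paper.

import random

SCHEDULED_PART = 0.7   # fraction of the slot reserved for scheduled R-MAC traffic
CONTENTION_PART = 0.3  # fraction appended for mule beacon + handshake

def run_slot(node, slot_no, beacon_heard):
    """Simulate one extended slot at a stationary node (illustrative only)."""
    # 1) Scheduled part: behave as R-MAC would (placeholder action).
    if node["tx_schedule"].get(slot_no):
        print(f"node {node['id']}: scheduled R-MAC transmission in slot {slot_no}")

    # 2) Contention part: if the mobile mule's beacon was heard, contend for
    #    the channel with a random backoff and, if successful, handshake.
    if beacon_heard:
        backoff = random.uniform(0, CONTENTION_PART)
        print(f"node {node['id']}: beacon heard, backoff {backoff:.2f}, "
              f"handshaking with data mule to upload buffered data")

node = {"id": 7, "tx_schedule": {3: True}}
run_slot(node, slot_no=3, beacon_heard=True)
```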
Abstract:
Wireless sensor networks monitor their surrounding environment for the occurrence of some anticipated phenomenon. Most research related to sensor networks considers a static deployment of sensor nodes. Mobility of sensor nodes can be considered an extra dimension of complexity, which poses interesting and challenging problems. Node mobility is a very important aspect in the design of effective routing algorithms for mobile wireless networks. In this work we intend to present the impact of different mobility models on the performance of wireless sensor networks. Routing characteristics of various routing protocols for ad-hoc networks were studied considering different mobility models. Performance metrics such as end-to-end delay, throughput and routing load were considered, and their variations under mobility models such as Freeway and RPGM were studied. This work will be useful for characterizing routing protocols depending on the mobility patterns of the sensors.
Abstract:
Retrieval of similar anatomical structures of brain MR images across patients would help the expert in the diagnosis of diseases. In this paper, a modified local binary pattern with ternary encoding, called the modified local ternary pattern (MOD-LTP), is introduced, which is more discriminant and less sensitive to noise in near-uniform regions, to locate slices belonging to the same level in a brain MR image database. The ternary encoding depends on a threshold, which is either user-specified or calculated locally based on the variance of the pixel intensities in each window. The variance-based local threshold makes MOD-LTP more robust to noise and global illumination changes. The retrieval performance is shown to improve by taking region-based moment features of MOD-LTP and iteratively reweighting them based on the user's feedback. The average rank obtained using iterated and weighted moment features of MOD-LTP with a local variance-based threshold is one to two times better than rotational-invariant LBP (Unay, D., Ekin, A. and Jasinschi, R.S. (2010) Local structure-based region-of-interest retrieval in brain MR images. IEEE Trans. Inf. Technol. Biomed., 14, 897–903) in retrieving the first 10 relevant images.
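A minimal sketch of a local ternary pattern with a variance-based local threshold, in the spirit of the MOD-LTP described above, is given below. The abstract does not specify the exact encoding or how the variance is scaled into a threshold, so the 3x3 neighborhood, the parameter k and the upper/lower code split are assumptions, not the authors' method.

```python
# Illustrative local ternary pattern with a per-window, variance-based threshold.
# Follows the standard LTP construction: each neighbor is coded +1/0/-1 relative
# to the center, and the ternary code is split into two binary maps.

import numpy as np

def local_ternary_pattern(img, k=0.5):
    """Return (upper, lower) LTP code maps for a 2D grayscale image.

    k scales the local standard deviation into the ternary threshold
    (an assumed parameter, not taken from the paper).
    """
    img = img.astype(np.float64)
    h, w = img.shape
    upper = np.zeros((h, w), dtype=np.uint8)
    lower = np.zeros((h, w), dtype=np.uint8)
    # 8-neighborhood offsets, in a fixed clockwise order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            t = k * window.std()          # variance-based local threshold
            c = img[i, j]
            up_code, lo_code = 0, 0
            for bit, (di, dj) in enumerate(offsets):
                d = img[i + di, j + dj] - c
                if d > t:                  # neighbor clearly brighter -> +1
                    up_code |= (1 << bit)
                elif d < -t:               # neighbor clearly darker  -> -1
                    lo_code |= (1 << bit)
                # |d| <= t encodes 0 and sets no bit in either map
            upper[i, j], lower[i, j] = up_code, lo_code
    return upper, lower

# Example on a small random "slice"
rng = np.random.default_rng(0)
u, l = local_ternary_pattern(rng.integers(0, 256, (16, 16)))
print(u.max(), l.max())
```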
Abstract:
Modeling nonlinear systems using the Volterra series is a century-old method, but practical realizations were hampered by inadequate hardware to handle the increased computational complexity stemming from its use. Interest has been renewed recently in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that the nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next possible value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
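The idea of running a quadratic (second-order Volterra) predictor in parallel with the linear DPCM predictor can be sketched as follows. The coefficients, memory length and quantizer step are illustrative assumptions; the paper's actual predictor design and adaptation are not reproduced here.

```python
# Hedged sketch of a DPCM loop whose prediction is the sum of a linear term
# and a quadratic (second-order Volterra) term over the reconstructed history.

import numpy as np

P = 3                                    # predictor memory (assumed)
h1 = np.array([0.9, -0.3, 0.1])          # linear predictor taps (assumed)
h2 = 0.05 * np.eye(P)                    # quadratic Volterra kernel (assumed)
STEP = 0.05                              # uniform quantizer step (assumed)

def predict(past):
    """Linear + quadratic prediction from the last P reconstructed samples."""
    lin = h1 @ past                      # sum_i h1[i] * x[n-1-i]
    quad = past @ h2 @ past              # sum_ij h2[i,j] * x[n-1-i] * x[n-1-j]
    return lin + quad

def dpcm_encode_decode(x):
    """Run the DPCM loop; returns the signal reconstructed at the decoder."""
    past = np.zeros(P)                   # reconstructed history
    recon = np.zeros_like(x)
    for n, sample in enumerate(x):
        pred = predict(past)
        err = sample - pred
        q_err = STEP * np.round(err / STEP)   # quantized prediction error
        recon[n] = pred + q_err               # decoder forms pred + q_err
        past = np.roll(past, 1)
        past[0] = recon[n]
    return recon

t = np.linspace(0, 1, 400)
speech_like = 0.6 * np.sin(2 * np.pi * 8 * t) + 0.2 * np.sin(2 * np.pi * 21 * t)
recon = dpcm_encode_decode(speech_like)
print("MSE:", np.mean((speech_like - recon) ** 2))
```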
Abstract:
This study reports the details of the finite element analysis of eleven shear-critical partially prestressed concrete T-beams having steel fibers over partial or full depth. Prestressed T-beams having shear span-to-depth ratios of 2.65 and 1.59 that failed in shear have been analyzed using the ANSYS program. The ANSYS model accounts for nonlinearities such as bond-slip of the longitudinal reinforcement, post-cracking tensile stiffness of the concrete, stress transfer across the cracked blocks of the concrete and load sustenance through the bridging action of steel fibers at the crack interface. The concrete is modeled using SOLID65, an eight-node brick element capable of simulating the cracking and crushing behavior of brittle materials. The reinforcement, such as deformed bars, prestressing wires and steel fibers, has been modeled discretely using LINK8, a 3D spar element. The slip between the reinforcement (rebars, fibers) and the concrete has been modeled using COMBIN39, a nonlinear spring element connecting the nodes of the LINK8 elements representing the reinforcement and the nodes of the SOLID65 elements representing the concrete. The ANSYS model correctly predicted the diagonal tension failure and shear compression failure of prestressed concrete beams observed in the experiment. The capability of the model to capture the critical crack regions, loads and deflections for various types of shear failures in prestressed concrete beams has been illustrated.
Abstract:
The main objective of this thesis is to develop a compact chipless RFID tag with high data encoding capacity. The design and development of chipless RFID tags based on multiresonator and multiscatterer methods are presented first. An RFID tag using SIR, capable of encoding 79 bits, is proposed. The thesis also deals with some properties of the SIR, such as harmonic separation, independent control of the resonant modes and the capability to change the electrical length. A chipless RFID reader working in the frequency band of 2.36 GHz to 2.54 GHz has been designed to show the feasibility of the RFID system. For a practical system, a new approach based on UWB Impulse Radar (UWB IR) technology is employed and methods for decoding the noisy backscattered signal are successfully demonstrated. The thesis also proposes a simple calibration procedure, which is able to decode the backscattered signal up to a distance of 80 cm with 1 mW output power.
Abstract:
In this thesis, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using the likelihood depth introduced by Mizera and Müller (2004). The developed methods are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a data set is computed as the minimum of the proportion of data points for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter with the greatest depth is therefore the one for which both proportions are equal; it is initially chosen as the estimator, since the likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this is not the case for the underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which the likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is then known and tests for various hypotheses can be formulated. The shift in depth, however, leads to poor power of the corresponding test for some hypotheses. Corrected tests are therefore introduced and conditions are given under which they are consistent. The thesis consists of two parts. In the first part, the general theory of the estimators and tests is presented and their respective consistency is proved. In the second part, the theory is applied to three different distributions: the Weibull distribution, the Gaussian copula and the Gumbel copula. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, it is shown that robust estimators and tests can be found for the three distributions using likelihood depths. On uncontaminated data, existing standard methods are sometimes superior, but the advantage of the new methods shows in contaminated data and data with outliers.
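The definition of likelihood depth for a one-dimensional parameter, as described above, can be restated compactly (the notation below is mine and not quoted verbatim from Mizera and Müller, 2004):

```latex
% Likelihood depth of a one-dimensional parameter \theta in data x_1,\dots,x_n,
% restated from the description above (notation assumed).
d_{\mathrm{L}}(\theta; x_1,\dots,x_n)
  = \min\!\left\{
      \frac{1}{n}\,\#\Bigl\{ i : \tfrac{\partial}{\partial\theta}\log f_\theta(x_i) \ge 0 \Bigr\},\;
      \frac{1}{n}\,\#\Bigl\{ i : \tfrac{\partial}{\partial\theta}\log f_\theta(x_i) \le 0 \Bigr\}
    \right\}
```

Asymptotically, the deepest parameter is the one with P(∂/∂θ log f_θ(X) ≥ 0) = 1/2; when the true parameter does not satisfy this, the depth-maximizing estimator is biased, which is exactly the bias the corrected estimators above remove.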
Abstract:
In the absence of cues for absolute depth measurement such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges and junctions may provide a 3D model of the scene, but it will not inform us about the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex due to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in the scene are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.