Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
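The repair cost the abstract refers to can be seen in the simplest classical erasure code. The toy below is a single-parity (RAID-4-style) XOR code, chosen for illustration only; it is a baseline sketch of why traditional erasure repair downloads the whole file, not an implementation of the paper's two-code framework.

```python
# Toy single-parity erasure code: k data blocks plus one XOR parity block.
# Rebuilding any one lost block requires downloading ALL k surviving
# blocks, i.e. the entire file -- the inefficiency the framework targets.

def encode(data_blocks):
    """Append one XOR parity block to the k equal-length data blocks."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return data_blocks + [parity]

def repair(blocks, lost_index):
    """Rebuild the block at lost_index by XOR-ing every other block."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    rebuilt = bytes(len(survivors[0]))
    for block in survivors:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, block))
    return rebuilt, len(survivors)   # blocks downloaded during repair

data = [b"node", b"fail", b"safe"]
stored = encode(data)
rebuilt, downloaded = repair(stored, 1)
# rebuilt == b"fail"; downloaded == 3, every surviving block
```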
Abstract:
A mobile ad-hoc network (MANET) is a wireless ad-hoc network with a dynamic network topology. The dynamicity due to random node movement, together with the scarcity of resources, makes monitoring the nodes in a MANET a challenge; monitoring the lack of resources (bandwidth, buffer, and energy), misbehavior, and mobility at the node level remains difficult. The proposed protocol uses both static and mobile agents, where the mobile agents migrate to the different clusters of their respective zones, collect node status information periodically, and provide high-level information to the static agent (which resides at the central node) by analyzing the raw information at the nodes. This, in turn, reduces the network traffic and the workload of the central node, where the static agent is available with high-level information and works in coordination with other modules. The protocol has been tested in MANETs of different sizes with a variable number of nodes and applications. The simulation results indicate the effectiveness of the protocol.
Abstract:
Background: Insulin-like growth factor binding proteins modulate the mitogenic and pro-survival effects of IGF. Elevated expression of IGFBP2 is associated with progression of tumors that include prostate, ovarian, and glioma, among others. Though implicated in the progression of breast cancer, the molecular mechanisms involved in IGFBP2 actions are not well defined. This study investigates the molecular targets and biological pathways targeted by IGFBP2 in breast cancer. Methods: Transcriptome analysis of breast tumor cells (BT474) with stable knockdown of IGFBP2, and of breast tumors with differential expression of IGFBP2 by immunohistochemistry, was performed using microarray. Differential gene expression was established using the R Bioconductor package. For validation, gene expression was determined by qPCR. Inhibitors of the IGF1R and integrin pathways were utilized to study the mechanism of regulation of beta-catenin. Immunohistochemical and immunocytochemical staining for beta-catenin and IGFBP2 expression was performed on breast tumors and experimental cells, respectively. Results: Knockdown of IGFBP2 resulted in differential expression of 2067 upregulated and 2002 downregulated genes in breast cancer cells. Downregulated genes principally belong to cell cycle, DNA replication and repair, p53 signaling, oxidative phosphorylation, and Wnt signaling. Whole-genome expression analysis of breast tumors with or without IGFBP2 expression indicated changes in genes belonging to the focal adhesion, MAP kinase, and Wnt signaling pathways. Interestingly, IGFBP2 knockdown clones showed reduced expression of beta-catenin compared to control cells, which was restored upon IGFBP2 re-expression. The regulation of beta-catenin by IGFBP2 was found to be IGF1R- and integrin-pathway dependent. Furthermore, IGFBP2 and beta-catenin are coordinately overexpressed in breast tumors and correlate with lymph node metastasis.
Conclusion: This study highlights regulation of beta-catenin by IGFBP2 in breast cancer cells and most importantly, combined expression of IGFBP2 and beta-catenin is associated with lymph node metastasis of breast tumors.
Abstract:
Noninvasive or minimally invasive identification of sentinel lymph node (SLN) is essential to reduce the surgical effects of SLN biopsy. Photoacoustic (PA) imaging of SLN in animal models has shown its promise for clinical use in the future. Here, we present a Monte Carlo simulation for light transport in the SLN for various light delivery configurations with a clinical ultrasound probe. Our simulation assumes a realistic tissue layer model and also can handle the transmission/reflectance at SLN-tissue boundary due to the mismatch of refractive index. Various light incidence angles show that for deeply situated SLNs the maximum absorption of light in the SLN is for normal incidence. We also show that if a part of the diffused reflected photons is reflected back into the skin using a reflector, the absorption of light in the SLN can be increased significantly to enhance the PA signal. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
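The effect reported above can be illustrated with a deliberately simplified 1-D Monte Carlo sketch: photons take exponentially distributed steps, are absorbed with probability mu_a/mu_t at each interaction, and otherwise rescatter isotropically. The optical coefficients, the depth window of the SLN, and the reflectance value are illustrative assumptions, not the paper's layered tissue model or refractive-index handling.

```python
import math
import random

MU_A, MU_S = 0.02, 0.5          # absorption / scattering, mm^-1 (assumed)
MU_T = MU_A + MU_S
SLN_TOP, SLN_BOTTOM = 5.0, 8.0  # assumed depth window of the node, mm

def absorbed_in_sln(n_photons, surface_reflectance, rng):
    """Fraction of launched photons absorbed inside the SLN layer."""
    hits = 0
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0              # launch at skin, normal incidence
        while True:
            z += cos_theta * (-math.log(1.0 - rng.random()) / MU_T)
            if z < 0.0:                      # photon leaves through the skin
                if rng.random() < surface_reflectance:
                    z, cos_theta = 0.0, 1.0  # reflector sends it back in
                    continue
                break
            if rng.random() < MU_A / MU_T:   # absorption event
                if SLN_TOP <= z <= SLN_BOTTOM:
                    hits += 1
                break
            cos_theta = rng.uniform(-1.0, 1.0)  # isotropic rescattering
    return hits / n_photons

rng = random.Random(1)
no_reflector = absorbed_in_sln(20000, 0.0, rng)
with_reflector = absorbed_in_sln(20000, 0.9, rng)
# recycling diffusely reflected photons raises absorption in the node
```

Even this crude model reproduces the qualitative finding: returning part of the diffusely reflected light into the skin increases absorption in the buried layer.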
Abstract:
We use information theoretic achievable rate formulas for the multi-relay channel to study the problem of optimal placement of relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single relay case, and individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple relay case, we consider exponential path-loss and a total power constraint over the source and the relays, and derive an optimization problem, the solution of which provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate independent of the attenuation in the network can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with total power constraint.
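For the single-relay case, a numeric sketch makes the placement problem concrete. The model below treats decode-and-forward as a simple two-hop bottleneck under exponential path loss g(x) = exp(-rho*x) on a unit-length line, with a source/relay power split alpha; the attenuation rho, total power, and this simplified rate expression are assumptions for illustration, not the paper's exact full-duplex achievable-rate formula.

```python
import math

RHO, P_TOTAL = 2.0, 10.0   # attenuation and total power (assumed, unit noise)

def two_hop_rate(d, alpha):
    """min of the source->relay and relay->destination link rates."""
    g_sr = math.exp(-RHO * d)          # source -> relay gain
    g_rd = math.exp(-RHO * (1.0 - d))  # relay -> destination gain
    r1 = math.log2(1.0 + alpha * P_TOTAL * g_sr)
    r2 = math.log2(1.0 + (1.0 - alpha) * P_TOTAL * g_rd)
    return min(r1, r2)                 # decode-and-forward bottleneck

# grid-search relay position d and power split alpha
best_d, best_alpha = max(
    ((d / 100.0, a / 100.0) for d in range(1, 100) for a in range(1, 100)),
    key=lambda da: two_hop_rate(*da))
# symmetry puts the optimum at the midpoint with an even power split
```

Equalizing the two link rates at the optimum forces alpha * g_sr = (1 - alpha) * g_rd, and the symmetric channel then yields d = 0.5, alpha = 0.5, which the grid search recovers.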
Abstract:
A ubiquitous network plays a critical role in providing services to the nodes running ubiquitous applications. To provision appropriate resources, the nodes need to be monitored continuously. Monitoring a node in a ubiquitous network is challenging because of the dynamicity and heterogeneity of the network. The network monitor has to track resource parameters, such as data rate, delay, and throughput, as well as events such as node failure, network failure, and faults in the system, in order to avert system failure. In this paper, we propose a method to develop a ubiquitous system monitoring protocol using agents. Earlier works on network monitoring using agents assume that the agents are designed for a particular network, whereas our work takes the heterogeneity of the network into account. We show that node behaviour can be easily monitored using agents (both static and mobile). The past behavior of the application and the network, and the history of the Unode and its predecessor, are taken into consideration to help the static agent (SA) take appropriate decisions during emergency situations, such as unavailability of resources at the local administration, and to predict the migration of the Unode based on its previous history. The results obtained in the simulation reflect the effectiveness of the technique.
Abstract:
The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
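The selection mechanics above can be simulated directly. The sketch uses a plain linear metric-to-timer mapping T(m) = t_max * (1 - m) for illustration, not the optimal mappings the paper derives; metrics are assumed uniform on (0, 1), and a trial succeeds when the best node's timer expires more than Δ seconds before the runner-up's.

```python
import random

def success_probability(n_nodes, t_max, delta, trials, rng):
    """Monte Carlo estimate of the timer scheme's selection success rate."""
    wins = 0
    for _ in range(trials):
        metrics = sorted(rng.random() for _ in range(n_nodes))
        best, runner_up = metrics[-1], metrics[-2]
        # best node's timer must beat the runner-up's by more than delta
        if t_max * (1.0 - runner_up) - t_max * (1.0 - best) > delta:
            wins += 1
    return wins / trials

rng = random.Random(7)
p = success_probability(n_nodes=10, t_max=1.0, delta=0.05, trials=20000, rng=rng)
# for 10 uniform metrics this is P(top spacing > delta/t_max) = 0.95**10 ≈ 0.60
```

Stretching t_max widens the timer gaps and raises the success probability, which is exactly the feedback-delay versus reliability trade-off the mapping design navigates.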
Abstract:
Opportunistic selection in multi-node wireless systems improves system performance by selecting the "best" node and using it for data transmission. In these systems, each node has a real-valued local metric, which is a measure of its ability to improve system performance. Our goal is to identify the best node, which has the largest metric. We propose, analyze, and optimize a new distributed, yet simple, node selection scheme that combines the timer scheme with power control. In it, each node sets a timer and a transmit power level as a function of its metric. The power control is designed such that the best node is captured even if other nodes transmit simultaneously with it. We develop several structural properties of the optimal metric-to-timer-and-power mapping, which maximizes the probability of selecting the best node. These significantly reduce the computational complexity of finding the optimal mapping and yield valuable insights about it. We show that the proposed scheme is scalable and significantly outperforms the conventional timer scheme. We investigate the effect of Δ and of the number of receive power levels. Furthermore, we find that the practical peak power constraint has a negligible impact on the performance of the scheme.
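A simulation sketch of the timer-plus-power idea: metrics are quantized into timer slots, and nodes colliding in the earliest occupied slot are separated again by a second quantization into discrete receive power levels, with capture when exactly one node sits at the highest occupied level. The uniform quantizers and the slot/level counts are assumptions for illustration, not the optimized mapping of the paper.

```python
import random

def simulate(n_nodes, n_slots, n_levels, trials, rng):
    """Compare plain timer selection with timer-plus-power selection."""
    timer_wins = power_wins = 0
    for _ in range(trials):
        metrics = [rng.random() for _ in range(n_nodes)]
        slots = [min(int((1.0 - m) * n_slots), n_slots - 1) for m in metrics]
        first = min(slots)
        contenders = [m for m, s in zip(metrics, slots) if s == first]
        if len(contenders) == 1:        # no collision: both schemes succeed
            timer_wins += 1
            power_wins += 1
            continue
        # power control: sub-quantize the metric within the slot into
        # power levels; the best node transmits at the highest level and
        # is captured iff no other contender picked the same level
        levels = [min(int((m * n_slots % 1.0) * n_levels), n_levels - 1)
                  for m in contenders]
        best_level = levels[contenders.index(max(contenders))]
        if levels.count(best_level) == 1:
            power_wins += 1
    return timer_wins / trials, power_wins / trials

rng = random.Random(3)
p_timer, p_power = simulate(n_nodes=10, n_slots=8, n_levels=4,
                            trials=20000, rng=rng)
# the power-aided scheme succeeds in every trial the plain timer does, and more
```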
Abstract:
Present-day networks use proactive networking to adapt to dynamic scenarios. The use of a cognition technique based on the Observe, Orient, Decide, and Act (OODA) loop is proposed to construct proactive networks. Network performance degradation during knowledge acquisition and in the presence of malicious nodes remains a problem. A continuous-time dynamic neural network is used to achieve cognition. The variance in the service rates of user nodes is used to detect malicious activity in heterogeneous networks. Improved malicious-node detection rates are demonstrated through the experimental results presented in this paper. (C) 2015 The Authors. Published by Elsevier B.V.
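The detection signal above is variance in service rates. As a minimal statistical stand-in (no neural network, unlike the paper), one can flag any node whose service-rate variance exceeds the population median by a fixed factor; the node names, rates, and threshold factor below are illustrative assumptions.

```python
import statistics

def flag_malicious(service_rates, factor=4.0):
    """service_rates: dict node -> list of observed rates per window.
    Flags nodes whose rate variance far exceeds the median variance."""
    variances = {n: statistics.pvariance(r) for n, r in service_rates.items()}
    cutoff = factor * statistics.median(variances.values())
    return {n for n, v in variances.items() if v > cutoff}

rates = {
    "n1": [9.8, 10.1, 10.0, 9.9],
    "n2": [10.2, 9.9, 10.0, 10.1],
    "n3": [2.0, 15.0, 1.0, 14.0],   # erratic service rate: candidate suspect
}
suspects = flag_malicious(rates)
# → {"n3"}
```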
Abstract:
We report a circuit technique to measure the on-chip delay of an individual logic gate (both inverting and non-inverting) in its unmodified form using a digitally reconfigurable ring oscillator (RO). Solving a system of linear equations obtained with different configuration settings of the RO yields the delay of an individual gate. Experimental results from a test chip in a 65 nm process node show the feasibility of measuring the delay of an individual inverter to within 1 ps accuracy. Delay measurements of different nominally identical inverters in close physical proximity show variations of up to 26%, indicating the large impact of local or within-die variations.
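The extraction step can be sketched numerically: each RO configuration's oscillation period is (twice) the sum of the delays of the gates in its loop, so stacking configurations gives a linear system in the per-gate delays. The toy below uses three hypothetical gates and made-up delay values, and ignores the odd-inversion requirement a real RO loop has.

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back-substitution
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

true_delays = [12.0, 15.0, 9.0]                    # ps, hypothetical gates
configs = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]        # gates in each RO loop
periods = [2 * sum(c * d for c, d in zip(cfg, true_delays)) for cfg in configs]
estimated = solve(configs, [t / 2 for t in periods])
# estimated recovers [12.0, 15.0, 9.0]
```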
Abstract:
This article presents the results of probabilistic seismic hazard analysis (PSHA) for Bangalore, South India. Analyses have been carried out considering the seismotectonic parameters of the region covering a radius of 350 km keeping Bangalore as the center. Seismic hazard parameter `b' has been evaluated considering the available earthquake data using (1) Gutenberg-Richter (G-R) relationship and (2) Kijko and Sellevoll (1989, 1992) method utilizing extreme and complete catalogs. The `b' parameter was estimated to be 0.62 to 0.98 from G-R relation and 0.87 ± 0.03 from Kijko and Sellevoll method. The results obtained are a little higher than the `b' values published earlier for southern India. Further, probabilistic seismic hazard analysis for Bangalore region has been carried out considering six seismogenic sources. From the analysis, mean annual rate of exceedance and cumulative probability hazard curve for peak ground acceleration (PGA) and spectral acceleration (Sa) have been generated. The quantified hazard values in terms of the rock level peak ground acceleration (PGA) are mapped for 10% probability of exceedance in 50 years on a grid size of 0.5 km x 0.5 km. In addition, Uniform Hazard Response Spectrum (UHRS) at rock level is also developed for the 5% damping corresponding to 10% probability of exceedance in 50 years. The peak ground acceleration (PGA) value of 0.121 g obtained from the present investigation is slightly lower (but comparable) than the PGA values obtained from the deterministic seismic hazard analysis (DSHA) for the same area. However, the PGA value obtained in the current investigation is higher than PGA values reported in the global seismic hazard assessment program (GSHAP) maps of Bhatia et al. (1999) for the shield area.
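The `b' value above comes from the Gutenberg-Richter relation log10 N(>=M) = a - b*M, where N is the number of events at or above magnitude M. A least-squares fit on a small synthetic catalog illustrates the estimation; the magnitudes are fabricated to satisfy b = 1 exactly and are not the Bangalore data.

```python
import math

def gr_b_value(magnitudes, m_min, m_max, step):
    """Least-squares fit of log10 N(>=M) = a - b*M; returns b."""
    xs, ys = [], []
    m = m_min
    while m <= m_max + 1e-9:
        count = sum(1 for mag in magnitudes if mag >= m)
        if count > 0:
            xs.append(m)
            ys.append(math.log10(count))
        m += step
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope   # b is the negative of the fitted slope

# synthetic catalog with N(>=2) = 100, N(>=3) = 10, N(>=4) = 1
catalog = [2.0] * 90 + [3.0] * 9 + [4.0]
b = gr_b_value(catalog, m_min=2.0, m_max=4.0, step=1.0)
# → b = 1.0 for this constructed catalog
```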
Abstract:
This paper presents the results of laboratory investigation carried out on Ahmedabad sand on the liquefaction and pore water pressure generation during strain-controlled cyclic loading. Laboratory experiments were carried out on representative natural sand samples (base sand) collected from earthquake-affected area of Ahmedabad City of Gujarat State in India. A series of strain-controlled cyclic triaxial tests were carried out on isotropically compressed samples to study the influence of different parameters such as shear strain amplitude, initial effective confining pressure, relative density and percentage of non-plastic fines on the behavior of liquefaction and pore water pressure generation. It has been observed from the laboratory investigation that the potential for liquefaction of the sandy soils depends on the shear strain amplitude, initial relative density, initial effective confining pressure and non-plastic fines. In addition, an empirical relationship between pore pressure ratio and cycle ratio independent of the number of cycles of loading, relative density, confining pressure, amplitude of shear strain and non-plastic fines has been proposed.
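The abstract proposes an empirical pore-pressure-ratio versus cycle-ratio relation but does not state its form. As a template for what such relations look like, the widely used Seed-type curve is r_u = (2/pi)*asin((N/N_L)^(1/(2*theta))), where N/N_L is the cycle ratio and theta is an empirical shape constant; theta = 0.7 is a common literature default, not a value from this paper.

```python
import math

def pore_pressure_ratio(cycle_ratio, theta=0.7):
    """Seed-type empirical curve: r_u as a function of cycle ratio N/N_L."""
    return (2.0 / math.pi) * math.asin(cycle_ratio ** (1.0 / (2.0 * theta)))

for cr in (0.25, 0.5, 0.75, 1.0):
    print(f"N/NL = {cr:.2f} -> r_u = {pore_pressure_ratio(cr):.3f}")
# r_u climbs monotonically and reaches 1.0 at liquefaction (N = N_L)
```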
Abstract:
Lasers are very efficient in heating localized regions and hence find wide application in surface treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermo-capillary or Marangoni convection). It has been reported that surface-tension driven convection plays a dominant role in determining the melt pool shape. In most of the earlier works on laser-melting and related problems, the finite difference method (FDM) has been used to solve the Navier-Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used. Though upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermo-capillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that the pressure be interpolated one order lower than the velocity (mixed interpolation). This restricts the choice of elements to certain higher-order elements, which need numerical integration for evaluation of the element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations which are not positive definite. Computations would be possible only with large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit FEM. He used a six-node triangular element with mixed interpolation. Since he considered only the buoyancy-induced flow, the velocity values are small. In the present work, an equal-order explicit FEM is used to compute the thermo-capillary flow in the laser surface-melting problem.
As this method permits equal order interpolation, there is no restriction in the choice of elements. Even linear elements such as the three-node triangular elements can be used. As the governing equations are solved in a sequential manner, the computer memory requirement is less. The finite element formulation is discussed in this paper along with typical numerical results.
Abstract:
Wireless ad-hoc networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem: the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes), and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy. Hence, we need to jointly optimize over both; we cast this optimization problem as a large stochastic optimization problem.
We then jointly optimize over a set of CSI functions of the local channel states, and a corresponding factored class of control policies.
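For the slow-fading case, the abstract reduces optimal factored cooperative routing to a shortest-path computation on an induced graph whose edge costs come from local channel states. A standard Dijkstra sketch over an assumed energy-cost graph illustrates that reduction; the node names and edge costs are placeholders, not derived from any channel model.

```python
import heapq

def min_energy_path(edges, source, dest):
    """Dijkstra over edges: dict node -> list of (neighbor, energy_cost)."""
    best = {source: 0.0}
    heap = [(0.0, source, [source])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dest:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, w in edges.get(node, []):
            new_cost = cost + w
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

edges = {
    "S": [("A", 1.0), ("B", 2.5)],
    "A": [("B", 1.0), ("D", 3.0)],
    "B": [("D", 1.2)],
}
cost, path = min_energy_path(edges, "S", "D")
# → cost 3.2 along S -> A -> B -> D
```

In the paper's setting each edge weight would be the local cooperative transmission energy computed from the CSI at the participating nodes, so the computation decentralizes naturally.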