989 results for Sink nodes


Relevance: 10.00%

Abstract:

An ammonia loop heat pipe (LHP) with a flat-plate evaporator is developed and tested. The device uses a nickel wick encased in an aluminum-stainless steel casing. The loop is tested over a range of heat loads and sink temperatures, and it demonstrates reliable startup characteristics. Analysis of the experimental observations indicates that the conductance between the compensation chamber and the heater plate can significantly influence the operating temperature of the LHP. A mathematical model, validated against the experimental observations, is also presented.
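The reported sensitivity to the plate-to-CC conductance can be illustrated with a lumped steady-state balance (a minimal sketch with hypothetical values, not the paper's model): the heat leak G*(T_plate - T_cc) is balanced against the subcooling carried by the liquid returning to the compensation chamber.

```python
# Minimal lumped-parameter sketch (hypothetical values, not the paper's model):
# the compensation-chamber (CC) temperature is set by a balance between the
# heat leak from the heater plate, Q_leak = G*(T_plate - T_cc), and the
# subcooling of the returning liquid, Q_sub = m_dot*cp*(T_cc - T_ret).

def cc_temperature(t_plate, t_ret, g_cc, m_dot, cp):
    """Steady-state CC temperature from Q_leak = Q_sub (both linear in T_cc)."""
    # G*(T_plate - T_cc) = m_dot*cp*(T_cc - T_ret)  ->  solve for T_cc
    return (g_cc * t_plate + m_dot * cp * t_ret) / (g_cc + m_dot * cp)

# A larger plate-to-CC conductance pulls the CC (and hence the loop operating
# temperature) closer to the heater-plate temperature:
for g in (0.1, 1.0, 10.0):          # W/K, hypothetical
    print(g, round(cc_temperature(60.0, 10.0, g, 0.001, 4700.0), 1))
```

The monotone trend, rather than the specific numbers, is the point: the higher the conductance, the warmer the compensation chamber runs.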

Relevance: 10.00%

Abstract:

In this paper, we propose a novel and efficient algorithm for modelling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for delay analysis of interconnects that accounts for Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay probability distribution function (PDF) at all nodes in the interconnect network. The main objective is to compute the delay PDF at lower cost. The approach converges in probability distribution, though not in the mean of the delay. We validate our approach against SPICE-based Monte Carlo simulations; our method entails significantly lower computational cost.
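The setup described above can be sketched numerically. A minimal Monte Carlo illustration (not the paper's analytic algorithm, which avoids sampling): per-segment R and C are drawn as correlated Gaussians and pushed through the Elmore delay of an RC ladder; the segment count and variation figures below are hypothetical.

```python
# Hedged sketch (not the paper's algorithm): treat per-segment R and C of a
# distributed RC line as correlated Gaussians and propagate them through the
# Elmore delay by Monte Carlo to estimate the delay mean and std.
import numpy as np

rng = np.random.default_rng(0)
n_seg, r0, c0, sigma, rho = 10, 10.0, 1e-3, 0.05, 0.5   # hypothetical values

def elmore_delay(r, c):
    # Elmore delay at the far end of an RC ladder: sum_i (sum_{j<=i} R_j) * C_i
    return np.sum(np.cumsum(r, axis=-1) * c, axis=-1)

# correlated multiplicative variation on the R and C of each segment
m = 2 * n_seg
cov = sigma**2 * ((1 - rho) * np.eye(m) + rho * np.ones((m, m)))
x = rng.multivariate_normal(np.zeros(m), cov, size=20000)
r = r0 * (1 + x[:, :n_seg])
c = c0 * (1 + x[:, n_seg:])

delays = elmore_delay(r, c)
print("mean", delays.mean(), "std", delays.std())
```

An analytic method such as the paper's would deliver the same standard deviation without the 20000 samples, which is exactly the cost saving the abstract claims.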

Relevance: 10.00%

Abstract:

Characterization of the melting process in a Phase Change Material (PCM)-based heat sink with plate-fin thermal conductivity enhancers (TCEs) is studied numerically in this paper. Detailed parametric investigations are performed to find the effect of the enclosure aspect ratio and the applied heat flux on the thermal performance of the heat sinks. Various non-dimensional numbers, such as the Nusselt number (Nu), Rayleigh number (Ra), Stefan number (Ste) and Fourier number (Fo), based on a characteristic length scale, are identified as important parameters. The half fin thickness and the fin height are varied to obtain a wide range of enclosure aspect ratios. It is found that, with melt convection taken into account, a single correlation of Nu with Ra is not applicable for all enclosure aspect ratios. To find appropriate length scales, enclosures with different aspect ratios are divided into three categories, viz. (a) shallow, (b) rectangular and (c) tall enclosures. Accordingly, an appropriate characteristic length scale is identified for each type of enclosure, and a correlation of Nu with Ra based on that length scale is developed. (C) 2010 Elsevier Ltd. All rights reserved.
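The non-dimensional groups named above are straightforward to evaluate once a characteristic length is chosen; a small sketch (the property values are generic paraffin-like figures, not the paper's data, and the choice of L per enclosure type is an assumption here):

```python
# Illustrative sketch (hypothetical property values, not from the paper):
# the dimensionless groups used to correlate melting in a finned PCM heat sink.
def rayleigh(g, beta, dT, L, nu, alpha):
    """Ra = g*beta*dT*L^3 / (nu*alpha), buoyancy vs diffusion."""
    return g * beta * dT * L**3 / (nu * alpha)

def stefan(cp, dT, h_sl):
    """Ste = cp*dT / h_sl, sensible vs latent heat."""
    return cp * dT / h_sl

def fourier(alpha, t, L):
    """Fo = alpha*t / L^2, dimensionless time."""
    return alpha * t / L**2

# paraffin-like PCM; L would be picked per enclosure type (e.g. fin height
# for a tall enclosure, half fin spacing for a shallow one - an assumption)
Ra = rayleigh(9.81, 1e-3, 20.0, 0.02, 5e-6, 1e-7)
Ste = stefan(2000.0, 20.0, 2e5)
Fo = fourier(1e-7, 600.0, 0.02)
print(Ra, Ste, Fo)
```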

Relevance: 10.00%

Abstract:

We present a low-complexity algorithm for intrusion detection in the presence of clutter arising from wind-blown vegetation, using Passive Infra-Red (PIR) sensors in a Wireless Sensor Network (WSN). The algorithm is based on a combination of Haar Transform (HT) and Support-Vector-Machine (SVM) based training and was field tested in a network setting comprising of 15-20 sensing nodes. Also contained in this paper is a closed-form expression for the signal generated by an intruder moving at a constant velocity. It is shown how this expression can be exploited to determine the direction of motion information and the velocity of the intruder from the signals of three well-positioned sensors.
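The feature-extraction half of such a pipeline can be sketched as follows (the SVM stage is omitted; the idea that clutter and intruder signatures separate by how energy spreads across Haar bands is an assumption of this illustration, not a claim from the paper):

```python
# Sketch of the Haar-transform feature extraction only (SVM stage omitted):
# a one-level Haar transform splits a PIR window into coarse averages and
# details; per-band energies then serve as the feature vector fed to an SVM.
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: (approx, detail) coeffs."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_features(x, levels=3):
    """Per-band energies; the transform is orthonormal so energy is preserved."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(float(np.sum(d**2)))
    feats.append(float(np.sum(a**2)))
    return feats

window = np.sin(np.linspace(0, 4 * np.pi, 64))      # stand-in PIR window
print(haar_features(window))
```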

Relevance: 10.00%

Abstract:

A bi-level voltage drive circuit for step motors that can provide the required high starting torque is described. In this circuit, an 8085 microprocessor and an 8255 parallel port interface are used to generate the code sequence. The 74LS06 inverter buffer provides enough drive to a Darlington pair transistor. The LM339 comparator compares the voltage required by the step motor with the set value. This circuit can be effectively used for step motors with a maximum rated current below 15 A, given a proper heat sink.
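The software side of such a drive can be sketched as follows (the original generates the sequence with an 8085/8255 in hardware; the code values and the threshold rule below are a generic full-step illustration, not taken from the paper):

```python
# Software-side sketch only (hypothetical code values): full-step excitation
# codes for a two-phase step motor, plus the bi-level rule - apply the high
# voltage rail until the winding current reaches the set value, then drop
# to the low rail.
FULL_STEP = [0b1010, 0b0110, 0b0101, 0b1001]   # phase codes, one per step

def next_code(step, direction=+1):
    """Return the excitation code for the next step (direction = +1 or -1)."""
    return FULL_STEP[(step + direction) % 4]

def rail_select(i_winding, i_set):
    """Bi-level drive: 'high' rail until the current reaches the set point."""
    return "high" if i_winding < i_set else "low"

print([next_code(s) for s in range(4)])   # cycles through the code sequence
print(rail_select(2.0, 5.0), rail_select(6.0, 5.0))
```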

Relevance: 10.00%

Abstract:

Nearly one fourth of new medicinal molecules are based on biopharmaceuticals (proteins, antibodies or nucleic acid derivatives). However, administering these compounds is not always straightforward, owing to the fragility of such molecules in the GI tract. In addition, they often exhibit poor bioavailability when administered orally, so parenteral administration is commonly preferred. Their shelf-life in aqueous environments is also poor unless they are stored at low temperatures. An alternative approach is to bring these molecules to an anhydrous form via lyophilization, which enhances stability during storage. Proteins most commonly cannot be freeze-dried by themselves, so excipients are nearly always necessary. Disaccharides are commonly used excipients in freeze-dried formulations, since they provide a rigid glassy matrix that maintains the native conformation of the protein domain. They also act as "sink" agents, meaning that they can absorb some moisture from the environment while still helping the API retain its activity, and thereby offer a route to a robust formulation. The aim of the present study was to investigate how four amorphous disaccharides (cellobiose, melibiose, sucrose and trehalose) behave when brought to different relative humidity levels. First, solutions of each disaccharide were prepared, filled into scintillation vials and freeze-dried. To obtain initial information on how the moisture-induced transformations take place, the lyophilized amorphous disaccharide cakes were placed in vacuum desiccators at different relative humidity levels for a defined period, after which selected analytical methods were used to examine the transformations that occurred. The affinity of the disaccharides to crystallization, their water sorption, and the effect of moisture on the glass transition and crystallization temperatures were studied.
In addition, FT-IR microscopy was used to map the moisture distribution over a piece of lyophilized cake. Observations made during the experiments corroborated the data of a previous study: melibiose and trehalose were shown to be superior to sucrose and cellobiose in their ability to withstand elevated humidity and temperature and to avoid crystallization at pharmaceutically relevant moisture contents. The difference was evident with every analytical method used. In addition, melibiose showed interesting anomalies during DVS runs, which were absent in the other amorphous disaccharides. Particularly interesting was an observation made with the polarized light microscope, which revealed possible small-scale crystallization that cannot be detected with XRPD. Consequently, it can safely be suggested that a robust formulation is most likely obtained by using either melibiose or trehalose as the stabilizing agent in biopharmaceutical freeze-dried formulations. On the other hand, more experiments should be conducted to clarify why these disaccharides tolerate elevated humidities better than the others.

Relevance: 10.00%

Abstract:

Let G(V, E) be a simple, undirected graph, where V is the set of vertices and E is the set of edges. A b-dimensional cube is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval of unit length on the real line. The cubicity of G, denoted by cub(G), is the minimum positive integer b such that the vertices of G can be mapped to axis-parallel b-dimensional cubes in such a way that two vertices are adjacent in G if and only if their assigned cubes intersect. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e. the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. Suppose S(m) denotes a star graph on m+1 nodes. We define the claw number psi(G) of the graph to be the largest positive integer m such that S(m) is an induced subgraph of G. It can easily be shown that the cubicity of any graph is at least ceil(log2 psi(G)). In this article, we show that for an interval graph G, ceil(log2 psi(G)) <= cub(G) <= ceil(log2 psi(G)) + 2. It is not clear whether the upper bound of ceil(log2 psi(G)) + 2 is tight: so far we have been unable to find any interval graph with cub(G) > ceil(log2 psi(G)). We also show that for an interval graph G, cub(G) <= ceil(log2 alpha), where alpha is the independence number of G. Therefore, in the special case psi(G) = alpha, cub(G) is exactly ceil(log2 alpha). The concept of cubicity can be generalized by considering boxes instead of cubes. A b-dimensional box is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval on the real line. The boxicity of a graph, denoted box(G), is the minimum b such that G is the intersection graph of b-dimensional boxes. Clearly box(G) <= cub(G). From the above result, it follows that for any graph G, cub(G) <= box(G) ceil(log2 alpha). (C) 2010 Wiley Periodicals, Inc. J Graph Theory 65: 323-333, 2010
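For small graphs, the claw number and the resulting lower bound on cubicity can be checked by brute force; a sketch (illustration only, exponential-time, not the paper's method):

```python
# Small brute-force sketch: compute the claw number psi(G) - the largest m
# with an induced star S_m - and the lower bound ceil(log2 psi(G)) on cubicity.
from itertools import combinations
from math import ceil, log2

def claw_number(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    best = 0
    for centre in range(n):
        nbrs = sorted(adj[centre])
        # largest independent set among the centre's neighbours = largest
        # induced star centred here (exponential search: fine for tiny graphs)
        for m in range(len(nbrs), best, -1):
            if any(all(b not in adj[a] for a, b in combinations(sub, 2))
                   for sub in combinations(nbrs, m)):
                best = m
                break
    return best

# K_{1,4}: a star with 4 leaves, so psi = 4 and cub >= ceil(log2 4) = 2
psi = claw_number(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
print(psi, ceil(log2(psi)))
```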

Relevance: 10.00%

Abstract:

We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Theta(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Theta(sqrt(n)) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Theta(1) time units per sample, the delay increases to Theta(sqrt(n log n)). The number of transmissions in both cases is Theta(n) per histogram sample. The achieved Theta(1) refresh rate for PAC histogram computation is a significant improvement over the Theta(1/log n) refresh rate for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, enabling approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of the nodes. This construction may also prove useful in other communication settings on the random geometric graph.
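As an aside on the "probably approximately correct" guarantee (this is a generic sample-complexity side calculation, not the paper's network protocol): the Dvoretzky-Kiefer-Wolfowitz inequality bounds how many observations suffice for an eps-accurate empirical distribution with confidence 1 - delta.

```python
# Generic DKW-style sample bound (not from the paper): the empirical
# distribution is within eps of the truth, in sup norm, with probability
# at least 1 - delta once n >= ln(2/delta) / (2 * eps^2).
import math

def pac_samples(eps, delta):
    """Samples needed for an (eps, delta)-PAC empirical distribution."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps**2))

print(pac_samples(0.05, 0.01))
```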

Relevance: 10.00%

Abstract:

We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
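The LP-duality connection behind the vertex-cover application can be illustrated with a sequential analogue (the paper's algorithm is distributed; this centralized sketch only shows why a maximal edge packing yields a 2-approximate cover):

```python
# Centralized sketch: greedily raise edge-packing variables y_e until one
# endpoint becomes tight, then take the tight vertices as a vertex cover.
# Maximality of the packing guarantees every edge has a tight endpoint,
# and LP duality bounds the cover weight by twice the optimum.
def edge_packing_vertex_cover(weights, edges):
    """weights: vertex -> weight; returns (packing y, cover set)."""
    slack = dict(weights)                  # remaining capacity per vertex
    y = {}
    for (u, v) in edges:                   # raise y_e until u or v is tight
        y[(u, v)] = min(slack[u], slack[v])
        slack[u] -= y[(u, v)]
        slack[v] -= y[(u, v)]
    cover = {v for v in weights if slack[v] == 0 and weights[v] > 0}
    return y, cover

w = {0: 1, 1: 2, 2: 1, 3: 1}
y, cover = edge_packing_vertex_cover(w, [(0, 1), (1, 2), (2, 3)])
print(y, cover)
```

Here the cover {0, 1, 2} has weight 4, within a factor 2 of the optimal cover {1, 2} of weight 3.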

Relevance: 10.00%

Abstract:

A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds are due to a refined analysis of existing approaches, while others are obtained by suggesting new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs, aimed at more realistic modelling of communication between the network nodes. Analysing the local algorithms on quasi-UDGs allows one to assume that the nodes know their coordinates only approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as for the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.

Relevance: 10.00%

Abstract:

The Baltic Sea is one of the most eutrophic marine areas in the world. The role of nitrogen as a eutrophicating nutrient in the Baltic Sea has remained controversial, due to lack of understanding of nitrogen cycling in the area. We investigated the seasonal variation in sediment nitrification, denitrification, anaerobic ammonium oxidation (anammox), and dissimilatory nitrate reduction to ammonium (DNRA) at two coastal sites in the Gulf of Finland. In addition to the in situ rates, we assessed the potential for these processes in different seasons. The nitrification and nitrogen removal processes were maximal during the warm summer months, when the sediment organic content was highest. In colder seasons, the in situ rates of the nitrification and nitrate reduction processes decreased, but the potential for nitrification remained equal to or higher than that during the warm months. The denitrification and nitrification rates were usually higher in the accumulation basin, where the organic content of the sediment was higher, but the transportation area, despite lower denitrification rates and potential, typically had higher potential for nitrification than the accumulation basin. Anammox and DNRA were not significant nitrate sinks in any of the seasons sampled. The results also show that the denitrification rates in the coastal Gulf of Finland sediment have decreased, and that benthic denitrification might be a less important sink for fixed nitrogen than previously assumed.

Relevance: 10.00%

Abstract:

Any pair of non-adjacent vertices forms a non-edge in a graph. Contraction of a non-edge merges two non-adjacent vertices into a single vertex such that the edges incident on the non-adjacent vertices are now incident on the merged vertex. In this paper, we consider simple connected graphs, hence parallel edges are removed after contraction. The minimum number of nodes whose removal disconnects the graph is the connectivity of the graph. We say a graph is k-connected, if its connectivity is k. A non-edge in a k-connected graph is contractible if its contraction does not result in a graph of lower connectivity. Otherwise the non-edge is non-contractible. We focus our study on non-contractible non-edges in 2-connected graphs. We show that cycles are the only 2-connected graphs in which every non-edge is non-contractible. (C) 2010 Elsevier B.V. All rights reserved.
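The contraction operation and the connectivity check can be illustrated directly; a brute-force sketch (fine only for tiny graphs), using a cycle as the example since the abstract identifies cycles as the extremal 2-connected case:

```python
# Sketch of the operation studied: contract a non-edge (merge two
# non-adjacent vertices, discarding parallel edges) and compare connectivity
# before and after. Connectivity is computed by brute force.
from itertools import combinations

def is_connected(nodes, edges):
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == nodes

def connectivity(nodes, edges):
    """Minimum number of vertex removals that disconnect the graph."""
    for k in range(len(nodes)):
        for cut in combinations(nodes, k):
            rest = set(nodes) - set(cut)
            if len(rest) <= 1 or not is_connected(rest, edges):
                return k
    return len(nodes) - 1

def contract_non_edge(nodes, edges, u, v):
    """Merge non-adjacent u and v into u; parallel edges collapse via the set."""
    merged = {frozenset({u if x == v else x, u if y == v else y})
              for x, y in edges}
    return [n for n in nodes if n != v], [tuple(e) for e in merged if len(e) == 2]

# C_4 is 2-connected; contracting the non-edge {0, 2} gives a path, which is
# only 1-connected - so this non-edge is non-contractible.
c4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(connectivity(*c4), connectivity(*contract_non_edge(*c4, 0, 2)))
```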

Relevance: 10.00%

Abstract:

The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries getting drained in forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy constrained and energy unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise-ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay and, consequently, its availability for relaying, depends not only on the relay's energy harvesting process, but also on its transmit power setting and the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
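The distinction between energy constrained and energy unconstrained relays can be illustrated with a toy simulation (a hypothetical model, not the paper's analysis): a relay volunteers whenever its battery holds at least one transmission's worth of energy, and its long-run availability depends on the harvest rate relative to the transmit cost.

```python
# Toy simulation (hypothetical model): an EH relay volunteers whenever its
# battery holds at least one transmission's worth of energy; its long-run
# availability depends on the harvest rate vs the transmit cost.
import random

def relay_availability(harvest_rate, tx_cost, steps=200000, seed=1):
    rng = random.Random(seed)
    battery, available = 0.0, 0
    for _ in range(steps):
        battery += rng.expovariate(1.0 / harvest_rate)  # stationary harvest
        if battery >= tx_cost:                          # enough energy: relay
            battery -= tx_cost
            available += 1
    return available / steps

# harvesting more than it spends -> effectively energy-unconstrained
print(round(relay_availability(2.0, 1.0), 2))
# harvesting less than it spends -> energy-constrained, availability ~ rate/cost
print(round(relay_availability(0.5, 1.0), 2))
```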

Relevance: 10.00%

Abstract:

This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite—it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n²) bits per node, and non-3-colourable graphs, which require Ω(n²/log n) bits per node—any pure graph property admits a trivial proof of size O(n²).
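The bipartiteness example above can be made concrete; a minimal sketch of the local verifier (a centralized simulation of the per-node checks):

```python
# Sketch of the bipartiteness example from the abstract: the proof is a
# 2-colouring (1 bit per node), and each node checks only its own colour
# against its neighbours' - a constant-time local verifier.
def local_verify_bipartite(adj, colour):
    """adj: node -> neighbours; colour: node -> 0/1 (the proof).
    Every node accepts iff the proof is a valid 2-colouring."""
    accepts = {v: all(colour[v] != colour[u] for u in adj[v]) for v in adj}
    return all(accepts.values())

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # C_4, bipartite
print(local_verify_bipartite(square, {0: 0, 1: 1, 2: 0, 3: 1}))  # valid proof
print(local_verify_bipartite(square, {0: 0, 1: 0, 2: 0, 3: 1}))  # node rejects
```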

Relevance: 10.00%

Abstract:

An algorithm to generate a minimal spanning tree is presented for the case where the nodes, their coordinates in some m-dimensional Euclidean space, and the corresponding metric are given. The algorithm is tested on manually generated data sets. Its worst-case time complexity is O(n log2 n) for a collection of n data samples.
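As a point of comparison, a standard minimum-spanning-tree construction over coordinate data (Prim's algorithm with a heap under the Euclidean metric; a stand-in, not the paper's algorithm) looks like:

```python
# MST over m-dimensional coordinate data: Prim's algorithm with a heap,
# using the Euclidean metric (a generic stand-in, not the paper's algorithm).
import heapq, math

def euclidean_mst(points):
    """points: list of m-dimensional coordinate tuples; returns tree edges."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree, edges = {0}, []
    heap = [(dist(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        d, i, j = heapq.heappop(heap)
        if j in in_tree:                 # stale entry: j was added earlier
            continue
        in_tree.add(j)
        edges.append((i, j, d))
        for k in range(n):               # relax edges out of the new vertex
            if k not in in_tree:
                heapq.heappush(heap, (dist(j, k), j, k))
    return edges

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
tree = euclidean_mst(pts)
print(tree, sum(d for *_, d in tree))
```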