215 results for "saturated throughput"

Relevance:

10.00%

Publisher:

Abstract:

The behaviour of saturated soils undergoing consolidation is very complex and may not follow Terzaghi's theory over the entire consolidation process. Different soils may fit Terzaghi's theory only over specific stages of the consolidation process (percentage of consolidation). This may be one of the reasons for the difficulties faced by the existing curve-fitting procedures in obtaining the coefficient of consolidation, c_v. It has been shown that the slope of the initial linear portion of the theoretical log U-log T curve is constant over a wider range of degree of consolidation, U, than in the other methods in use. This initial, well-defined straight line in the log U-log T plot intersects the U = 100% line at T = pi/4, which corresponds to U = 88.3%. The proposed log delta-log t method is based on this observation and gives the value of c_v through a simple graphical construction. In the proposed method, which is more versatile, identification of the characteristic straight lines is very clear, the intersection of these lines is more precise, and the method does not depend upon the initial compression for the determination of c_v.
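The characteristic point the method exploits can be checked numerically: Terzaghi's theoretical average degree of consolidation evaluates to U = 88.3% at T = pi/4, and c_v then follows from the measured time at that point via c_v = T H^2 / t. A minimal sketch (function names are ours; H is the drainage path length, e.g. half the specimen height for double drainage):

```python
import math

def degree_of_consolidation(T, terms=50):
    """Terzaghi's theoretical average degree of consolidation U(T):
    U = 1 - sum_m 8/((2m+1)^2 pi^2) * exp(-(2m+1)^2 pi^2 T / 4)."""
    s = sum(8.0 / ((2 * m + 1) ** 2 * math.pi ** 2)
            * math.exp(-((2 * m + 1) ** 2) * math.pi ** 2 * T / 4.0)
            for m in range(terms))
    return 1.0 - s

def cv_from_log_delta_log_t(t_883, H):
    """c_v from the characteristic intersection point T = pi/4 (U = 88.3%):
    c_v = (pi/4) * H^2 / t_88.3, with t_883 in s and H (drainage path) in m."""
    return (math.pi / 4.0) * H ** 2 / t_883
```

For example, a specimen with H = 0.01 m reaching 88.3% consolidation at t = 100 s gives c_v ~ 7.85e-7 m^2/s.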


The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm is degraded compared with that of the standard LMS algorithm, and worsens as the adaptation delay increases. Existing pipelined DLMS architectures have large adaptation delays and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by applying a number of function-preserving transformations to the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of this paper, we extend the synthesis methodology described in the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput, and exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. (C) 1999 Elsevier Science B.V. All rights reserved.
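The delayed update at the core of the DLMS algorithm is easy to state: coefficients are adapted using an error and regressor that are D samples old, w(n+1) = w(n) + mu * e(n-D) * x(n-D). A small system-identification sketch (the plant h and step size are illustrative, not from the paper; D = 0 recovers standard LMS):

```python
import numpy as np

def dlms(x, d, taps, mu, delay):
    """Delayed LMS: w(n+1) = w(n) + mu * e(n-D) * x(n-D)."""
    w = np.zeros(taps)
    xvec = np.zeros(taps)                 # current regressor [x(n)..x(n-taps+1)]
    xhist = [np.zeros(taps)] * delay      # D-deep pipeline of old regressors
    ehist = [0.0] * delay                 # D-deep pipeline of old errors
    for n in range(len(x)):
        xvec = np.roll(xvec, 1)
        xvec[0] = x[n]
        e = d[n] - w @ xvec               # filtering error at time n
        xhist.append(xvec.copy())
        ehist.append(e)
        xd, ed = xhist.pop(0), ehist.pop(0)  # pair from time n - D
        w = w + mu * ed * xd              # delayed coefficient update
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])            # unknown plant (illustrative)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]            # noise-free desired signal
w_lms = dlms(x, d, 3, 0.01, 0)            # standard LMS (D = 0)
w_dlms = dlms(x, d, 3, 0.01, 5)           # DLMS with adaptation delay D = 5
```

With a small enough step size both variants converge to the plant coefficients; the delayed version simply takes longer, which is the convergence degradation the paper's minimal-delay architecture mitigates.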


We consider the problem of wireless channel allocation to multiple users. A slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot. In this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of different users form independent, identically distributed (iid) sequences with the same distribution, and that their distribution and number be known to the scheduler. This limits the usefulness of opportunistic splitting. In this paper we develop a parametric version of this algorithm. The optimal parameters of the algorithm are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution. The statistics of these metrics and the number of users can be unknown and may vary with time. Each metric sequence can be Markov. We prove the convergence of the algorithm and show its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality-of-service constraints.
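For readers unfamiliar with the baseline, opportunistic splitting resolves the best user over a few mini-slots using ternary idle/success/collision feedback; the paper's contribution is learning the thresholds online. A toy version of the non-parametric baseline, assuming metrics already normalized to (0, 1] (the threshold-update rules here are a simplified sketch, not the paper's scheme):

```python
def opportunistic_split(metrics, max_minislots=10):
    """One-slot selection: users whose metric falls in (lo, hi] contend;
    idle/success/collision feedback shrinks or shifts the window."""
    n = len(metrics)
    lo, hi = 1.0 - 1.0 / n, 1.0    # start so ~1 user exceeds lo on average
    for _ in range(max_minislots):
        contenders = [i for i, m in enumerate(metrics) if lo < m <= hi]
        if len(contenders) == 1:
            return contenders[0]                     # success: winner found
        if not contenders:
            hi, lo = lo, max(0.0, lo - (hi - lo))    # idle: slide window down
        else:
            lo = (lo + hi) / 2.0                     # collision: split upward
    return None                                      # unresolved slot
```

For example, with metrics [0.1, 0.95, 0.6, 0.3] the first mini-slot already isolates user 1; with [0.1, 0.2, 0.3, 0.4] the window slides down until user 3 wins alone.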


Relay selection combined with buffering of packets at relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays due to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes: a receiver can decode a packet from non-contiguous and unordered portions of the received signal. In our method, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off against transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Although the system gives rise to analytically intractable G/GI/1 queues, we also gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
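The queue-dependent thinning can be sketched directly: each relay ignores incoming coded bits with a probability that grows with its backlog, so decoding tends to complete first at lightly loaded relays. The linear drop law, per-slot service probability, and one-packet-per-slot arrival model below are our illustrative assumptions, not the paper's:

```python
import random

def drop_probability(queue_len, q_max=20):
    """Hypothetical thinning policy: the probability of ignoring incoming
    coded bits grows linearly with the relay's queue occupancy."""
    return min(1.0, queue_len / q_max)

def simulate(n_slots=100000, n_relays=3, seed=1):
    random.seed(seed)
    queues = [0] * n_relays
    for _ in range(n_slots):
        # Source broadcast: each relay listens unless it thins the packet.
        decoders = [i for i in range(n_relays)
                    if random.random() > drop_probability(queues[i])]
        if decoders:
            # The packet lands at the listening relay with the shortest
            # backlog (a stand-in for "first relay to decode": relays with
            # small queues are more likely to have kept listening).
            j = min(decoders, key=lambda k: queues[k])
            queues[j] += 1
        # Each relay independently serves its head-of-line packet.
        for i in range(n_relays):
            if queues[i] and random.random() < 0.4:
                queues[i] -= 1
    return queues
```

Because a relay at the cap never decodes, backlogs stay bounded and the load spreads across relays rather than piling up at one of them.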


The half-duplex constraint, which mandates that a cooperative relay cannot transmit and receive simultaneously, considerably simplifies the demands made on the hardware and signal processing capabilities of a relay. However, the very inability of a relay to transmit and receive simultaneously leads to a potential under-utilization of time and bandwidth resources available to the system. We analyze the impact of the half-duplex constraint on the throughput of a cooperative relay system that uses rateless codes to harness spatial diversity and efficiently transmit information from a source to a destination. We derive closed-form expressions for the throughput of the system, and show that as the number of relays increases, the throughput approaches that of a system that uses more sophisticated full-duplex nodes. Thus, half-duplex nodes are well suited for cooperation using rateless codes despite the simplicity of both the cooperation protocol and the relays.


We consider the simplest IEEE 802.11 WLAN networks for which analytical models are available and seek to provide an experimental validation of these models. Our experiments include the following cases: (i) two nodes with saturated queues, sending fixed-length UDP packets to each other, and (ii) a TCP-controlled transfer between two nodes. Our experiments are based entirely on Aruba AP-70 access points operating under Linux. We report our observations on certain non-standard behavior of the devices. In cases where the devices adhere to the standards, we find that the results from the analytical models estimate the experimental data with a mean error of 3-5%.
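The best-known analytical model for saturated 802.11 nodes is Bianchi's fixed point, which couples each station's transmission probability tau to its conditional collision probability p. A minimal sketch of that fixed point for the two-node saturated case (W and m are illustrative backoff parameters; this is the generic model, not necessarily the exact variant validated in the paper):

```python
def bianchi_tau(n, W=32, m=5, iters=200):
    """Iterate Bianchi's saturation fixed point for n stations:
        tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m))
        p   = 1 - (1 - tau)^(n-1)
    Returns (tau, p): per-station transmission probability and
    conditional collision probability."""
    tau, p = 0.1, 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau = (2.0 * (1.0 - 2.0 * p)
               / ((1.0 - 2.0 * p) * (W + 1)
                  + p * W * (1.0 - (2.0 * p) ** m)))
    return tau, p

tau, p = bianchi_tau(2)   # two saturated nodes, as in experiment (i)
```

With these probabilities in hand, slot-type probabilities (idle, success, collision) and hence the saturation throughput follow from the standard slot-time accounting.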


We consider the problem of scheduling a wireless channel among multiple users. A slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot. In this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of different users form independent, identically distributed (iid) sequences with the same distribution, and that their distribution and number be known to the scheduler. This limits the usefulness of opportunistic splitting. In this paper we develop a parametric version of this algorithm. The optimal parameters of the algorithm are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution. The statistics of these metrics and the number of users can be unknown and may vary with time. We prove the convergence of the algorithm and show its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality-of-service constraints.


We review the current status of various aspects of biopolymer translocation through nanopores and the challenges and opportunities it offers. Much of the interest generated by nanopores arises from their potential application to cheap and fast third-generation genome sequencing. Although the ultimate goal of single-nucleotide identification has not yet been reached, great advances have been made from both a fundamental and an applied point of view, particularly in controlling the translocation time, in fabricating various kinds of synthetic pores or genetically engineering protein nanopores with tailored properties, and in devising methods (used separately or in combination) aimed at discriminating nucleotides based on ionic or transverse electron currents, optical readout signatures, or the capabilities of the cellular machinery. Recently, exciting new applications have emerged for the detection of specific proteins and toxins (stochastic biosensors) and for the study of protein folding pathways and binding constants of protein-protein and protein-DNA complexes. The combined use of nanopores and advanced micromanipulation techniques involving optical/magnetic tweezers with high spatial resolution offers unique opportunities for improving the basic understanding of the physical behavior of biomolecules in confined geometries, with implications for the control of crucial biological processes such as protein import and protein denaturation. We highlight the key works in these areas along with future prospects. Finally, we review theoretical and simulation studies aimed at improving fundamental understanding of the complex microscopic mechanisms involved in the translocation process. Such understanding is a prerequisite to fruitful application of nanopore technology in high-throughput devices for molecular biomedical diagnostics.


Nanoclusters of bimetallic Pt-Ru are electrochemically deposited on a conductive polymer, poly(3,4-ethylenedioxythiophene) (PEDOT), which is itself electrochemically deposited on a carbon paper substrate. The bimetallic deposition is carried out in an acidic electrolyte consisting of chloroplatinic acid and ruthenium chloride at 0.0 V versus the saturated calomel electrode (SCE) on PEDOT-coated carbon paper. A thin layer of PEDOT on the carbon paper substrate facilitates the formation of uniform, well-dispersed Pt-Ru nanoclusters with a mean diameter of 123 nm, which consist of nanosize particles. In the absence of PEDOT, the clusters are about 251 nm in size and are unevenly distributed on the carbon paper substrate. Cyclic voltammetry studies suggest that the peak currents of methanol oxidation are several times greater on the Pt-Ru-PEDOT electrode than on a Pt-Ru electrode without PEDOT. (C) 2011 Elsevier B.V. All rights reserved.


A distinctive feature of the Nhecolandia, a sub-region of the Pantanal wetland in Brazil, is the presence of both saline and freshwater lakes. The saline lakes used to be attributed to a past arid phase during the Pleistocene. However, recent studies have shown that the saline and freshwater lakes are linked by a continuous water table, indicating that the saline water could come from a contemporary concentration process. This concentration process could also be responsible for the large chemical variability of the waters observed in the area. A regional water sampling campaign was conducted in surface water, sub-surface water, and the water table, and the results of the geochemical and statistical analysis are presented. Based on sodium contents, the concentration shows a 1:4443 ratio. All the samples belong to the same chemical family and evolve in a sodic alkaline manner. Calcite or magnesian calcite precipitates very early in the process of concentration, probably followed by the precipitation of magnesian silicates. The most concentrated solutions remain under-saturated with respect to the sodium carbonate salt, even if this equilibrium is likely reached around the saline lakes. Apparently, significant amounts of sulfate and chloride are lost simultaneously from the solutions, and this cannot be explained solely by evaporative concentration. It could instead be attributed to sorption on reduced minerals in a green sub-surface horizon in the "cordilhieira" areas. In the saline lakes, the low potassium, phosphate, magnesium, and sulfate contents are attributed to algal blooms. Under the influence of evaporation, the concentration of solutions and the associated chemical precipitations are identified as the main factors responsible for the geochemical variability in this environment (about 92% of the variance). Therefore, the saline lakes of Nhecolandia have to be managed as landscape units in equilibrium with the present water flows, not as inherited from a past arid phase.
In order to elaborate hydrochemical tracers for a quantitative estimation of water flows, three points have to be investigated more precisely: (1) the quantification of magnesium involved in the Mg-calcite precipitation; (2) the identification of the precise stoichiometry of the Mg-silicate; and (3) the verification of the loss of chloride and sulfate by sorption onto labile iron minerals.


In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory in a way that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes, so that the bulk data handling time spent during IPC can be reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show an impressive gain of 12% to 21% in throughput for static file serving, and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving, for a server using the NABC APIs over one using the UNIX APIs.
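NABC's system calls and user library are not publicly available, but the copy-elimination idea on the send path can be illustrated with the standard sendfile primitive, which keeps file bytes in the kernel rather than copying them through user-space buffers (the helper below is our illustration of the general technique, not a NABC API):

```python
import os
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> int:
    """Send a static file on a connected socket without copying its bytes
    through user space: os.sendfile moves data kernel-to-kernel, the same
    class of copy avoidance NABC targets with its unified kernel cache."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # sendfile may send fewer bytes than requested; loop until done.
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    return sent
```

A read()/send() loop would instead copy each block kernel-to-user and back; eliminating those two copies per block is exactly the kind of saving behind NABC's static-file throughput gains.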


Because of frequent topology changes and node failures, providing quality-of-service routing in mobile ad hoc networks is a very critical issue. Quality of service can be provided by routing the data along multiple paths. Such selection of multiple paths helps to improve reliability and load balancing, and to reduce the delay introduced by route rediscovery in the presence of path failures. There are basically two issues in such multipath routing. First, the sender node needs to obtain accurate topology information; since the nodes are continuously roaming, this is a tough task. Here, we propose an algorithm which constructs a highly accurate network topology with minimum overhead. The second issue is that the paths in the path set should offer the best reliability and network throughput. This is achieved in two ways: (1) by the choice of a proper metric, which is a function of residual power and of the traffic load on the node and in the surrounding medium, and (2) by allowing reliable links to be shared between different paths.
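The abstract does not give the metric's exact form, but a node score combining residual power with local and medium load can be sketched as a weighted sum, with paths then ranked by their bottleneck node (the weights, normalization to [0, 1], and max-min path rule are our illustrative assumptions):

```python
def node_metric(residual_power, node_load, medium_load,
                w_p=0.5, w_n=0.3, w_m=0.2):
    """Hypothetical composite metric: reward remaining energy, penalize
    traffic load at the node and in the surrounding medium.
    All inputs are assumed normalized to [0, 1]."""
    return w_p * residual_power - w_n * node_load - w_m * medium_load

def best_path(paths, metrics):
    """Pick the path whose weakest (bottleneck) node has the highest
    metric: a max-min selection over the candidate path set."""
    return max(paths, key=lambda p: min(metrics[n] for n in p))
```

For example, a path through a well-charged, lightly loaded node beats one through a nearly drained node sitting in a busy neighbourhood, even if both paths have the same length.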


Organic nanoparticles consisting of single conjugated polymer chains were investigated as a function of degree of conjugation by means of single-molecule spectroscopy. The degree of conjugation was synthetically controlled. For highly conjugated chains, singlet excitons are efficiently funneled over nanometer distances to a small number of sites. In contrast, chains with less conjugation and a high number of saturated bonds do not exhibit energy funneling due to a highly disordered conformation.


Copper(II) complexes [Cu(satp)(L)] (1-3) of a Schiff base thiolate (salicylidene-2-aminothiophenol, H2satp) and phenanthroline bases (L), viz. 1,10-phenanthroline (phen in 1), dipyrido[3,2-d:2',3'-f]quinoxaline (dpq in 2) and dipyrido[3,2-a:2',3'-c]phenazine (dppz in 3), were prepared and characterized, and their anaerobic DNA photocleavage activity and hypoxic photocytotoxicity were studied. The redox-active complexes show the Cu(II)-Cu(I) couple near -0.5 V for 1 and near 0.0 V vs. SCE (saturated calomel electrode) for 2 and 3. The one-electron paramagnetic complexes (~1.85 mu_B) are avid DNA binders, giving K_b values within 1.0 x 10^5 - 8.0 x 10^5 M^-1. Thermal melting and viscosity data, along with molecular docking calculations, suggest DNA groove and/or partial intercalative binding of the complexes. The complexes show anaerobic DNA cleavage activity in red light under argon via a type-I pathway, while DNA photocleavage in air proceeds via a hydroxyl radical pathway. DFT (density functional theory) calculations reveal a thiyl radical pathway for the anaerobic DNA photocleavage activity and suggest the possibility of generation of a transient copper(I) species due to bond breakage between the copper and sulfur to generate the thiyl radical. Oxidation of the copper(I) species is likely by oxygen in an aerobic medium, or by the buffer medium under anaerobic conditions. Complex 3 exhibits significant photocytotoxicity in HeLa cells (IC_50 = 8.3 +/- 1.0 muM) in visible light, while showing lower dark toxicity (IC_50 = 17.2 +/- 1.0 muM). A significant reduction in the dark toxicity is observed under hypoxic cellular conditions (IC_50 = 30.0 +/- 1.0 muM in the dark), while the photocytotoxicity is retained (IC_50 = 8.0 +/- 1.0 muM). (C) 2011 Elsevier Inc. All rights reserved.


A systematic study of the variation of Mössbauer hyperfine parameters with grain size in nanocrystalline zinc ferrite has been lacking. In the present study, nanocrystalline ZnFe2O4 ferrites with different grain sizes were prepared by the ball-milling technique and characterised by X-ray, EDAX, magnetisation and Mössbauer studies. The grain size decreases with increasing milling time, and the lattice parameter is found to be slightly higher than the bulk value. Magnetisation at room temperature (RT) and at 77 K could not be saturated with a magnetic field of 7 kOe, and the observed magnetisation at these temperatures can be explained on the basis of the deviation of the cation distribution from the normal spinel structure. The Mössbauer spectra were recorded at different temperatures between RT and 16 K. The values of quadrupole splitting at RT are higher for the milled samples, indicating the disordering of ZnFe2O4 on milling. The strength of the magnetic hyperfine interactions increases with grain size reduction, and this can be explained on the basis of the distribution of Fe3+ ions over both tetrahedral and octahedral sites.