215 results for Saturated throughput


Relevance:

10.00%

Publisher:

Abstract:

VHF nighttime scintillations recorded during a period of high solar activity at a meridian chain of stations covering a magnetic latitude belt of 3°–21°N (420 km subionospheric points) are analyzed to investigate the influence of equatorial spread F irregularities on the occurrence of scintillation at latitudes away from the equator. Observations show that saturated amplitude scintillations start abruptly about one and a half hours after ground sunset; their onset is almost simultaneous at stations whose subionospheric points lie within 12°N of the magnetic equator, but is delayed by 15 min to 4 hours at the station whose subionospheric point is at 21°N magnetic latitude. In addition, the occurrence of postsunset scintillations at all the stations is found to be conditional on their prior occurrence at the equatorial station: if no postsunset scintillation activity is seen at the equatorial station, no scintillations are seen at the other stations either. The occurrence of scintillations is attributed to plasma bubbles and associated irregularities rising over the magnetic equator and subsequently mapping down the magnetic field lines to the F region at higher latitudes through an essentially instantaneous mechanism; an equatorial control is thereby established on the generation of postsunset scintillation-producing irregularities over the entire low-latitude belt.

Relevance:

10.00%

Publisher:

Abstract:

The presence of lipids has been demonstrated in mycobacteriophage I3. The total lipid comprised 69% phospholipids and 31% neutral lipids. More than two-thirds of the phospholipids present in the phage were synthesized in the host prior to infection. The fatty acid composition of the phage differed markedly from that of its host, both in chain length and in degree of saturation. The phage lipid was composed mostly of saturated fatty acids, of which more than 50% were short-chain fatty acids. Changes in growth temperature were reflected in variations in fatty acid composition that were characteristic of the phage and distinctly different from those of the host. Electron microscopic observations revealed that the phage has a membranous bilayer structure. The presence of lipids may facilitate the phage-host interaction, especially in lipid-rich organisms such as mycobacteria.

Relevance:

10.00%

Publisher:

Abstract:

Brillouin scattering by one-phonon-two-magnon interacting excitations in ferromagnetic dielectrics is discussed. The basic light-scattering mechanism is taken to be the modulation of the density-dependent optical dielectric polarizability of the medium by the dynamic strain field generated by longitudinal acoustic (LA) phonons. The renormalization effects arising from the scattering of phonons by two-magnon creation-annihilation processes are, however, taken into account. Via these interactions, the Brillouin components corresponding to the two-magnon excitations appear indirectly in the spectrum of the phonon-scattered light as a broadening of the otherwise relatively sharp Brillouin doublet. The present mechanism is shown to be dominant in a clean, saturated ferromagnetic dielectric with a large magnetostrictive coupling constant and with the magnetic ions in orbitally quenched states. Following linear response theory, an expression is derived for the spectral density of the scattered light as a function of temperature, scattering angle, and the strength of the externally applied magnetic field. Estimates are given for the linewidth and line shift of the Brillouin components for certain typical choices of the parameters involved. The results are discussed in relation to available calculations of ultrasonic attenuation in ferromagnetic insulators at low temperatures.

Relevance:

10.00%

Publisher:

Abstract:

Hydrogenation of some α,β-unsaturated carbonyl compounds using potassium pentacyanocobaltate(II), K3Co(CN)5, as a homogeneous catalyst has been investigated. Thus, hydrogenation of l-carvone (1), mesityl oxide (4), 2-cyclohexenone (8) and benzalacetone (6) afforded the corresponding dihydro compounds. Hydrogenation of β-ionone (10) afforded a mixture of the α,β-dihydro compounds (14) and (15). In all these cases, it was observed that the reaction proceeded to completion only in the presence of added base. Hydrogenation of 5α-androst-1-en-17β-ol-3-one acetate (19) afforded the saturated compound, 5α-androstan-17β-ol-3-one (20), in 60% yield. It was found that other steroid enones and dienones were not reduced by this catalyst system.

Relevance:

10.00%

Publisher:

Abstract:

The prognosis of patients with glioblastoma, the most malignant adult glial brain tumor, remains poor in spite of advances in treatment, including surgical resection, irradiation and chemotherapy. The genetic heterogeneity of glioblastoma warrants extensive studies in order to gain a thorough understanding of the biology of this tumor. While there have been several studies of global transcript profiling of glioma, with the identification of gene signatures for diagnosis and disease management, translation into the clinic is yet to happen. Serum biomarkers have the potential to revolutionize the process of cancer diagnosis, grading, prognostication and treatment response monitoring. Besides the advantage that serum can be obtained through a less invasive procedure, it contains molecules spanning an extraordinary dynamic range of ten orders of magnitude in concentration. While conventional methods such as 2DE have been in use for many years, the ability to identify proteins through mass spectrometry techniques such as MALDI-TOF led to an explosion of interest in proteomics. Relatively new high-throughput proteomic methods such as SELDI-TOF and protein microarrays are expected to hasten the process of serum biomarker discovery. This review highlights recent advances in proteomics platforms for discovering serum biomarkers and the current status of glioma serum markers. We aim to present the principles and potential of the latest proteomic approaches and their applications in the biomarker discovery process. Besides providing a comprehensive list of available serum biomarkers of glioma, we also propose how these markers could revolutionize the clinical management of glioma patients.

Relevance:

10.00%

Publisher:

Abstract:

Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and the supporting hardware comprising compute, storage and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that these bound CEs are neither processor cores nor logic elements as in FPGAs; hence, REDEFINE offers the power and performance advantage of an ASIC together with the hardware reconfigurability and programmability of an FPGA or instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set-extensible processors determine, at design time, a sequence of instructions that repeatedly occurs within the application and create custom instructions to speed up the execution of this sequence. We extend this scheme further: a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing; to reduce the overhead of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation, and the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
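As a rough illustration of the producer-consumer grouping idea mentioned above, the following is a minimal sketch in Python (not the REDEFINE compiler; all node and function names are hypothetical) that clusters a dataflow graph into candidate custom instructions by following exclusive producer-consumer chains.

```python
# Minimal sketch (not the REDEFINE compiler): group dataflow nodes into
# candidate "custom instructions" by following single-consumer
# producer-consumer chains. Node and edge names are hypothetical.

from collections import defaultdict

def cluster_producer_consumer(nodes, edges):
    """nodes: iterable of node ids; edges: list of (producer, consumer)."""
    consumers = defaultdict(list)   # producer -> list of consumers
    in_degree = defaultdict(int)    # consumer -> number of producers
    for p, c in edges:
        consumers[p].append(c)
        in_degree[c] += 1

    visited = set()
    clusters = []
    for n in nodes:
        if n in visited or in_degree[n] != 0:
            continue                # start chains only at graph sources
        chain, cur = [], n
        while cur is not None and cur not in visited:
            chain.append(cur)
            visited.add(cur)
            nxt = consumers[cur]
            # extend the chain only through an exclusive producer-consumer link
            cur = nxt[0] if len(nxt) == 1 and in_degree[nxt[0]] == 1 else None
        clusters.append(chain)
    # remaining nodes (fan-in/fan-out points) become singleton clusters
    clusters += [[n] for n in nodes if n not in visited]
    return clusters

# Example: a chain a->b->c with a side output b->d
print(cluster_producer_consumer(
    ["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("b", "d")]))
```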

Relevance:

10.00%

Publisher:

Abstract:

The downlink scheduling problem in multi-queue multi-server systems under channel uncertainty is considered. Two policies that make allocations based on predicted channel states are proposed. The first is an extension of the well-known dynamic backpressure policy to the uncertain-channel case; the second is a variant that improves delay performance under light loads. The stability region of the system is characterised, and the first policy is argued to be throughput optimal. A recently proposed policy of Kar et al. [1] has lower complexity but is shown to be throughput suboptimal. Further, simulations demonstrate better delay and backlog properties for both our policies at light loads.
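For readers unfamiliar with backpressure scheduling, here is a minimal MaxWeight-style sketch in Python of the general idea (illustrative only; it is not the paper's exact policy, and the queue and rate names are hypothetical): each server serves the queue that maximizes the product of its backlog and the predicted channel rate.

```python
# Minimal backpressure-style sketch (illustrative only, not the paper's
# exact policy): each server serves the queue with the largest product of
# current backlog and predicted channel rate for that (queue, server) pair.

def backpressure_allocate(backlogs, predicted_rates):
    """backlogs: dict queue -> packets waiting.
    predicted_rates: dict (queue, server) -> predicted service rate."""
    servers = {s for (_, s) in predicted_rates}
    allocation = {}
    for s in servers:
        best_q, best_w = None, 0.0
        for q, b in backlogs.items():
            w = b * predicted_rates.get((q, s), 0.0)
            if w > best_w:
                best_q, best_w = q, w
        allocation[s] = best_q          # None means the server stays idle
    return allocation

# Example with two queues and two servers
backlogs = {"q1": 10, "q2": 3}
rates = {("q1", "s1"): 0.4, ("q2", "s1"): 0.9,
         ("q1", "s2"): 0.8, ("q2", "s2"): 0.2}
print(backpressure_allocate(backlogs, rates))
# with these numbers both servers are assigned to q1
```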

Relevance:

10.00%

Publisher:

Abstract:

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets (individually and in combination), and the design of more specific and safer drugs. Computational modeling and simulation are important constituents of new-age biology because they are essential for comprehending the large-scale data generated by high-throughput experiments and for generating hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. It starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. The review then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and for considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches, and perspectives for future development, will also be obtained. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study while at the same time allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of the methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.

Relevance:

10.00%

Publisher:

Abstract:

A loop heat pipe is a passive two-phase heat transport device that is gaining importance as a part of spacecraft thermal control systems and also in other applications such as avionics cooling and submarines. Hard fill of a loop heat pipe occurs when the compensation chamber is full of liquid. A theoretical study is undertaken to investigate the issues underlying the loop heat pipe hard-fill phenomenon. The results of the study suggest that the mass of charge and the presence of a bayonet have a significant impact on loop heat pipe operation. With a larger mass of charge, a loop heat pipe hard fills at a lower heat load. As the heat load increases, there is a steep rise in the loop heat pipe operating temperature. In a loop heat pipe with a saturated compensation chamber, and also in a hard-filled loop heat pipe without a bayonet, the temperature of the compensation chamber and that of the liquid core are nearly equal. When a loop heat pipe with a bayonet hard fills, the compensation chamber and evaporator core temperatures differ.

Relevance:

10.00%

Publisher:

Abstract:

We extend the modeling heuristic of Harsha et al. (2006, IEEE IWQoS '06, pp. 178–187) to evaluate the performance of an IEEE 802.11e infrastructure network carrying packet telephone calls, streaming video sessions and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We identify the time boundaries of activities on the channel (channel slot boundaries) and derive a Markov renewal process (MRP) of the contending nodes on these epochs. This is achieved by using the attempt probabilities of the contending nodes obtained from the saturation fixed-point analysis of Ramaiyan et al. (2005, Proceedings of ACM Sigmetrics '05; journal version accepted for publication in IEEE TON). Regenerative analysis of this MRP yields the desired steady-state performance measures. We then use the MRP model to develop an effective bandwidth approach for obtaining a bound on the size of the buffer required at the video queue of the AP, such that the streaming video packet loss probability is kept below 1%. The results obtained match well with simulations using the network simulator ns-2. We find that, with the default IEEE 802.11e EDCA parameters for access categories AC 1, AC 2 and AC 3, the voice call capacity decreases if even one streaming video session and one TCP file download are initiated by some wireless station. For every voice call removed, the video downlink stream throughput increases by 0.38 Mbps and the file download capacity by 0.14 Mbps (for the 11 Mbps PHY). We find that a buffer size of 75 KB is sufficient to ensure that the video packet loss probability at the QAP is within 1%.
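As a generic illustration of an effective-bandwidth style buffer bound (a sketch under an assumed exponential tail decay, not the paper's actual derivation), one can size the buffer from a target loss probability and a decay rate θ via P(Q > B) ≈ e^{-θB}; the decay rate used below is hypothetical.

```python
# Generic effective-bandwidth style buffer sizing sketch (illustrative,
# not the paper's model): assume the tail of the video queue decays as
# P(Q > B) ~ exp(-theta * B) for some decay rate theta obtained from an
# effective-bandwidth analysis, and pick B for a target loss probability.

import math

def buffer_bound(target_loss, theta):
    """Smallest B (in bytes) with exp(-theta * B) <= target_loss."""
    return math.ceil(math.log(1.0 / target_loss) / theta)

# Example: 1% loss target and a hypothetical decay rate of 6e-5 per byte
print(buffer_bound(0.01, 6e-5))   # 76753 bytes, i.e. roughly 75 KB
```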

Relevance:

10.00%

Publisher:

Abstract:

Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master; a scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in Bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves, for both piconets and scatternets, using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within the prescribed parameterized classes by using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that our proposed scheduling algorithm performs better overall, on a wide range of experiments, than the existing algorithms for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks (LCN), Tampa, 2002). Our studies also confirm that our proposed scheme achieves high throughput and low packet delays with reasonable fairness among all the connections.
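For readers unfamiliar with SPSA, the following is a minimal single-timescale SPSA sketch in Python (the paper's algorithm is a two-timescale, multi-layered parameterized variant that is not reproduced here); the cost function and step-size choices are placeholders.

```python
# Minimal SPSA sketch (single timescale, illustrative only; the paper's
# algorithm is two-timescale and multi-layered). Perturb all parameters
# simultaneously with +/-1 Bernoulli noise, estimate the gradient from two
# cost evaluations, and descend.

import random

def spsa_minimize(cost, theta, iters=200, a=0.1, c=0.1):
    theta = list(theta)
    for k in range(1, iters + 1):
        a_k = a / k            # gradient step size
        c_k = c / k ** 0.25    # perturbation size
        delta = [random.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + c_k * d for t, d in zip(theta, delta)]
        minus = [t - c_k * d for t, d in zip(theta, delta)]
        diff = cost(plus) - cost(minus)
        # simultaneous-perturbation gradient estimate, one per coordinate
        theta = [t - a_k * diff / (2.0 * c_k * d)
                 for t, d in zip(theta, delta)]
    return theta

# Example: minimize a simple quadratic as a stand-in for a simulated delay cost
print(spsa_minimize(lambda th: sum(x * x for x in th), [2.0, -3.0]))
```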

Relevance:

10.00%

Publisher:

Abstract:

Run-time interoperability between different applications based on H.264/AVC is an emerging need in networked infotainment, where media delivery must match the desired resolution and quality of the end terminals. In this paper, we describe the architecture and design of a polymorphic ASIC to support this. The H.264 decoding flow is partitioned into modules such that the polymorphic ASIC meets the design goals of low power, low area, high flexibility, high throughput and fast interoperability between different profiles and levels of H.264. We demonstrate the idea with a multi-mode decoder that can decode baseline, main and high profile H.264 streams and can interoperate at run-time across these profiles. The decoder is capable of processing frame sizes of up to 1024 × 768 at 30 fps. The design, synthesized with UMC 0.13 µm technology, occupies 250K gates and runs at 100 MHz.

Relevance:

10.00%

Publisher:

Abstract:

Previous studies have shown that buffering packets in DRAM is a performance bottleneck. In order to understand the impediments in accessing the DRAM, we developed a detailed Petri net model of the IP forwarding application on the IXP2400 that models the different levels of the memory hierarchy. The cell-based interface used to receive and transmit packets in a network processor leads to some small-size DRAM accesses. Such narrow accesses to the DRAM expose the bank access latency, reducing the bandwidth that can be realized. With real traces, up to 30% of the accesses are smaller than the cell size, resulting in a 7.7% reduction in DRAM bandwidth. To overcome this problem, we propose buffering these small chunks of data in the on-chip scratchpad memory. This scheme also exploits a greater degree of parallelism between the different levels of the memory hierarchy. Using real traces from the Internet, we show that the transmit rate can be improved by an average of 21% over the base scheme without the use of additional hardware. Further, the impact of different traffic patterns on the network processor resources is studied. Under real traffic conditions, we show that the data bus that connects the off-chip packet buffer to the micro-engines is the obstacle to achieving higher throughput.
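A minimal sketch of the proposed buffering decision follows (illustrative Python with hypothetical names and an assumed cell size; the IXP2400 microcode and hardware interfaces are not modelled): full cells go to DRAM, while residual chunks smaller than a cell are staged in the on-chip scratchpad.

```python
# Illustrative sketch of the proposed buffering scheme (hypothetical names,
# not IXP2400 microcode): packet data arriving in cells is written to DRAM
# only in full-cell units; residual chunks smaller than the cell size are
# staged in on-chip scratchpad to avoid narrow DRAM accesses.

CELL_SIZE = 64  # bytes per cell (assumed value)

def buffer_packet(packet_len, dram_writes, scratchpad_writes):
    """Append (destination, size) records for one packet of packet_len bytes."""
    full_cells, remainder = divmod(packet_len, CELL_SIZE)
    for _ in range(full_cells):
        dram_writes.append(("dram", CELL_SIZE))
    if remainder:
        scratchpad_writes.append(("scratchpad", remainder))

dram, scratch = [], []
for length in (40, 1500, 576):          # sample packet lengths in bytes
    buffer_packet(length, dram, scratch)
print(len(dram), "full-cell DRAM writes,", len(scratch), "scratchpad writes")
```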

Relevance:

10.00%

Publisher:

Abstract:

Bluetooth is an emerging standard for short-range, low-cost and low-power wireless networks. Its MAC is a generic polling-based protocol, in which a central Bluetooth unit (master) determines channel access for all other nodes (slaves) in the network (piconet). An important problem in Bluetooth is the design of efficient scheduling protocols. This paper proposes a polling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in a Bluetooth piconet. We present an extensive set of simulation results and performance comparisons with two important existing algorithms. Our results indicate that our proposed scheduling algorithm outperforms the Round Robin scheduling algorithm by more than 40% in all cases tried. Our study also confirms that our proposed policy achieves higher throughput and lower packet delays with reasonable fairness among all the connections.

Relevance:

10.00%

Publisher:

Abstract:

Pricing is an effective tool for controlling congestion and achieving quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. The pricing policy depends on a weighted average queue length at each node; this helps reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable improvement over a recently proposed related scheme in terms of both throughput and delay performance. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent over that scheme in all cases studied (over all routes).
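To make the flavour of state-dependent pricing on a smoothed queue length concrete, here is a minimal sketch in Python (illustrative only; the smoothing weight, thresholds and price levels are hypothetical placeholders, not the optimized values computed by the proposed algorithm): a RED-style exponentially weighted average of the queue length is mapped to one of several price levels.

```python
# Illustrative sketch of state-dependent pricing on a weighted average
# queue length (RED-style smoothing). The weight, thresholds and price
# levels are hypothetical, not the paper's optimized values.

def update_avg(avg, current_qlen, weight=0.1):
    """Exponentially weighted moving average of the queue length."""
    return (1.0 - weight) * avg + weight * current_qlen

def price_for(avg, thresholds=(10, 30, 60), prices=(1.0, 2.0, 4.0, 8.0)):
    """Pick a price level from the smoothed queue length."""
    for level, th in enumerate(thresholds):
        if avg < th:
            return prices[level]
    return prices[-1]

avg = 0.0
for qlen in (5, 20, 45, 80, 70):        # sample instantaneous queue lengths
    avg = update_avg(avg, qlen)
    print(f"avg={avg:5.1f}  price={price_for(avg)}")
```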