313 results for "large delay"
Abstract:
There exists a wide range of fish species, besides other aquatic organisms such as squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of the body that undergoes such undulatory motions. In the anguilliform mode, or the eel type, the entire body undergoes undulatory motions in the form of a travelling wave that passes from head to tail, while in the other extreme case, the thunniform mode, only the rear tail (caudal fin) undergoes lateral oscillations. The thunniform mode of swimming is essentially based on the lift force generated by the airfoil-like cross-section of the fish tail as it moves laterally through the water, while the anguilliform mode may be understood using the "reactive theory" of Lighthill. In pulsed jet propulsion, adopted by squids and salps, there are two components to the thrust: the first due to the familiar ejection of momentum, and the second due to an over-pressure at the exit plane caused by the unsteadiness of the jet. The flow immediately downstream of the body in all three modes consists of vortex rings, the differentiating point being the vastly different orientations of the rings. However, since all these bodies are self-propelling, the thrust force must equal the drag force at steady speed, implying no net force on the body, and hence the wake or flow downstream must be momentumless. For such bodies, where there is no net force, it is difficult to directly define a propulsion efficiency, although it is possible to use other, very different measures such as the "cost of transportation" to broadly judge performance.
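The two-component thrust described for pulsed jet propulsion can be written in a standard control-volume form (the symbols below are conventional and are not taken from the paper):

```latex
% Thrust of an unsteady jet: momentum-flux term plus over-pressure term
T = \rho_e A_e u_e^2 + \left(p_e - p_\infty\right) A_e
```

Here $u_e$, $p_e$, $\rho_e$ and $A_e$ are the velocity, pressure, density and area at the jet exit plane, and $p_\infty$ is the ambient pressure; the second, over-pressure term vanishes for a steady jet, which is why it is distinctive of pulsed propulsion.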
Abstract:
This paper presents a method for minimizing the sum of the squares of voltage deviations by a least-squares minimization technique, and thus improving the voltage profile in a given system by adjusting control variables such as the tap positions of transformers, the reactive power injections of VAR sources, and generator excitations. The control variables and dependent variables are related by a matrix J whose elements are computed as a sensitivity matrix. Linear programming is used to calculate voltage increments that minimize transmission losses. The active and reactive power optimization sub-problems are solved separately, taking advantage of the loose coupling between the two problems. The proposed algorithm is applied to the IEEE 14- and 30-bus systems, and numerical results are presented. The method is computationally fast and promises to be suitable for implementation in real-time dispatch centres.
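The least-squares step the abstract describes can be sketched numerically. In this minimal sketch the sensitivity matrix J and the deviation vector are made-up illustrative numbers, not data from the paper; the idea is only that control increments du are chosen to minimize the sum of squared voltage deviations, ||dV + J du||².

```python
import numpy as np

# Hypothetical sensitivity matrix J relating control increments (tap
# positions, VAR injections, excitations) to load-bus voltage changes.
J = np.array([[0.8, 0.1],
              [0.2, 0.9],
              [0.5, 0.4]])

# Illustrative voltage deviations from the target profile at three load buses.
dV = np.array([0.04, -0.03, 0.02])

# Least-squares control increments: minimize ||dV + J du||^2,
# i.e. solve J du = -dV in the least-squares sense.
du, *_ = np.linalg.lstsq(J, -dV, rcond=None)

residual = dV + J @ du           # remaining deviations after the correction
print(np.sum(residual**2) < np.sum(dV**2))
```

In a real dispatch application the increments would additionally be bounded by tap-step and VAR limits, which is where the linear-programming stage mentioned in the abstract comes in.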
Abstract:
Present-day power systems are growing in size and complexity of operation, with interconnections to neighboring systems, the introduction of large generating units, EHV 400/765 kV AC transmission systems, HVDC systems, and more sophisticated control devices such as FACTS. Planning and operational studies require suitable modeling of all components in the power system as increasing numbers of HVDC systems and FACTS devices of different types are incorporated. This paper presents reactive power optimization with three objectives: minimization of the sum of the squares of the voltage deviations (ve) of the load buses, minimization of the sum of the squares of the voltage stability L-indices of the load buses (ΣL²), and minimization of the system real power loss (Ploss). The proposed methods have been tested on a typical sample system. Results for an Indian 96-bus equivalent system, including an HVDC terminal and a UPFC, under normal and contingency conditions are presented.
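In standard notation (the symbols below are conventional, not taken from the paper), the three objectives can be written as:

```latex
f_1 = \sum_{i \in \mathcal{L}} \left(V_i - V_i^{\mathrm{ref}}\right)^2, \qquad
f_2 = \sum_{i \in \mathcal{L}} L_i^2, \qquad
f_3 = P_{\mathrm{loss}} = \sum_{k \in \mathcal{B}} g_k \left(V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij}\right)
```

where $\mathcal{L}$ is the set of load buses, $L_i$ the voltage stability L-index at bus $i$, and $\mathcal{B}$ the set of branches, branch $k$ having conductance $g_k$ between buses $i$ and $j$ with angle difference $\theta_{ij}$.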
Abstract:
We consider a small-extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed; that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm is network-aware and processes measurements as they arrive, but in a time-causal order. In this case the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay: a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change-detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and the delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability that the change has occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
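The posterior-probability statistic at the heart of such Bayesian change detection can be sketched with the classical Shiryaev recursion. Everything here is an illustrative assumption, not the paper's model: a geometric prior with parameter rho, unit-variance Gaussian observations whose mean shifts at the change, and a fixed stopping threshold.

```python
import math
import random

RHO = 0.01          # geometric prior on the change time (assumed)
THRESHOLD = 0.99    # stop when the posterior exceeds this (assumed)

def density(x, mu):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def shiryaev_stop(samples, mu0=0.0, mu1=3.0):
    """Return the index at which the posterior first crosses the threshold."""
    p = 0.0
    for k, x in enumerate(samples):
        prior = p + (1 - p) * RHO                 # change may occur at this step
        num = prior * density(x, mu1)
        den = num + (1 - prior) * density(x, mu0)
        p = num / den                             # Bayes update of P(change <= k)
        if p >= THRESHOLD:
            return k                              # raise the alarm
    return None

random.seed(0)
change_at = 20
samples = ([random.gauss(0.0, 1.0) for _ in range(change_at)] +
           [random.gauss(3.0, 1.0) for _ in range(20)])
alarm = shiryaev_stop(samples)
print(alarm is not None)
```

In the network-aware procedure described in the abstract, the update above would additionally be conditioned on the queueing/delay state of the network, which is what makes the problem a POMDP rather than a plain optimal stopping problem.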
Abstract:
By employing a thermal oxidation strategy, we have grown large-area porous Cu2O from Cu foil. CuO nanorods are first grown by heating the Cu foil; these are in turn heated in an argon atmosphere to obtain a porous Cu2O layer. The porous Cu2O layer is superhydrophobic and exhibits red luminescence. In contrast, Cu2O obtained by direct heating is hydrophobic and exhibits yellow luminescence. Two more luminescence bands are observed in addition to the red and yellow luminescence, corresponding to the recombination of free and bound excitons. Overall, the porous Cu2O obtained from Cu via CuO nanorods can serve as a superhydrophobic luminescent/phosphor material.
Abstract:
An all-digital technique is proposed for generating an accurate delay irrespective of the inaccuracies of a controllable delay line. A subsampling technique-based delay measurement unit (DMU) capable of measuring delays accurately for the full period range is used as the feedback element to build accurate fractional period delays based on input digital control bits. The proposed delay generation system periodically measures and corrects the error and maintains it at the minimum value without requiring any special calibration phase. Up to 40x improvement in accuracy is demonstrated for a commercial programmable delay generator chip. The time-precision trade-off feature of the DMU is utilized to reduce the locking time. Loop dynamics are adjusted to stabilize the delay after the minimum error is achieved, thus avoiding additional jitter. Measurement results from a high-end oscilloscope also validate the effectiveness of the proposed system in improving accuracy.
Abstract:
In the context of the standard model with a fourth generation, we explore the allowed mass spectra in the fourth-generation quark and lepton sectors as functions of the Higgs mass. Using the constraints from unitarity and the oblique parameters, we show that a heavy Higgs allows large mass splittings in these sectors, opening up new decay channels involving W emission. Assuming that the hints for a light Higgs do not yet constitute evidence, we work in a scenario where a heavy Higgs is viable. A Higgs heavier than ~800 GeV would in fact necessitate either a heavy-quark decay channel t' -> b'W / b' -> t'W or a heavy-lepton decay channel tau' -> nu'W, as long as the mixing between the third and fourth generations is small. This mixing tends to suppress the mass splittings and hence the W-emission channels. The possibility of the W-emission channel could substantially change the search strategies for fourth-generation fermions at the LHC and impact the currently reported mass limits.
Abstract:
In the tree cricket Oecanthus henryi, females are attracted by male calls and can choose between males. To make a case for female choice based on male calls, it is necessary to examine male call variation in the field and identify repeatable call features that are reliable indicators of male size or symmetry. Female preference for these reliable call features and the underlying assumption behind this choice, female preference for larger males, also need to be examined. We found that females did prefer larger males during mating, as revealed by the longer mating durations and longer spermatophore retention times. We then examined the correlation between acoustic and morphological features and the repeatability of male calls in the field across two temporal scales, within and across nights. We found that carrier frequency was a reliable indicator of male size, with larger males calling at lower frequencies at a given temperature. Simultaneous playback of male calls differing in frequency, spanning the entire range of natural variation at a given temperature, revealed a lack of female preference for low carrier frequencies. The contrasting results between the phonotaxis and mating experiments may be because females are incapable of discriminating small differences in frequency or because the change in call carrier frequency with temperature renders this cue unreliable in tree crickets. (C) 2012 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, where the number of processors on which an application executes can be changed during execution, can make use of their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration, and rescheduling, and makes runtime decisions for dynamically selecting the fault tolerance actions at different points of application execution to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to a 23% improvement in application performance, and is effective even for petascale systems and beyond.
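The cost-model-driven runtime decision can be illustrated with a toy selector. The cost model below (overhead plus failure-probability-weighted rework) and all of its numbers are illustrative assumptions, not AdFT's actual models; the sketch only shows the shape of the decision: pick the action with the smallest expected cost for the current MTBF.

```python
def expected_cost(overhead, p_fail, rework):
    """Expected cost of an action over one decision interval (toy model)."""
    return overhead + p_fail * rework

def choose_action(mtbf_s, interval_s, actions):
    """Pick the cheapest fault tolerance action for the current failure rate."""
    p_fail = min(1.0, interval_s / mtbf_s)   # crude failure probability
    return min(actions, key=lambda a: expected_cost(a["overhead"],
                                                    p_fail, a["rework"]))

# Hypothetical per-action costs in seconds: overhead now vs. rework on failure.
actions = [
    {"name": "checkpoint",     "overhead": 30.0,  "rework": 300.0},
    {"name": "live-migration", "overhead": 120.0, "rework": 60.0},
    {"name": "reschedule",     "overhead": 600.0, "rework": 0.0},
    {"name": "none",           "overhead": 0.0,   "rework": 3600.0},
]

# Short MTBF (exascale regime): doing nothing becomes the costliest choice.
best = choose_action(mtbf_s=3600, interval_s=1800, actions=actions)
print(best["name"])  # -> live-migration
```

The point of making the decision at runtime, as the abstract argues, is that the cheapest action changes as the observed failure rate and application state change.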
Abstract:
Computational grids with multiple batch systems (batch grids) can be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we evaluate the potential improvements in throughput of long-running multi-component applications when the different components of the applications are executed on multiple batch systems of batch grids. We compare the multiple-batch executions with executions of the components on a single batch system, without increasing the number of processors used. We perform our analysis with a prominent long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). We have built a robust simulator that models the characteristics of both the multi-component application and the batch systems. By conducting a large number of simulations with different workload characteristics and queuing policies of the systems, processor allocations to components of the application, distributions of the components to the batch systems, and inter-cluster bandwidths, we show that multiple-batch executions lead to a 55% average increase in throughput over single-batch executions for long-running CCSM. We also conducted real experiments with a practical middleware infrastructure and showed that multi-site executions lead to effective utilization of batch systems for executions of CCSM and give higher simulation throughput than single-site executions. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
We report on the large-scale synthesis of millimetre-long buckled multiwalled carbon nanotubes by one-step pyrolysis. The current-carrying capability of a highly buckled region is shown to be higher than that of a less buckled region.
Abstract:
In this paper, we employ message passing algorithms over graphical models to jointly detect and decode symbols transmitted over large multiple-input multiple-output (MIMO) channels with low-density parity-check (LDPC) coded bits. We adopt a factor-graph-based technique to integrate the detection and decoding operations. A Gaussian approximation of the spatial interference is used for detection. This serves as a low-complexity joint detection/decoding approach for large-dimensional MIMO systems coded with LDPC codes of large block lengths. This joint processing achieves significantly better performance than separate detection and decoding.
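The Gaussian approximation of spatial interference can be sketched for a toy BPSK MIMO system. Everything here is an illustrative assumption (tiny dimensions, a near-diagonal channel, uninformative symbol priors), not the paper's setup: when detecting symbol x_j, the contributions of the other transmit antennas plus noise are treated as a Gaussian with matched mean and variance, giving a cheap per-symbol soft output.

```python
import numpy as np

rng = np.random.default_rng(1)
nt = 4
# Near-diagonal channel, chosen so this tiny example detects reliably.
H = np.eye(nt) + 0.1 * rng.standard_normal((nt, nt))
x = np.array([1.0, -1.0, 1.0, -1.0])      # BPSK symbols
sigma2 = 0.01
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(nt)

# Uninformative priors on the other symbols: E[x_i] = 0, Var[x_i] = 1.
mean = np.zeros(nt)
var = np.ones(nt)

llr = np.zeros(nt)
for j in range(nt):
    hj = H[:, j]
    others = [i for i in range(nt) if i != j]
    # Interference-plus-noise at each receive antenna treated as Gaussian.
    mu_int = H[:, others] @ mean[others]
    var_int = (H[:, others] ** 2) @ var[others] + sigma2
    # Combine per-antenna Gaussian LLRs for BPSK (matched-filter style).
    llr[j] = np.sum(2.0 * hj * (y - mu_int) / var_int)

print(np.all(np.sign(llr) == x))
```

In the joint scheme of the abstract, these soft outputs would be exchanged iteratively with an LDPC decoder on a factor graph, with the decoder's extrinsic information refining the interference means and variances on each pass.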