Abstract:
Small failures should disrupt only a small part of a network. One way to achieve this is to mark the surrounding area as untrustworthy, thereby circumscribing the failure. This can be done with a distributed algorithm using hierarchical clustering and neighbor relations, and the resulting circumscription is near-optimal for convex failures.
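The circumscription idea can be illustrated with a minimal sketch (the hierarchy, the cluster names, and the marking policy below are illustrative assumptions, not the paper's distributed algorithm): when a node fails, the smallest cluster in the hierarchy that contains it is marked untrustworthy, so the disruption stays proportional to the failure's extent.

```python
# Illustrative sketch only: mark the smallest enclosing cluster of a failed
# node as untrustworthy. The hierarchy is a made-up stand-in.

# Clusters listed from smallest to largest.
hierarchy = [
    {"name": "rack-3", "nodes": {"n7", "n8"}},
    {"name": "row-B", "nodes": {"n5", "n6", "n7", "n8"}},
    {"name": "site-1", "nodes": {"n1", "n2", "n3", "n4", "n5", "n6", "n7", "n8"}},
]

def circumscribe(failed_node):
    """Return the smallest cluster containing the failed node."""
    for cluster in hierarchy:               # smallest first
        if failed_node in cluster["nodes"]:
            return cluster["name"]
    return None

print(circumscribe("n7"))  # -> 'rack-3': only the enclosing rack is distrusted
```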
Abstract:
We give a one-pass, Õ(m^{1-2/k})-space algorithm for estimating the k-th frequency moment of a data stream for any real k > 2. Together with known lower bounds, this resolves the main problem left open by Alon, Matias, and Szegedy (STOC '96). Our algorithm handles deletions as well as insertions of stream elements.
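For reference, the k-th frequency moment that the streaming algorithm estimates is simply the sum of the k-th powers of the item frequencies. The exact, linear-space computation below is only a baseline sketch of the quantity being estimated, not the sublinear-space algorithm itself.

```python
from collections import Counter

def frequency_moment(stream, k):
    """Exact k-th frequency moment: F_k = sum over items of (frequency ** k).
    This uses linear space; the point of the paper is to estimate F_k in
    sublinear space in one pass."""
    counts = Counter(stream)
    return sum(f ** k for f in counts.values())

print(frequency_moment(["a", "b", "a", "c", "a", "b"], 3))  # 3^3 + 2^3 + 1^3 = 36
```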
Abstract:
Fluctuating light intensity had a greater effect on the growth of gametophytes of transgenic Laminaria japonica in a 2500 ml bubble-column bioreactor than constant light intensity. A fluctuating light intensity between 10 and 110 μE m⁻² s⁻¹, with a 14 h:10 h light:dark photoperiod, was the best regime for growth, giving 1430 mg biomass l⁻¹.
Abstract:
In the present study, a method based on the transmission-line model for a porous electrode was used to measure the ionic resistance of the anode catalyst layer under in situ fuel cell operating conditions. The influence of Nafion content and catalyst loading in the anode catalyst layer on methanol electro-oxidation and direct methanol fuel cell (DMFC) performance, based on unsupported Pt-Ru black, was investigated using the AC impedance method. The optimal Nafion content was found to be 15 wt% at 75 °C. The optimal Pt-Ru loading depends on the operating temperature: about 2.0 mg/cm² for 75-90 °C and 3.0 mg/cm² for 50 °C. Above these values, cell performance decreased due to increases in ohmic and mass-transfer resistances. The peak power density obtained was 217 mW/cm² with the optimal catalyst and Nafion loadings at 75 °C using oxygen.
Abstract:
JCR impact factor 4.171 (2013), Q1, ranked 6/81 in Sport Sciences.
Abstract:
Gough, John; Belavkin, V. P.; Smolianov, O. G. (2005) 'Hamilton–Jacobi–Bellman equations for quantum optimal feedback control', Journal of Optics B: Quantum and Semiclassical Optics 7, pp. S237–S244.
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load across the server hosts will result in optimal performance. We show that this conventional wisdom is less true when the task-size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more); the more heavy-tailed the workload, the greater the improvement. We evaluate the tradeoff between the improvement in slowdown and the increase in waiting time in a system using SITA-V, and show the conditions under which SITA-V is a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because its implementation is simple and requires no communication from the server hosts back to the task router.
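A minimal sketch of the size-interval idea is given below. The host count, size cutoffs, and load bookkeeping are illustrative assumptions, not the paper's exact policy; SITA-V additionally chooses the cutoffs so that the hosts serving small tasks run at lower load.

```python
# Illustrative SITA-style dispatcher: route tasks by size interval, with the
# smallest tasks going to the host kept at the lightest load.

size_cutoffs = [10.0, 100.0]          # boundaries between size intervals (assumed)
host_load = [0.0, 0.0, 0.0]           # accumulated work per host

def assign(task_size):
    """Pick the host whose size interval contains this task."""
    for host, cutoff in enumerate(size_cutoffs):
        if task_size <= cutoff:
            break
    else:
        host = len(size_cutoffs)      # the largest tasks go to the last host
    host_load[host] += task_size
    return host

for size in [2.0, 50.0, 500.0, 3.0]:
    print(size, "->", assign(size))
print(host_load)
```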
Abstract:
Most real-time scheduling problems are known to be NP-complete. To enable accurate comparison between the schedules of heuristic algorithms and the optimal schedule, we introduce an omniscient oracle. This oracle provides schedules for periodic task sets with harmonic periods and variable resource requirements. Three different job value functions are described and implemented. Each corresponds to a different system goal. The oracle is used to examine the performance of different on-line schedulers under varying loads, including overload. We have compared the oracle against Rate Monotonic Scheduling, Statistical Rate Monotonic Scheduling, and Slack Stealing Job Admission Control Scheduling. Consistently, the oracle provides an upper bound on performance for the metric under consideration.
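As a small aside on the harmonic-period setting the oracle targets: for periodic task sets with harmonic periods and deadlines equal to periods, rate-monotonic scheduling is feasible exactly when total utilization is at most 1. The sketch below checks that condition; the task tuple format is an assumption for illustration.

```python
def rm_feasible_harmonic(tasks):
    """tasks: list of (execution_time, period) pairs with harmonic periods
    (each period divides every longer one) and deadlines equal to periods.
    Under rate-monotonic priorities such a set is schedulable iff total
    utilization is at most 1."""
    return sum(c / p for c, p in tasks) <= 1.0

print(rm_feasible_harmonic([(1, 4), (2, 8), (4, 16)]))   # 0.25+0.25+0.25 -> True
print(rm_feasible_harmonic([(3, 4), (2, 8), (4, 16)]))   # 0.75+0.25+0.25 -> False
```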
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
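The basic merging arithmetic behind content insertion can be sketched as a toy calculation (the function and its names are assumptions, not the paper's polynomial-time algorithm): if a trailing user lags the leading stream by delta seconds, inserting delta seconds of secondary content into the leading stream lets the trailing stream catch up, after which the two can share a single channel.

```python
def seconds_to_merge(lead_position, trail_position):
    """Toy calculation: how much secondary content (in seconds) must be
    inserted into the leading stream so the trailing stream catches up."""
    skew = lead_position - trail_position
    return max(skew, 0.0)

# Leading user is 90 s into the movie, trailing user is 60 s in:
print(seconds_to_merge(90.0, 60.0))  # 30.0 seconds of secondary content
```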
Abstract:
Hidden State Shape Models (HSSMs) [2], a variant of Hidden Markov Models (HMMs) [9], were proposed to detect shape classes of variable structure in cluttered images. In this paper, we formulate a probabilistic framework for HSSMs which provides two major improvements over the previous method [2]. First, while the method in [2] required the scale of the object to be passed as an input, the method proposed here estimates the scale of the object automatically. This is achieved by introducing a new term for the observation probability that is based on an object-clutter feature model. Second, a segmental HMM [6, 8] is applied to model the "duration probability" of each HMM state, which is learned from the shape statistics in a training set and helps obtain meaningful registration results. Using a segmental HMM provides a principled way to model dependencies between the scales of different parts of the object. In object localization experiments on a dataset of real hand images, the proposed method significantly outperforms the method of [2], reducing the incorrect localization rate from 40% to 15%. The improvement in accuracy becomes more significant if we consider that the method proposed here is scale-independent, whereas the method of [2] takes as input the scale of the object we want to localize.
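The role of the duration probability can be illustrated with a hedged scoring sketch (the factored form and names below are illustrative, not the paper's exact formulation): a segmental HMM scores a whole segment of observations assigned to one state and multiplies in a learned probability for the segment's length.

```python
import math

def segment_log_score(obs_log_probs, duration_log_prob):
    """Illustrative segmental-HMM term: log P(segment | state) =
    sum of per-observation log-probabilities + log P(duration | state)."""
    return sum(obs_log_probs) + duration_log_prob

# Three observations assigned to one state, plus a duration term for length 3:
score = segment_log_score([math.log(0.7), math.log(0.6), math.log(0.8)],
                          math.log(0.5))
print(score)
```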
Abstract:
It is a neural network truth universally acknowledged that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
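One way to read the contrast between the rules is sketched below (a hedged interpretation for illustration; the exact functional forms in the paper may differ): the product rule sends the path signal scaled by a multiplicative weight, while the threshold rule passes only the part of the signal exceeding a subtractive threshold.

```python
def product_rule(signal, weight):
    """Conventional synaptic transmission: signal scaled by a multiplicative weight."""
    return signal * weight

def threshold_rule(signal, threshold):
    """Subtractive-threshold transmission: only the portion of the signal
    above the threshold is passed on (illustrative form)."""
    return max(signal - threshold, 0.0)

print(product_rule(0.8, 0.5))    # 0.4
print(threshold_rule(0.8, 0.5))  # ~0.3 (floating point may print 0.30000000000000004)
```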
Abstract:
This paper demonstrates an optimal control solution to change of machine set-up scheduling based on dynamic programming average-cost-per-stage value iteration, as set forth by Caramanis et al. [2] for the 2D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of the state space, with neural networks employed to fit the cost surfaces.
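A minimal sketch of average-cost value iteration on a tiny discrete problem is shown below; the states, actions, costs, and transition probabilities are invented for the sketch and are not the machine set-up model from the paper.

```python
# Illustrative relative value iteration for an average-cost MDP on a tiny
# made-up state/action space.

states = [0, 1]
actions = [0, 1]
# cost[s][a] and prob[s][a][s'] are invented numbers for this sketch.
cost = {0: {0: 1.0, 1: 2.0}, 1: {0: 3.0, 1: 0.5}}
prob = {0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
        1: {0: [0.5, 0.5], 1: [0.1, 0.9]}}

h = {s: 0.0 for s in states}              # relative value function
for _ in range(200):
    new_h = {s: min(cost[s][a] + sum(prob[s][a][sp] * h[sp] for sp in states)
                    for a in actions)
             for s in states}
    gain = new_h[states[0]]               # estimate of the average cost per stage
    h = {s: v - gain for s, v in new_h.items()}   # renormalize against state 0

print("average cost per stage ~", round(gain, 3))
print("relative values:", h)
```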
Abstract:
Genetic Algorithms (GAs) make use of an internal representation of a given system in order to perform optimization functions. The actual structural layout of this representation, called a genome, has a crucial impact on the outcome of the optimization process. The purpose of this paper is to study the effects of different internal representations in a GA that generates neural networks. A second GA was used to optimize the genome structure; the optimized structure produces an optimized system within a shorter time interval.
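A bare-bones GA loop is sketched below for orientation. The bit-string genome, placeholder fitness function, and operators are assumptions made for the sketch; the paper's point is that the structural layout of the genome itself can be optimized by a second GA.

```python
import random

def fitness(genome):
    """Placeholder fitness: count of 1-bits (a stand-in for evaluating a
    neural network generated from the genome)."""
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(fitness(evolve()))
```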
Experimental quantification and modelling of attrition of infant formulae during pneumatic conveying
Abstract:
Infant formula is often produced as an agglomerated powder using a spray drying process. Pneumatic conveying is commonly used for transporting this product within a manufacturing plant. The transient mechanical loads imposed by this process cause some of the agglomerates to disintegrate, which has implications for key quality characteristics of the formula, including bulk density and wettability. This thesis used both experimental and modelling approaches to investigate this breakage during conveying. One set of conveying trials had the objective of establishing relationships between the geometry and operating conditions of the conveying system and the resulting changes in bulk properties of the infant formula upon conveying. A modular stainless steel pneumatic conveying rig was constructed for these trials. The mode of conveying and the air velocity had a statistically significant effect on bulk density at the 95% level, while the mode of conveying was the only factor that significantly influenced D[4,3] or wettability. A separate set of conveying experiments investigated the effect of infant formula composition, rather than the pneumatic conveying parameters, and also assessed the relationships between the mechanical responses of individual agglomerates of four infant formulae and their compositions. The bulk densities before conveying, and the forces and strains at failure of individual agglomerates, were related to the protein content. The force at failure and the stiffness of individual agglomerates were strongly correlated, and generally increased with increasing protein-to-fat ratio, while the strain at failure decreased. Two models of breakage were developed at different scales; the first was a detailed discrete element model of a single agglomerate. This was calibrated using a novel approach based on Taguchi methods, which was shown to have considerable advantages over the basic parameter studies that are widely used. The data obtained using this model compared well to experimental results for quasi-static uniaxial compression of individual agglomerates. The model also gave adequate results for dynamic loading simulations. A probabilistic model of pneumatic conveying was also developed; this was suitable for predicting breakage in large populations of agglomerates and was highly versatile: parts of the model could easily be substituted by the researcher according to their specific requirements.
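As a purely illustrative companion to the population-scale model described above (the per-bend breakage probability and bend count below are invented assumptions, not the thesis's calibrated model), a breakage fraction over a conveying line can be sketched as a simple probabilistic calculation:

```python
def surviving_fraction(p_break_per_bend, n_bends):
    """Toy population model: if each agglomerate independently survives a
    bend with probability (1 - p), the intact fraction after n bends is
    (1 - p) ** n. The thesis's probabilistic model is more detailed."""
    return (1.0 - p_break_per_bend) ** n_bends

print(surviving_fraction(0.08, 5))  # ~0.66 of agglomerates survive 5 bends
```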
Abstract:
The performance of an RF output matching network depends on the integrity of the ground connection. If this connection is compromised in any way, additional parasitic elements may arise that degrade performance and yield unreliable results. Traditionally, designers measure continuous wave (CW) power to determine that the RF chain is performing optimally and that the device is properly matched and, by implication, grounded. It is shown that there are situations where modulation quality can be compromised by poor grounding that is not apparent from CW power measurements alone. The consequence of this is reduced throughput, range, and reliability. Measurements are presented on a Tyndall Mote using a CC2420 RFIC to demonstrate how poor solder contact between the ground contacts and the ground layer of the PCB can lead to the degradation of modulated performance. A detailed evaluation, which required the development of a new measurement definition for 802.15.4, and analysis are presented to show how waveform quality is affected while the modulated output power remains within acceptable limits.
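To make "waveform quality" concrete, the sketch below computes a basic error vector magnitude (EVM) between ideal and measured constellation symbols; this is a generic textbook definition with made-up constellation points, not the new 802.15.4 measurement definition developed in the paper.

```python
import math

def evm_percent(ideal, measured):
    """RMS error vector magnitude, as a percentage of the RMS ideal symbol
    magnitude (generic definition, for illustration only)."""
    err = sum(abs(m - i) ** 2 for i, m in zip(ideal, measured))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100.0 * math.sqrt(err / ref)

ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]              # ideal QPSK points
measured = [0.95 + 1.02j, -1.05 + 0.9j, -0.98 - 1.1j, 1.1 - 0.95j]
print(round(evm_percent(ideal, measured), 1), "%")
```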