989 results for Optimal formulation
Abstract:
Marine sponge cell culture is a potential route for the sustainable production of sponge-derived bioproducts. Development of a basal culture medium is a prerequisite for the attachment, spreading, and growth of sponge cells in vitro. Given the limited knowledge of the nutrient requirements of sponge cells, a series of statistical experimental designs was employed to screen and optimize the critical nutrient components, including inorganic salts (ferric ion, zinc ion, silicate, and NaCl), amino acids (glycine, glutamine, and aspartic acid), sugars (glucose, sorbitol, and sodium pyruvate), vitamin C, and mammalian cell media (DMEM and RPMI 1640), using the MTT assay in 96-well plates. The marine sponge Hymeniacidon perleve was used as a model system. A Plackett-Burman design was used for the initial screening, which identified ferric ion, NaCl, and vitamin C as the significant factors. These three factors were then optimized further by Uniform Design and Response Surface Methodology (RSM). A basal medium was finally established that supported a more than 100% increase in the viability of sponge cells.
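As a rough illustration of the screening step, the sketch below (Python, with placeholder response data) builds the standard 12-run Plackett-Burman design matrix and estimates main effects for five of the medium components named above; the factor labels and MTT readings are assumptions for demonstration only.

```python
import numpy as np

# Minimal sketch of Plackett-Burman screening (illustrative; not the
# authors' exact design or data). The 12-run PB design handles up to 11
# two-level factors; five columns are labelled with medium components
# from the abstract and the rest act as dummy columns.
first_row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
design = np.vstack([np.roll(first_row, i) for i in range(11)]
                   + [-np.ones(11, dtype=int)])        # 12 runs x 11 columns

factors = ["Fe3+", "Zn2+", "silicate", "NaCl", "vitamin C"]
response = np.random.default_rng(0).random(12)         # stand-in MTT readings

# Main effect of each factor: mean response at high level minus low level.
for j, name in enumerate(factors):
    effect = (response[design[:, j] == 1].mean()
              - response[design[:, j] == -1].mean())
    print(f"{name}: effect = {effect:+.3f}")
```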
Abstract:
Fluctuating light intensity had a more significant impact on the growth of gametophytes of transgenic Laminaria japonica in a 2500 ml bubble-column bioreactor than constant light intensity. A light intensity fluctuating between 10 and 110 µE m⁻² s⁻¹, with a 14 h:10 h light:dark photoperiod, was the best regime for growth, giving 1430 mg biomass l⁻¹.
Abstract:
In the present study, a method based on the transmission-line model for a porous electrode was used to measure the ionic resistance of the anode catalyst layer under in situ fuel cell operating conditions. The influence of Nafion content and catalyst loading in the anode catalyst layer on methanol electro-oxidation and direct methanol fuel cell (DMFC) performance, based on unsupported Pt-Ru black, was investigated using the AC impedance method. The optimal Nafion content was found to be 15 wt% at 75 °C. The optimal Pt-Ru loading depends on the operating temperature: about 2.0 mg/cm² for 75-90 °C and 3.0 mg/cm² for 50 °C. Above these values, cell performance decreased due to increases in ohmic and mass-transfer resistances. With optimal catalyst and Nafion loadings at 75 °C using oxygen, the peak power density obtained was 217 mW/cm².
Abstract:
Gough, John; Belavkin, V.P.; Smolianov, O.G. (2005) 'Hamilton–Jacobi–Bellman equations for quantum optimal feedback control', Journal of Optics B: Quantum and Semiclassical Optics 7, pp. S237–S244.
Abstract:
Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology, and for this an accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant, comparing the model's heat loss estimates with real measurements in a specialized testing laboratory. The study focuses on implementing in the model a physically meaningful and widely valid formulation of the absorber's total emissivity as a function of surface temperature. For this purpose, the spectral emissivities of several absorber samples are measured and, from these data, the absorber's total emissivity curve is obtained by weighting with the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model, and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface can be complex and is sample-destructive, a new methodology for characterizing the absorber's emissivity is proposed. This methodology provides an estimate of the absorber's total emissivity, retaining its physical meaning and wide validity according to the Planck function, with no need for direct spectral measurements. The proposed method is also successfully validated, and the results are presented in this paper.
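For readers implementing the Planck-weighted formulation, a minimal sketch follows: total emissivity is computed as the Planck-function-weighted average of a spectral emissivity curve. The spectral data and temperatures below are placeholders, not the paper's measurements.

```python
import numpy as np

# Sketch: total emissivity as the Planck-weighted average of measured
# spectral emissivity. The emissivity curve here is a made-up placeholder.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lam = np.linspace(1e-6, 30e-6, 2000)             # 1-30 um wavelength grid
eps_spectral = np.clip(0.95 - 0.02 * (lam * 1e6), 0.1, 1.0)  # illustrative

def total_emissivity(T):
    """Planck-weighted average of spectral emissivity at temperature T."""
    b = planck(lam, T)
    return np.sum(eps_spectral * b) / np.sum(b)  # uniform grid: dλ cancels

for T in (450, 550, 650):                        # example absorber temps (K)
    print(f"T = {T} K: eps_total = {total_emissivity(T):.3f}")
```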
Abstract:
Resource Allocation Problems (RAPs) are concerned with the optimal allocation of resources to tasks. Problems in fields such as search theory, statistics, finance, economics, logistics, and sensor and wireless networks fit this formulation. Several centralized/synchronous algorithms have been proposed in the literature, including the recently proposed auction algorithm, RAP Auction. Here we present an asynchronous implementation of RAP Auction for distributed RAPs.
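The abstract does not detail RAP Auction itself; as a reference point, here is a compact sketch of the classic synchronous auction algorithm for the assignment problem (Bertsekas' scheme, not the paper's algorithm), with a hypothetical benefit matrix.

```python
import numpy as np

def auction(benefit, eps=0.01):
    """Assign each bidder one task, maximizing total benefit (eps-optimal)."""
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)          # owner[j]: bidder holding task j
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - prices        # net value of each task to bidder i
        j = int(np.argmax(values))
        second = np.partition(values, -2)[-2]
        prices[j] += values[j] - second + eps   # bid raises the price
        if owner[j] != -1:
            unassigned.append(owner[j])     # previous owner is outbid
        owner[j] = i
    return owner, prices

benefit = np.random.default_rng(0).random((4, 4))   # hypothetical benefits
owner, prices = auction(benefit)
print("task -> bidder:", owner)
```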
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time and is optimal with respect to total bandwidth usage over the merging interval. We present constraints on content insertion that keep the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy them. We report simulation results that quantify the substantial gains from content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
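As a toy illustration of bridging temporal skew with secondary content, the sketch below solves a coin-change-style dynamic program that covers a given skew with the fewest inserted clips; the paper's optimal scheduler and its QoS constraints are considerably richer, so this is a simplified stand-in.

```python
# Toy sketch: bridge the temporal skew between a leading and a trailing
# stream by inserting secondary clips into the leading stream, using the
# fewest insertions (simple DP; not the paper's algorithm).
def min_insertions(skew, clip_lengths):
    """Fewest clips whose total duration exactly covers the skew (seconds)."""
    INF = float("inf")
    best = [0] + [INF] * skew
    for t in range(1, skew + 1):
        for c in clip_lengths:
            if c <= t and best[t - c] + 1 < best[t]:
                best[t] = best[t - c] + 1
    return best[skew] if best[skew] < INF else None

# Example: 90 s skew, secondary clips of 15 s and 30 s available.
print(min_insertions(90, [15, 30]))   # -> 3 (three 30 s clips)
```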
Abstract:
In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop counts, and nodes have potentially unbounded degrees. In practice, however, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on an asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
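To make the "best response" concrete, the following sketch brute-forces a selfish node's optimal wiring on a small random digraph: it tries every set of k out-neighbors and keeps the one minimizing the weighted sum of access costs. The paper solves this as an asymmetric k-median problem rather than by enumeration; the graph and node weights here are synthetic.

```python
import itertools
import networkx as nx

def best_response(G, v, k, weights):
    """Brute-force the k out-neighbors of v minimizing weighted access cost."""
    others = [u for u in G.nodes if u != v]
    best_set, best_cost = None, float("inf")
    for neighbors in itertools.combinations(others, k):
        H = G.copy()
        H.remove_edges_from(list(G.out_edges(v)))   # rewire v from scratch
        H.add_edges_from((v, u) for u in neighbors)
        dist = nx.single_source_dijkstra_path_length(H, v)
        cost = sum(weights[u] * dist.get(u, float("inf")) for u in others)
        if cost < best_cost:
            best_set, best_cost = set(neighbors), cost
    return best_set, best_cost

G = nx.gnp_random_graph(8, 0.4, seed=1, directed=True)
weights = {u: 1.0 for u in G.nodes}                  # uniform node preference
print(best_response(G, 0, 2, weights))
```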
Abstract:
Hidden State Shape Models (HSSMs) [2], a variant of Hidden Markov Models (HMMs) [9], were proposed to detect shape classes of variable structure in cluttered images. In this paper, we formulate a probabilistic framework for HSSMs which provides two major improvements over the previous method [2]. First, while the method in [2] required the scale of the object to be passed as an input, the method proposed here estimates the scale of the object automatically. This is achieved by introducing a new term for the observation probability that is based on an object-clutter feature model. Second, a segmental HMM [6, 8] is applied to model the "duration probability" of each HMM state, which is learned from the shape statistics in a training set and helps to obtain meaningful registration results. Using a segmental HMM provides a principled way to model dependencies between the scales of different parts of the object. In object localization experiments on a dataset of real hand images, the proposed method significantly outperforms the method of [2], reducing the incorrect localization rate from 40% to 15%. The improvement in accuracy becomes even more significant if we consider that the method proposed here is scale-independent, whereas the method of [2] takes as input the scale of the object to be localized.
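A minimal sketch of the duration-probability idea, assuming Gaussian duration models fitted to training segment lengths; the state names and lengths below are hypothetical, and the paper's observation model is richer.

```python
import numpy as np
from scipy.stats import norm

def fit_duration_models(training_lengths):
    """training_lengths: {state: list of segment lengths seen in training}."""
    return {s: (np.mean(v), np.std(v) + 1e-6)   # (mu, sigma) per state
            for s, v in training_lengths.items()}

def duration_logprob(models, state, length):
    """Log duration probability under the state's Gaussian model."""
    mu, sigma = models[state]
    return norm.logpdf(length, mu, sigma)

# Hypothetical states and training segment lengths (in contour points).
models = fit_duration_models({"finger": [12, 14, 13, 15], "palm": [30, 28, 33]})
print(duration_logprob(models, "finger", 13))   # high: typical length
print(duration_logprob(models, "finger", 25))   # low: implausible duration
```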
Abstract:
It is a neural network truth universally acknowledged that the signal transmitted to a target node must equal the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
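A toy sketch of the three transmission rules and the atrophy-due-to-disuse update described above; the capacity-rule form and the disuse term (taken as one minus target activity) are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def transmit(S, w, t, rule):
    """Signal delivered to the target node under three candidate rules."""
    if rule == "product":
        return S * w                        # classic multiplicative weight
    if rule == "capacity":
        return np.minimum(S, w)             # signal limited by path capacity
    if rule == "threshold":
        return np.maximum(S - t, 0.0)       # subtractive threshold LTM unit

def atrophy_update(signal, x_target, w, lr=0.1):
    """Weight decreases in joint proportion to the transmitted signal and
    the target node's degree of disuse (assumed here to be 1 - activity)."""
    return w - lr * signal * (1.0 - x_target)

S, w, t, x = 0.8, 0.5, 0.3, 0.2             # illustrative values
for rule in ("product", "capacity", "threshold"):
    sig = transmit(S, w, t, rule)
    print(rule, float(sig), float(atrophy_update(sig, x, w)))
```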
Abstract:
This paper demonstrates an optimal control solution to change-of-machine-set-up scheduling based on dynamic programming average-cost-per-stage value iteration, as set forth by Caramanis et al. [2] for the 2D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of state space, with neural networks employed to fit cost surfaces.
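For reference, a minimal sketch of relative (average-cost-per-stage) value iteration, the dynamic-programming scheme the paper builds on; the two-action, three-state model below is a stand-in, not the machine set-up problem itself.

```python
import numpy as np

def relative_value_iteration(P, g, n_iter=500):
    """P[a][s, s']: transition matrix per action; g[a][s]: stage cost."""
    n = P[0].shape[0]
    h = np.zeros(n)                          # differential cost vector
    lam = 0.0
    for _ in range(n_iter):
        Th = np.min([g[a] + P[a] @ h for a in range(len(P))], axis=0)
        lam = Th[0]                          # average-cost estimate
        h = Th - lam                         # keep h bounded (relative VI)
    policy = np.argmin([g[a] + P[a] @ h for a in range(len(P))], axis=0)
    return lam, h, policy

# Two actions ("keep set-up" / "change set-up") over three machine states.
P = [np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.9]]),
     np.array([[0.2, 0.8, 0.0], [0.0, 0.3, 0.7], [0.6, 0.0, 0.4]])]
g = [np.array([1.0, 2.0, 4.0]), np.array([3.0, 3.5, 2.0])]
print(relative_value_iteration(P, g))
```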
Abstract:
Genetic Algorithms (GAs) use an internal representation of a given system in order to perform optimization. The structural layout of this representation, called a genome, has a crucial impact on the outcome of the optimization process. The purpose of this paper is to study the effects of different internal representations in a GA that generates neural networks. A second GA was used to optimize the genome structure; the optimized structure produces an optimized system within a shorter time.
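As a concrete example of one genome layout, the sketch below uses a direct encoding (network weights flattened into a real-valued vector) inside a bare-bones GA; the fitness function is a placeholder, and the meta-GA over genome structures studied in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(genome):
    return -np.sum(genome**2)                     # placeholder objective

def mutate(genome, rate=0.1):
    mask = rng.random(genome.size) < rate         # mutate ~10% of genes
    return genome + mask * rng.normal(0.0, 0.5, genome.size)

def crossover(a, b):
    point = int(rng.integers(1, a.size))          # one-point crossover
    return np.concatenate([a[:point], b[point:]])

pop = [rng.normal(0.0, 1.0, 20) for _ in range(30)]  # 30 genomes, 20 "weights"
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                            # truncation selection
    children = [mutate(crossover(parents[int(rng.integers(10))],
                                 parents[int(rng.integers(10))]))
                for _ in range(20)]
    pop = parents + children
pop.sort(key=fitness, reverse=True)
print("best fitness:", fitness(pop[0]))
```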
Abstract:
In this PhD study, mathematical modelling and optimisation of granola production has been carried out. Granola is an aggregated food product used in breakfast cereals and cereal bars. It is a baked, crispy food product typically incorporating oats, other cereals, and nuts bound together with a binder, such as honey, water, and oil, to form a structured unit aggregate. In this work, the design and operation of two parallel processes to produce aggregate granola products were investigated: i) a high shear mixing granulation stage (in a designated granulator) followed by drying/toasting in an oven; ii) a continuous fluidised bed followed by drying/toasting in an oven. In addition, the particle breakage during pneumatic conveying of granola produced by both the high shear granulator (HSG) and the fluidised bed granulator (FBG) process was examined. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three conveying rig configurations were employed: a straight pipe, a rig with two 45° bends, and one with a 90° bend. The least breakage occurred in the straight pipe and the most in the 90° bend pipe; breakage in the pipe with two 45° bends was lower than in the 90° bend configuration. In general, increasing the impact angle increases the degree of breakage. Additionally, of the granules produced in the HSG, those produced at 300 rpm have the lowest breakage rates and those produced at 150 rpm the highest. This effect clearly shows the importance of shear history (during granule production) on breakage rates during subsequent processing. For the FBG, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. A population balance model was developed to analyse the particle breakage occurring during pneumatic conveying. The population balance equations that govern this breakage process were solved by discretization, using the Markov chain method. This study found that increasing the air velocity (by increasing the air pressure to the rig) results in increased breakage among granola aggregates. Furthermore, the analysis shows that a greater degree of breakage of granola aggregates occurs as bend angle increases.
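A minimal sketch of the Markov-chain treatment of a discretized breakage population balance, as described above: a transition matrix moves granules between size classes at each conveying step. The selection probabilities and fragment redistribution below are illustrative, not fitted values, and the step is number-conserving for simplicity.

```python
import numpy as np

classes = np.array([2.0, 1.0, 0.5, 0.25])      # size classes (mm), coarse first
n = np.array([100.0, 50.0, 20.0, 5.0])         # granules per class

S = np.array([0.20, 0.10, 0.05, 0.0])          # breakage probability per step
# b[i, j]: fraction of fragments from class j that land in class i (i > j);
# each column sums to one so the step conserves granule count.
b = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.6, 0.0, 0.0, 0.0],
              [0.3, 0.7, 0.0, 0.0],
              [0.1, 0.3, 1.0, 0.0]])

T = np.diag(1.0 - S) + b @ np.diag(S)          # Markov transition matrix
for _ in range(5):                             # e.g. successive bend impacts
    n = T @ n
print(dict(zip(classes, np.round(n, 1))))
```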
Abstract:
The performance of an RF output matching network depends on the integrity of the ground connection. If this connection is compromised in any way, additional parasitic elements may arise that degrade performance and yield unreliable results. Traditionally, designers measure Continuous Wave (CW) power to determine that the RF chain is performing optimally and that the device is properly matched and, by implication, grounded. It is shown that there are situations where modulation quality can be compromised by poor grounding that is not apparent from CW power measurements alone. The consequence is reduced throughput, range, and reliability. Measurements are presented on a Tyndall Mote using a CC2420 RFIC to demonstrate how poor solder contact between the ground contacts and the ground layer of the PCB can lead to the degradation of modulated performance. A detailed evaluation, which required the development of a new measurement definition for 802.15.4, and accompanying analysis are presented to show how waveform quality is affected while the modulated output power remains within acceptable limits.
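As an illustration of a modulation-quality metric of this kind, the sketch below computes RMS error vector magnitude (EVM) on synthetic QPSK symbols and shows that average power can remain essentially unchanged while EVM degrades; a real 802.15.4 measurement would operate on the O-QPSK chip stream rather than this toy constellation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Ideal unit-power QPSK constellation points (hypothetical test signal).
ideal = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)

# Received symbols with a small distortion, e.g. from a degraded ground path.
received = ideal + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# RMS EVM: error vector power relative to reference signal power.
evm = np.sqrt(np.mean(np.abs(received - ideal) ** 2) / np.mean(np.abs(ideal) ** 2))
print(f"EVM = {100 * evm:.1f}%")

# Average powers are nearly identical, which is why a power-only (CW-style)
# check can pass while modulation quality is degraded.
print(np.mean(np.abs(ideal) ** 2), np.mean(np.abs(received) ** 2))
```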
Abstract:
Cream liqueurs manufactured by a one-step process, where alcohol was added before homogenisation, were more stable than those processed by a two-step process, which involved adding alcohol after homogenisation. Using the one-step process, it was possible to produce creaming-stable liqueurs with one pass through a homogeniser (27.6 MPa) equipped with "liquid whirl" valves. Test procedures to characterise cream liqueurs and to predict shelf life were studied in detail. A turbidity test proved simple, rapid, and sensitive for characterising particle size and homogenisation efficiency. Prediction of age thickening/gelation in cream liqueurs during incubation at 45 °C depended on the age of the sample when incubated; samples that gelled at 45 °C may not do so at ambient temperature. Commercial cream liqueurs were similar in gross chemical composition and, unlike experimentally produced liqueurs, did not exhibit age-gelation at either ambient or elevated temperatures. Solutions of commercial sodium caseinates from different sources varied in their calcium sensitivity. When incorporated into cream liqueurs, caseinates influenced the rate of viscosity increase, coalescence, and, possibly, gelation during incubated storage. Mild heat and alcohol treatment modified the properties of caseinate used to stabilise non-alcoholic emulsions, while the presence of alcohol in emulsions was important in preventing clustering of globules. The response to added trisodium citrate varied; in many cases, addition at the recommended level (0.18%) did not prevent gelation. Addition of small amounts of NaOH with 0.18% trisodium citrate before homogenisation was beneficial. The stage at which citrate was added during processing was critical to the degree of viscosity increase (as opposed to gelation) in the product during 45 °C incubation. The component responsible for age-gelation was present in the milk-solids-non-fat portion of the cream, and variations in the creams used were important in the age-gelation phenomenon. Results indicated that, in addition to possibly Ca²⁺, the micellar casein portion of the serum may play a role in gelation. The role of the low-molecular-weight surfactants sodium stearoyl lactylate and mono/diglycerides in preventing gelation was influenced by the presence of trisodium citrate. Clustering of fat globules and age-gelation were inhibited when 0.18% citrate was included. Inclusion of sodium stearoyl lactylate, but not mono/diglycerides, reduced the extent of viscosity increase at 45 °C in citrate-containing liqueurs.