959 results for Feature scale simulation
Abstract:
Thanks to recent advances in molecular biology, combined with an ever-increasing amount of experimental data, the functional states of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification, comprising three main aspects: (1) Artificial Gene Network (AGN) generation through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are examined in order to identify a relevant subset of predictors; and (3) validation of the identified network through comparison with the original AGN. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data, and the results of the network identification method can then be compared with the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increased. Signal size was important for the accuracy of the identification: the method presented very good results even with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, proved adequate for validating inferred networks by revealing properties of the evaluated method, and it can be extended to other inference methods.
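To make step (2) concrete, the sketch below implements one common flavor of such a feature-selection scheme: expression profiles are assumed to be discretized, and the predictor subset for a fixed target gene is chosen by exhaustive search over small subsets, minimizing the mean conditional entropy of the target's next state given the predictors' current states. The criterion, the discretization and the subset-size cap are our assumptions; the abstract does not specify them.

    import itertools
    import numpy as np

    def conditional_entropy(target, predictors):
        """Mean conditional entropy H(target | predictors), in bits."""
        n = len(target)
        h = 0.0
        for key in {tuple(row) for row in predictors}:
            idx = [i for i, row in enumerate(predictors) if tuple(row) == key]
            _, counts = np.unique(target[idx], return_counts=True)
            p = counts / counts.sum()
            h -= (len(idx) / n) * (p * np.log2(p)).sum()
        return h

    def infer_predictors(expr, target_gene, max_set_size=2):
        """Exhaustively pick the small predictor subset that best explains
        the target gene's next state from the other genes' current states.
        expr is a (genes x time points) array of discretized expression."""
        candidates = [g for g in range(expr.shape[0]) if g != target_gene]
        target = expr[target_gene, 1:]              # states at time t+1
        best = (np.inf, ())
        for size in range(1, max_set_size + 1):
            for subset in itertools.combinations(candidates, size):
                preds = expr[list(subset), :-1].T   # states at time t
                best = min(best, (conditional_entropy(target, preds), subset))
        return best                                 # (criterion, predictors)

Running this for every target gene and taking the union of the recovered (predictor, target) edges yields an inferred network that can be compared against the original AGN adjacency to estimate precision and recall, which is the comparison that step (3) calls for.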
Abstract:
Sugarcane bagasse was pretreated with dilute sulfuric acid to obtain sugarcane bagasse hemicellulosic hydrolysate (SBHH). Experiments were conducted in laboratory and semi-pilot reactors to optimize the xylose recovery and to reduce the generation of sugar degradation products such as furfural and 5-hydroxymethylfurfural (HMF). The hydrolysis scale-up procedure was based on the H-Factor, which combines temperature and residence time and employs the Arrhenius equation, with a sulfuric acid concentration of 100 mg(acid)/g(dm) and an activation energy of 109 kJ/mol. This procedure allowed the mathematical estimation of the results through simulation of the conditions prevailing in reactors with different designs. The SBHH obtained from different reactors under the same H-Factor of 5.45 +/- 0.15 reached similar xylose yields (approximately 74%) and low concentrations of sugar degradation products, namely furfural (0.082 g/L) and HMF (0.0071 g/L). The most abundant lignin degradation products (phenolic compounds) were p-coumaric acid (0.15 g/L), followed by ferulic acid (0.12 g/L) and gallic acid (0.035 g/L). The highest ion concentrations were those of S (3433.6 mg/L), Fe (554.4 mg/L) and K (103.9 mg/L). The H-Factor could be used without dramatically altering the xylose and HMF/furfural levels. Therefore, we could assume that the H-Factor was directly useful in the scale-up of hemicellulosic hydrolysate production. (C) 2009 Published by Elsevier Ltd.
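The H-Factor folds a whole time-temperature profile into a single number by integrating the Arrhenius-relative reaction rate over the cook, so two reactors with different heating profiles but the same H-Factor should give comparable hydrolysates, which is the scale-up claim above. A minimal sketch follows; only the 109 kJ/mol activation energy comes from the abstract, while the reference temperature and the illustrative profile are our assumptions.

    import numpy as np

    R = 8.314        # J/(mol K), gas constant
    EA = 109e3       # J/mol, activation energy reported in the abstract
    T_REF = 373.15   # K, assumed reference temperature (100 C)

    def h_factor(t_s, temps_k):
        """Integrate the reaction rate, relative to the rate at T_REF,
        over the time-temperature profile (trapezoidal rule)."""
        rel = np.exp((EA / R) * (1.0 / T_REF - 1.0 / temps_k))
        return float(np.sum(0.5 * (rel[1:] + rel[:-1]) * np.diff(t_s)))

    # Illustrative profile: 10 min ramp from 100 C to 150 C, then a hold.
    t = np.linspace(0.0, 3600.0, 600)                  # s
    T = 373.15 + 50.0 * np.clip(t / 600.0, 0.0, 1.0)   # K
    print(h_factor(t, T))

Matching this integral, rather than any single temperature or residence time, is what lets the laboratory and semi-pilot cooks be compared on an equal footing.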
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively little running time.
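The constraint elimination comes from the encoding itself: a radial network configuration is stored as a depth-first list of (node, depth) pairs, so any individual produced by the NDE operators is a tree by construction, and no radiality constraint equations need to be checked. A minimal sketch of building that encoding, under our reading of the NDE literature, is:

    def node_depth_list(adj, root):
        """Traverse a tree and return its node-depth encoding, a list of
        (node, depth) pairs. Radiality is implicit in the encoding, so no
        constraint equations are needed to keep a candidate feasible."""
        nd, stack, seen = [], [(root, 0)], set()
        while stack:
            node, depth = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            nd.append((node, depth))
            for nxt in adj[node]:
                if nxt not in seen:
                    stack.append((nxt, depth + 1))
        return nd

    # Small feeder rooted at substation bus 0.
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    print(node_depth_list(adj, 0))  # [(0, 0), (2, 1), (1, 1), (3, 2)]

Transferring a subtree between two such lists (the NDE variation operators) touches only the moved pairs, which is consistent with the sublinear running times reported above.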
Abstract:
A methodology for the identification and characterization of coherent structures, mostly known as clusters, is applied to hydrodynamic results of a numerical simulation of the riser of a circulating fluidized bed. The numerical simulation is performed using the MICEFLOW code, which includes IIT's two-fluid hydrodynamic model B. The methodology for cluster characterization is based on the determination of four characteristics: average lifetime, average volumetric solid fraction, existence time fraction and frequency of occurrence. The identification of clusters is performed by applying a criterion related to the time-averaged value of the volumetric solid fraction. A qualitative rather than quantitative analysis is performed, mainly owing to the unavailability of the operational data used in the considered experiments. Concerning the qualitative analysis, the simulation results are in good agreement with the literature. Some quantitative comparisons between predictions and experiment are also presented to emphasize the capability of the modeling procedure regarding the analysis of macroscopic-scale coherent structures. (c) 2007 Elsevier Inc. All rights reserved.
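The sketch below shows one common way to turn such a criterion into the four reported characteristics at a single probe point: a cluster is flagged whenever the instantaneous solid fraction exceeds its time average by a chosen margin (here a multiple of the standard deviation, a threshold used in this family of methods; the paper's exact margin is not given in the abstract), and the flagged runs are then reduced to lifetime, solid fraction, existence fraction and frequency.

    import numpy as np

    def cluster_stats(eps_s, dt, n_sigma=2.0):
        """Reduce a solids-fraction time series at one probe point to the
        four cluster characteristics. A cluster is flagged while eps_s
        exceeds its time average by n_sigma standard deviations."""
        inside = eps_s > eps_s.mean() + n_sigma * eps_s.std()
        edges = np.diff(inside.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        if inside[0]:
            starts = np.r_[0, starts]      # series begins inside a cluster
        if inside[-1]:
            ends = np.r_[ends, len(inside)]  # series ends inside a cluster
        durations = (ends - starts) * dt
        total_time = len(eps_s) * dt
        return {
            "average_lifetime": durations.mean() if len(durations) else 0.0,
            "average_solid_fraction": eps_s[inside].mean() if inside.any() else 0.0,
            "existence_time_fraction": float(inside.mean()),
            "frequency": len(durations) / total_time,
        }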
Abstract:
In an open channel, the transition from super- to sub-critical flow is a flow singularity (the hydraulic jump) characterised by a sharp rise in free-surface elevation, strong turbulence and air entrainment in the roller. A key feature of the hydraulic jump flow is the strong free-surface aeration and air-water flow turbulence. In the present study, similar experiments were conducted with identical inflow Froude numbers Fr1 using a geometric scaling ratio of 2:1. The results of the Froude-similar experiments showed some drastic scale effects in the smaller hydraulic jumps in terms of void fraction, bubble count rate and bubble chord time distributions. Void fraction distributions implied comparatively greater detrainment at low Reynolds numbers, yielding lesser aeration of the jump roller. The dimensionless bubble count rates were significantly lower in the smaller channel, especially in the mixing layer. The bubble chord time distributions were quantitatively close in both channels; that is, they did not scale according to the Froude similitude. In short, the hydraulic jump remains a fascinating two-phase flow motion that is still poorly understood.
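Under a Froude similitude the velocity scale follows the square root of the length scale, so a 2:1 geometric ratio fixes all the other ratios; in particular the Reynolds number scales with the 3/2 power of length, which is why the smaller jump sits at a markedly lower Reynolds number and shows the air-entrainment scale effects reported above. A quick tabulation:

    import math

    Lr = 2.0  # geometric scaling ratio (larger channel / smaller channel)
    ratios = {
        "length": Lr,
        "velocity": math.sqrt(Lr),          # Fr = V / sqrt(g*h) held constant
        "time": math.sqrt(Lr),
        "discharge_per_width": Lr ** 1.5,   # q = V * h
        "reynolds": Lr ** 1.5,              # Re = V * h / nu, same fluid
    }
    for name, value in ratios.items():
        print(f"{name:>20s}: x{value:.3f}")

With the same fluid in both channels, Froude and Reynolds similitude cannot be satisfied simultaneously, which is the root of the observed scale effects.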
Abstract:
A new method of poly-beta-hydroxybutyrate (PHB) extraction from recombinant E. coli is proposed, using homogenization and centrifugation coupled with sodium hypochlorite treatment. The size of PHB granules and cell debris in homogenates was characterised as a function of the number of homogenization passes. Simulation was used to develop the PHB and cell debris fractionation system, enabling numerical examination of the effects of repeated homogenization and centrifuge-feedrate variation. The simulation provided a good prediction of experimental performance. Sodium hypochlorite treatment was necessary to optimise PHB fractionation. A PHB recovery of 80% at a purity of 96.5% was obtained with the final optimised process. Protein and DNA contained in the resultant product were negligible. The developed process holds promise for significantly reducing the recovery cost associated with PHB manufacture.
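The centrifuge side of such a fractionation is often reasoned about with sigma theory: a particle is captured when its Stokes settling velocity exceeds a cutoff set by the feedrate and the machine's equivalent settling area Sigma, which is why varying the centrifuge feedrate tunes the PHB/debris split. A minimal sketch under that standard model (the capture rule and the numbers below are illustrative assumptions, not the paper's simulation):

    import numpy as np

    def stokes_velocity(d, rho_p, rho_f=1000.0, mu=1e-3):
        """Stokes settling velocity (m/s) for particle diameter d (m)."""
        return (rho_p - rho_f) * 9.81 * d**2 / (18.0 * mu)

    def captured_fraction(diams, rho_p, Q, sigma):
        """Sigma theory takes a particle as captured when its settling
        velocity exceeds the cutoff Q / (2*sigma), so lowering the
        feedrate Q lowers the cutoff and captures finer material."""
        return float(np.mean(stokes_velocity(diams, rho_p) >= Q / (2.0 * sigma)))

    # Illustrative: 0.5 um PHB granules (rho ~ 1200 kg/m3) at two feedrates,
    # with sigma = 1000 m2 for a small disc-stack machine.
    d = np.full(1000, 0.5e-6)
    for Q in (1e-4, 1e-5):  # m3/s
        print(Q, captured_fraction(d, 1200.0, Q, 1000.0))

The premise, in line with the abstract, is that repeated homogenization changes the granule and debris size distributions differently, so the simulation can pick the pass count and feedrate that best separate the two.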
Abstract:
A comprehensive probabilistic model for simulating dendrite morphology and investigating dendritic growth kinetics during solidification has been developed, based on a modified cellular automaton (mCA) for microscopic modeling of nucleation, growth of crystals and solute diffusion. The mCA model numerically calculates solute redistribution in both the solid and liquid phases, the curvature of dendrite tips and the growth anisotropy. The modeling takes account of thermal, curvature and solute diffusion effects; it can therefore simulate microstructure formation on the scale of the dendrite tip length. The model was then applied to simulating the dendritic solidification of an Al-7%Si alloy. Simulations of both directional and equiaxed dendritic growth have been performed to investigate the effects of growth anisotropy and cooling rate on dendrite morphology. Furthermore, the competitive growth and selection of dendritic crystals have also been investigated.
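To fix ideas, here is a deliberately stripped-down cellular-automaton growth loop: interface cells advance their solid fraction at a rate set by the local undercooling, and a fully solidified cell captures its neighbours as new interface cells. The solute diffusion, tip curvature and anisotropy terms that make the mCA model quantitative are all omitted; this is only the skeleton they are added to.

    import numpy as np

    def grow(fs, undercooling, steps=200, k_growth=0.05):
        """Toy CA growth loop: interface cells (liquid cells touching a
        solid cell) raise their solid fraction at a rate set by the local
        undercooling; on reaching fs = 1 they become solid and expose
        their von Neumann neighbours as new interface cells."""
        for _ in range(steps):
            solid = fs >= 1.0
            nbr = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0) |
                   np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
            interface = nbr & ~solid
            fs[interface] += k_growth * undercooling[interface]
            np.clip(fs, 0.0, 1.0, out=fs)
        return fs

    n = 101
    fs = np.zeros((n, n)); fs[n // 2, n // 2] = 1.0   # single nucleus
    dT = np.ones((n, n))                              # uniform undercooling
    fs = grow(fs, dT)
    print(fs.sum())   # solidified area after 200 steps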
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable to the design of fine grinding ball mills, or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. The procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady-state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
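The mill model at the heart of such a procedure is typically a population-balance description of breakage. The toy version below (a generic first-order batch grinder, not the specific model of this paper) shows the structure: a selection rate per size class and a breakage distribution that reallocates broken mass to finer classes. The scale-up step would then rescale the fitted rates using the scale-up criteria before simulating the full-size mill.

    import numpy as np

    def grind(mass, S, b, dt=1.0, steps=60):
        """First-order batch grinding: S[i] is the breakage rate of size
        class i (1/s); b[i, j] is the fraction of broken class-j mass
        reporting to finer class i (columns sum to 1, zero diagonal)."""
        for _ in range(steps):
            broken = S * mass * dt
            mass = mass - broken + b @ broken
        return mass

    n = 5
    S = np.linspace(0.5, 0.0, n)              # coarse classes break fastest
    b = np.zeros((n, n))
    for j in range(n - 1):
        b[j + 1:, j] = 1.0 / (n - 1 - j)      # even split into finer classes
    m = grind(np.array([1.0, 0, 0, 0, 0]), S, b)
    print(m.round(3), m.sum())                # mass is conserved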
Abstract:
A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. The procedure comprises two laboratory tests, which provide the data for determining the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady-state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit, consisting of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) with an 80% passing size of 80 μm. The circuit has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
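The recirculating load fixes the internal flows of the closed circuit: at steady state the hydrocyclone overflow equals the fresh feed, and the underflow returned to the mill is 1.73 times that product, so the mill itself processes 2.73 times the fresh feed rate. In code:

    # Closed-circuit mass balance implied by the worked example:
    new_feed = 1.0                        # fresh feed rate (normalised)
    circulating_load = 1.73               # 173% of the circuit product
    product = new_feed                    # steady state: product = fresh feed
    recycle = circulating_load * product  # hydrocyclone underflow to the mill
    mill_throughput = new_feed + recycle
    print(mill_throughput)                # 2.73x the fresh feed rate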
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. The procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. The procedure is applicable to the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. It contains no efficiency correction related to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's approach: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them, and quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
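The final, subspace-localisation step can be made concrete with a few lines of linear algebra: given a residual vector and one candidate feature matrix per error class, attribute the residual to the class whose column span explains it best. The observer that generates the residuals is the substantial part of the method and is omitted here; this sketch only illustrates the classification step, with hypothetical class names.

    import numpy as np

    def isolate(residual, feature_matrices):
        """Return the error class whose feature subspace leaves the
        smallest orthogonal leftover when the residual is projected
        onto it (least-squares projection per class)."""
        best_name, best_left = None, np.inf
        for name, F in feature_matrices.items():
            coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
            leftover = np.linalg.norm(residual - F @ coeffs)
            if leftover < best_left:
                best_name, best_left = name, leftover
        return best_name, best_left

    # Hypothetical example: two error classes in a 4-sample residual.
    F = {"growth_term": np.array([[1.0], [1.0], [0.0], [0.0]]),
         "decay_term":  np.array([[0.0], [0.0], [1.0], [1.0]])}
    r = np.array([0.9, 1.1, 0.05, -0.02])
    print(isolate(r, F))   # -> ('growth_term', small leftover)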
Abstract:
There is considerable anecdotal evidence from industry that poor wetting and liquid distribution can lead to broad granule size distributions in mixer granulators, and current scale-up scenarios lead to poor liquid distribution and a wider product size distribution. There are two issues to consider when scaling up: the size and nature of the spray zone, and the powder flow patterns as a function of granulator scale. Short, nucleation-only experiments in a 25 L PMA Fielder mixer using lactose powder with water and HPC solutions demonstrated the existence of different nucleation regimes depending on the dimensionless spray flux Psi(a), from drop-controlled nucleation to caking. In the drop-controlled regime, at low Psi(a) values, each drop forms a single nucleus and the nuclei distribution is controlled by the spray droplet size distribution. As Psi(a) increases, the distribution broadens rapidly as the droplets overlap and coalesce in the spray zone. The results are in excellent agreement with previous experiments and confirm that, for drop-controlled nucleation, Psi(a) should be less than 0.1. Granulator flow studies showed that there are two powder flow regimes: bumping and roping. The powder flow goes through a transition from bumping to roping as the impeller speed is increased. The roping regime gives good bed turnover and stable flow patterns, and is recommended for good liquid distribution and nucleation. Powder surface velocities as a function of impeller speed were measured using high-speed video equipment and MetaMorph image analysis software. Powder surface velocities were 0.2 to 1 m/s, an order of magnitude lower than the impeller tip speed. Assuming geometrically similar granulators, the impeller speed should be set to maintain a constant Froude number during scale-up, rather than constant tip speed, to ensure operation in the roping regime. (C) 2002 Published by Elsevier Science B.V.
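Both recommendations reduce to one-line formulas. The dimensionless spray flux is commonly written Psi(a) = 3*Vdot / (2*Adot*d_d), with Vdot the binder flowrate, Adot the powder surface flux through the spray zone and d_d the drop diameter; a constant Froude number Fr = N^2*D/g gives the impeller speed rule N2 = N1*sqrt(D1/D2). A quick sketch with illustrative numbers (none of which are from the paper):

    import math

    def spray_flux(V_dot, A_dot, d_drop):
        """Dimensionless spray flux Psi_a = 3*V_dot / (2*A_dot*d_drop)."""
        return 3.0 * V_dot / (2.0 * A_dot * d_drop)

    def scaled_impeller_speed(N1, D1, D2):
        """Constant Froude number Fr = N**2 * D / g: N2 = N1*sqrt(D1/D2)."""
        return N1 * math.sqrt(D1 / D2)

    # Illustrative numbers: 20 mL/min of binder in 120 um drops, powder
    # sweeping under the spray at 0.05 m2/s.
    psi = spray_flux(20e-6 / 60.0, 0.05, 120e-6)
    print(f"Psi_a = {psi:.3f} (drop-controlled when below 0.1)")

    # Doubling the bowl diameter at constant Froude number:
    print(scaled_impeller_speed(N1=500.0, D1=0.3, D2=0.6))  # rpm -> ~354 rpm

The constant-Froude rule lowers the tip speed more gently than a constant-speed rule would raise it, which is what keeps the larger granulator in the roping regime.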
Abstract:
Predictions of flow patterns in a 600 mm scale model SAG mill made using four classes of discrete element method (DEM) models are compared to experimental photographs. The accuracy of the various models is assessed using quantitative data on shoulder, toe and vortex center positions taken from ensembles of both experimental and simulation results. These detailed comparisons reveal the strengths and weaknesses of the various models for simulating mills and allow the effect of different modelling assumptions to be quantitatively evaluated. In particular, very close agreement is demonstrated between the full 3D model (including the end wall effects) and the experiments. It is also demonstrated that the traditional two-dimensional circular particle DEM model under-predicts the shoulder, toe and vortex center positions by around 10 degrees, as well as the power draw. The effects of particle shape and of the dimensionality of the model are also assessed, with particle shape predominantly affecting the shoulder position while the dimensionality of the model affects mainly the toe position. Crown Copyright (C) 2003 Published by Elsevier Science B.V. All rights reserved.
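Shoulder and toe angles of the charge can be extracted from photographs and from DEM particle positions in the same way; a simple version bins near-shell particles by angle and takes the two ends of the occupied arc. The sketch below does that (the conventions and thresholds are our choices, and it assumes the charge arc does not cross the +/-180-degree cut):

    import numpy as np

    def shoulder_toe(xy, mill_radius, shell_band=0.1, min_count=5):
        """Estimate shoulder and toe angles (degrees from the 3 o'clock
        position, counter-clockwise) from particle centres xy, using only
        particles within shell_band*R of the shell."""
        r = np.hypot(xy[:, 0], xy[:, 1])
        near = r > (1.0 - shell_band) * mill_radius
        theta = np.degrees(np.arctan2(xy[near, 1], xy[near, 0]))
        hist, edges = np.histogram(theta, bins=72, range=(-180, 180))
        centers = 0.5 * (edges[:-1] + edges[1:])
        occupied = centers[hist >= min_count]  # bins holding enough particles
        return occupied.max(), occupied.min()  # (shoulder, toe)

Averaging these angles over an ensemble of frames gives the kind of quantitative shoulder and toe data on which the comparison above relies.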