917 results for Statistical mixture-design optimization
Abstract:
Soil-rock mixture (S-RM) denotes an extremely inhomogeneous, loose geomaterial system containing a certain proportion of rock blocks. Formed since the Quaternary, it consists of rock blocks, fine-grained soil, and pores, occurs at engineering scale, and exhibits high strength. S-RM is widely distributed in nature, especially in southwest China, where the geotectonic background is complicated, fracture activity is well developed, and high-mountain, steep-gorge landforms are prominent; such complicated geologic bodies are particularly common in these areas. Because its two components, soil and rock blocks, differ extremely in physical and mechanical properties, S-RM behaves quite differently from ordinary soil or rock (rock mass). Defining S-RM clearly and studying it in depth are required by modern engineering construction and are a natural step in the development of rock and soil mechanics. Starting from the meso-structural characteristics of soil-rock mixtures, this dissertation systematically investigates their meso-structural mechanics, deformation and failure mechanisms, and the stability of S-RM slopes. In summary, it achieves the following innovative results and conclusions. Views on the concept of S-RM and its classification system vary. Drawing on a large number of field tests, the dissertation systematizes the concept and classification of S-RM and formally proposes the concept of S-RM meso-structural mechanics, laying a foundation for further study. With the rapid development of computer technology and digital image processing theory, digital image processing has been successfully applied in many fields and provides reliable technical support for the quantitative description of the structural characteristics of S-RM.
Based on digital image processing technology, the dissertation systematically proposes and develops a quantitative analysis method and quantitative indices for the meso-structure of S-RM. The results indicate that meso-structural features such as the internal soil-rock granularity composition, block shape, and block orientation show clear self-organization at the macroscopic statistical level. The dissertation then systematically investigates the physical and mechanical properties and the deformation and failure mechanisms of S-RM through large-scale field tests. It proposes a field test for underwater S-RM and derives a 3D data analysis method for the in-situ horizontal push-shear test. The results indicate that S-RM exhibits significant shear dilatancy, which becomes more pronounced as the rock-block proportion increases or the confining pressure decreases. The rock-block proportion strongly affects the strength of S-RM; in particular, the spatial position of comparatively large particles strongly affects the shape and location of the sample's shear zone. The dissertation also improves the single-ring infiltration test equipment and applies it to the permeability of S-RM. The results indicate that as the rock-block content increases it becomes harder for the soil to fill the voids between blocks, so the void ratio rises and the permeability coefficient increases. Finally, the dissertation builds a real meso-structural model of S-RM based on digital image processing. Using geometric reconstruction, it converts the structural model represented by a binary image into CAD format, which makes it possible to apply existing finite element analysis software to numerical experiments.
This realizes, systematically, the leap from image to geometric model to meso-structural mechanics numerical experiment. Using this method, the dissertation carries out large-scale numerical direct-shear tests on S-RM sections and, from the mesoscopic perspective, reveals three propagation modes of the S-RM shear failure plane. Based on the real meso-structural model, numerical simulation is also used to study the character and mechanics of seepage failure in S-RM. In parallel, a real structural model of a slope is built from the analysis of an S-RM slope cross-section, and the strength reduction method is applied to study slope stability, with good results. A three-dimensional geometric reconstruction technique for rock blocks is proposed, providing technical support for reconstructing 3D meso-structural models of S-RM. For the first time, the dissertation builds stochastic structure models of two-dimensional polygons and three-dimensional polyhedra based on Monte Carlo stochastic simulation. This breaks with traditional research restricted to the random generation of regular polygons, and the accompanying software system (R-SRM2D/3D) has proved valuable for the meso-structural mechanics of S-RM. Using the R-SRM system to randomly generate S-RM meso-structural models with different meso-structural characteristics, the dissertation performs a series of numerical biaxial and true-triaxial tests, systematically analyzing the meso-scale failure mechanism and the effects of meso-structural characteristics, such as stone content, size composition, and block orientation, on macroscopic mechanical behavior and macroscopic permeability. It then proposes upper- and lower-bound expressions for the macroscopic permeability coefficient of inhomogeneous geomaterials such as S-RM.
Using the strength reduction FEM, the dissertation studies the stability of slope models built from randomly generated S-RM structures. The results indicate that, in general, the stability coefficient of an S-RM slope increases with stone content; at the same stone content, the stability coefficient varies with size composition, and the spatial position of large blocks inside the slope strongly affects stability. It is therefore suggested that meso-structural characteristics, especially the spatial position of large blocks, be considered when analyzing the stability of such slopes and designing reinforcement. Taking the Xiazanri S-RM slope as an example, the dissertation proposes a fine modeling method for complicated geologic bodies based on reverse engineering, together with a generation method for FLAC3D models, resolving the bottleneck of building fine three-dimensional structural models of geological bodies. Using FLAC3D, it studies the seepage and displacement fields of the Xiazanri slope during reservoir water level rise and drawdown, and uses the strength reduction method to analyze three-dimensional stability during these processes. The results indicate that slope stability first declines as the reservoir water level rises and then rebounds; sudden drawdown of the reservoir level strongly affects slope stability, and this effect grows with the drawdown amplitude. Finally, based on rock-block size analysis and R-SRM2D, a stochastic structure model of the Xiazanri slope is built and its stability analyzed by the strength reduction method; the results show that the stability factor increases significantly once the blocks are taken into account.
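The dissertation's R-SRM generator is not specified here; as a hedged illustration of the Monte Carlo idea only, the sketch below (the star-shaped-polygon block model, all function names, and all parameters are illustrative assumptions, not the dissertation's algorithm) samples random polygonal "blocks" until a target areal stone content is reached:

```python
import math
import random

def random_block(cx, cy, r_mean, n_vertices=8):
    """A random simple (star-shaped) polygonal 'rock block': vertices at
    sorted angles around a centre, with radii jittered around r_mean."""
    angles = sorted(random.uniform(0, 2 * math.pi) for _ in range(n_vertices))
    return [(cx + random.uniform(0.5, 1.0) * r_mean * math.cos(a),
             cy + random.uniform(0.5, 1.0) * r_mean * math.sin(a))
            for a in angles]

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def generate_structure(domain=100.0, target_fraction=0.3, r_mean=5.0, seed=1):
    """Keep sampling blocks until the target areal stone content is reached
    (overlap and containment checks are omitted for brevity)."""
    random.seed(seed)
    blocks, filled, total = [], 0.0, domain * domain
    while filled / total < target_fraction:
        poly = random_block(random.uniform(0, domain),
                            random.uniform(0, domain), r_mean)
        blocks.append(poly)
        filled += polygon_area(poly)
    return blocks, filled / total

blocks, fraction = generate_structure()
```

A real generator would additionally reject overlapping blocks and match a prescribed size distribution; the loop structure, however, is the essence of the Monte Carlo approach.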
Abstract:
Orthogonal design and uniform design were used to optimize the separation of enantiomers by capillary zone electrophoresis using 2,6-di-O-methyl-beta-cyclodextrin (DM-beta-CD) as a chiral selector. The concentration of DM-beta-CD, buffer pH, running voltage, and capillary temperature were selected as variable parameters, and their effects on peak resolution were studied with both design methods. It was concluded that orthogonal design offers a rapid and efficient means of testing the importance of individual parameters and of determining the optimum operating conditions; however, for a large number of both factors and levels, uniform design is more efficient. The effect of adding methanol and citric acid buffer on the separation of enantiomers was also examined.
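The abstract does not give the array used; as a hedged sketch of how an orthogonal design ranks factor importance, the following uses the standard L9(3^4) array with range analysis (the synthetic responses are illustrative, not the paper's data):

```python
# Standard L9(3^4) orthogonal array, levels coded 1..3.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def range_analysis(array, responses):
    """For each factor: the mean response at each level, and the range of
    those level means. A large range flags an influential factor."""
    results = []
    for f in range(len(array[0])):
        means = {}
        for level in (1, 2, 3):
            vals = [y for row, y in zip(array, responses) if row[f] == level]
            means[level] = sum(vals) / len(vals)
        results.append((max(means.values()) - min(means.values()), means))
    return results

# Synthetic responses in which factor 0 dominates and factor 3 is inert.
y = [10 * row[0] + row[1] + 0.1 * row[2] for row in L9]
ranges = [r for r, _ in range_analysis(L9, y)]
```

The nine runs suffice to separate the four factors because each level of each factor appears exactly three times, balanced against the others.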
Abstract:
We present an image-based approach to infer 3D structure parameters using a probabilistic "shape+structure" model. The 3D shape of a class of objects may be represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes can then be estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We augment the shape model to incorporate structural features of interest; novel examples with missing structure parameters may then be reconstructed to obtain estimates of these parameters. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a dataset of thousands of pedestrian images generated from a synthetic model, we can perform accurate inference of the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
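The paper's mixture-of-PPCA prior is more elaborate than can be shown here; as a hedged, one-component sketch of the core mechanism (fit a linear subspace coefficient from the observed dimensions, then read off the missing ones), with all names and data illustrative:

```python
def reconstruct_missing(mean, component, observed):
    """One principal component: least-squares coefficient t from the observed
    dimensions, then fill missing dimensions as mean + t * component.
    observed maps dimension index -> value."""
    num = sum((v - mean[i]) * component[i] for i, v in observed.items())
    den = sum(component[i] ** 2 for i in observed)
    t = num / den
    return [observed.get(i, mean[i] + t * component[i])
            for i in range(len(mean))]

# A point lying exactly on the subspace: observing two of its three
# coordinates recovers the third.
filled = reconstruct_missing([0.0, 0.0, 0.0], [1.0, 2.0, 2.0],
                             {0: 2.0, 1: 4.0})
```

The full model replaces the single component with a mixture of probabilistic PCA densities and treats the structure parameters (e.g. joint locations) as extra dimensions of the same vector.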
Abstract:
Marine sponge cell culture is a potential route for the sustainable production of sponge-derived bioproducts. Development of a basal culture medium is a prerequisite for the attachment, spreading, and growth of sponge cells in vitro. With the limited knowledge available on nutrient requirements for sponge cells, a series of statistical experimental designs has been employed to screen and optimize the critical nutrient components including inorganic salts (ferric ion, zinc ion, silicate, and NaCl), amino acids (glycine, glutamine, and aspartic acid), sugars (glucose, sorbitol, and sodium pyruvate), vitamin C, and mammalian cell medium (DMEM and RPMI 1640) using MTT assay in 96-well plates. The marine sponge Hymeniacidon perleve was used as a model system. Plackett-Burman design was used for the initial screening, which identified the significant factors of ferric ion, NaCl, and vitamin C. These three factors were selected for further optimization by Uniform Design and Response Surface Methodology (RSM), respectively. A basal medium was finally established, which supported an over 100% increase in viability of sponge cells.
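The screening step can be sketched concretely; below is the standard 12-run Plackett-Burman construction (cyclic shifts of the textbook generator row plus an all-minus row) with main-effect estimates, using synthetic responses rather than the study's assay data:

```python
def plackett_burman12():
    """12-run Plackett-Burman design for up to 11 two-level factors:
    11 cyclic shifts of the standard generator, plus an all-minus run."""
    gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

def main_effects(design, responses):
    """Effect of factor j: mean response at +1 minus mean response at -1."""
    effects = []
    for j in range(len(design[0])):
        plus = [y for row, y in zip(design, responses) if row[j] == 1]
        minus = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(plus) / len(plus) - sum(minus) / len(minus))
    return effects

design = plackett_burman12()
y = [3 * row[0] for row in design]      # synthetic: only factor 0 matters
effects = main_effects(design, y)
```

Because the columns are mutually orthogonal, each effect estimate is uncontaminated by the other main effects, which is what makes 12 runs enough to screen 11 candidate nutrients.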
Abstract:
A model is developed for predicting the resolution of component pairs of interest and for calculating the optimum temperature-programming conditions in comprehensive two-dimensional gas chromatography (GC x GC). Based on at least three isothermal runs, retention times and half-height peak widths on both dimensions are predicted for any linear temperature-programmed run on the first dimension combined with isothermal runs on the second dimension. The calculation of the optimum temperature-programming conditions is based on the predicted resolution of the "difficult-to-separate" components in a given mixture. The resolution of all neighboring peaks on the first dimension is obtained from the predicted first-dimension retention times and peak widths; second-dimension resolution is calculated only for adjacent components with insufficient first-dimension resolution that elute within the same modulation period. The optimum temperature-programming condition is the one for which the resolutions of all components of interest in the GC x GC separation meet the analytical requirement in the shortest analysis time. The validity of the model has been proven by using it to predict and optimize the GC x GC temperature-programming conditions for an alkylpyridine mixture. (c) 2005 Elsevier B.V. All rights reserved.
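The resolution criterion itself is standard; as a hedged sketch (the 1.18 factor converts half-height widths to the usual baseline-width resolution for Gaussian peaks, and the peak data below are illustrative), flagging pairs that need the second dimension might look like:

```python
def resolution_half_height(t1, wh1, t2, wh2):
    """Chromatographic resolution from retention times and half-height
    widths: Rs = 1.18 * |t2 - t1| / (wh1 + wh2)."""
    return 1.18 * abs(t2 - t1) / (wh1 + wh2)

def needs_second_dimension(peaks, required=1.5):
    """peaks: list of (retention_time, half_height_width) sorted by time.
    Return index pairs whose first-dimension resolution is insufficient,
    i.e. the pairs whose separation must come from the second dimension."""
    return [(i, i + 1)
            for i, ((t1, w1), (t2, w2)) in enumerate(zip(peaks, peaks[1:]))
            if resolution_half_height(t1, w1, t2, w2) < required]

peaks = [(10.0, 0.1), (10.2, 0.1), (11.0, 0.1)]
flagged = needs_second_dimension(peaks)
```

In the paper's scheme, the temperature program is then varied until every flagged pair reaches the required resolution on one dimension or the other, with analysis time minimized.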
Abstract:
The influence of process variables (pea starch, guar gum and glycerol) on the viscosity (V), solubility (SOL), moisture content (MC), transparency (TR), Hunter parameters (L, a, and b), total color difference (ΔE), yellowness index (YI), and whiteness index (WI) of pea starch based edible films was studied using a three-factor, three-level Box–Behnken response surface design. The individual linear effects of pea starch, guar gum, and glycerol were significant (p < 0.05) on all the responses. However, the Hunter a value was significantly (p < 0.05) affected only by pea starch and guar gum, in positive and negative linear terms, respectively. The interaction of starch × glycerol was also significant (p < 0.05) for the TR of the edible films, and the interaction between starch × guar gum had a significant impact on the b and YI values. The quadratic regression coefficient of pea starch showed a significant effect (p < 0.05) on V, MC, L, b, ΔE, YI, and WI; that of glycerol on ΔE and WI; and that of guar gum on ΔE and SOL. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed from the experimental design with reliable and satisfactory fit to the corresponding experimental data and high coefficients of determination (R2 > 0.93). Three-dimensional response surface plots were established to investigate the relationship between the process variables and the responses. The optimized conditions, with the goal of maximizing TR and minimizing SOL, YI, and MC, were 2.5 g pea starch, 25% glycerol, and 0.3 g guar gum. The results revealed that pea starch/guar gum edible films with appropriate physical and optical characteristics can be effectively produced and successfully applied in the food packaging industry.
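The second-order polynomial fit at the heart of such response-surface work can be sketched in one variable (the study itself fits three factors with interaction terms; the data and single-factor form below are illustrative assumptions):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least squares for y = b0 + b1*x + b2*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    return solve(XtX, Xty)

coeffs = fit_quadratic([-1.0, 0.0, 1.0, 2.0], [0.0, 2.0, 6.0, 12.0])
```

With three factors, the design matrix simply gains columns for the other linear, quadratic, and cross-product terms; optimization then searches the fitted surface.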
A simulation-based design method to transfer surface mount RF system to flip-chip die implementation
Abstract:
Flip-chip technology is a high-density solution to the demand for very-large-scale integration. For wireless sensor nodes and similar RF applications, the growing requirements of wearable and implantable implementations make flip-chip a leading technology for integration and miniaturization. In this paper, the flip-chip package is treated as part of the whole system, affecting its RF performance. A simulation-based design flow is presented for transferring a surface-mount PCB design to a flip-chip die package for RF applications. Models are built in Q3D Extractor to extract equivalent circuits from the parasitic parameters of the interconnections, for both bare-die and wire-bonding technologies. These parameters, together with the PCB layout and stack-up, are then modeled in the design of the essential parts of the flip-chip RF circuit. Through simulation and optimization, the flip-chip package is redesigned using parameters obtained from simulation sweeps. Experimental results agree well with simulation in the comparison of the bare-die package's return loss before and after optimization. This design method can be used generally to transfer any surface-mount PCB to a flip-chip package for RF systems, or to predict the RF specifications of a system using flip-chip technology.
Abstract:
A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators designed as both infinite-impulse-response (IIR) and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude responses and therefore reject noise at frequencies above the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. The proposed differentiators are shown to perform well compared with previously proposed filters. In the time-domain analyses, the processing of quantized signals proved especially enlightening with respect to the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm whose objectives include both magnitude and phase response; the low-pass characteristic is achieved by minimizing the filter variance. The resulting low-pass differentiators show steep roll-off as well as highly accurate magnitude response in the pass-band. Although it has a history of over three hundred years, the design of fractional differentiators has become a 'hot topic' in recent decades. One challenging problem in this area is that many different definitions describe the fractional model, such as the Riemann-Liouville and Caputo definitions. Using a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis.
Specifically, the first of these applies the second-degree differentiators to estimating the frequency components of a power system; the second concerns an image-processing edge-detection application.
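The thesis's optimized FIR/IIR designs are not reproducible from the abstract; as a hedged stand-in that shows what a low-pass (noise-rejecting) FIR differentiator looks like, here is the classical Lanczos "smooth differentiation" filter, which fits a line by least squares over a sliding window:

```python
def lanczos_diff(x, m=3):
    """Low-noise FIR differentiator over a window of 2*m + 1 samples:
    y[n] = sum_{k=1..m} k * (x[n+k] - x[n-k]) / (2 * sum k^2).
    Equivalent to the slope of a least-squares line through the window,
    so high-frequency noise is attenuated (a low-pass magnitude response)."""
    norm = 2 * sum(k * k for k in range(1, m + 1))
    return [sum(k * (x[n + k] - x[n - k]) for k in range(1, m + 1)) / norm
            for n in range(m, len(x) - m)]

# On a noiseless ramp of slope 0.5 the filter recovers the slope exactly.
deriv = lanczos_diff([0.5 * i for i in range(20)])
```

Unlike the ideal differentiator, whose gain grows with frequency and amplifies quantization noise, this filter's gain rolls off at high frequencies, which is the property the thesis pursues with its optimization-based designs.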
Abstract:
We have developed an alternative approach to optical design which operates in the analytical domain so that an optical designer works directly with rays as analytical functions of system parameters rather than as discretely sampled polylines. This is made possible by a generalization of the proximate ray tracing technique which obtains the analytical dependence of the rays at the image surface (and ray path lengths at the exit pupil) on each system parameter. The resulting method provides an alternative direction from which to approach system optimization and supplies information which is not typically available to the system designer. In addition, we have further expanded the procedure to allow asymmetric systems and arbitrary order of approximation, and have illustrated the performance of the method through three lens design examples.
Abstract:
Electromagnetic metamaterials are artificially structured media typically composed of arrays of resonant electromagnetic circuits, the dimension and spacing of which are considerably smaller than the free-space wavelengths of operation. The constitutive parameters for metamaterials, which can be obtained using full-wave simulations in conjunction with numerical retrieval algorithms, exhibit artifacts related to the finite size of the metamaterial cell relative to the wavelength. Liu showed that the complicated, frequency-dependent forms of the constitutive parameters can be described by a set of relatively simple analytical expressions. These expressions provide useful insight and can serve as the basis for more intelligent interpolation or optimization schemes. Here, we show that the same analytical expressions can be obtained using a transfer-matrix formalism applied to a one-dimensional periodic array of thin, resonant, dielectric, or magnetic sheets. The transfer-matrix formalism breaks down, however, when both electric and magnetic responses are present in the same unit cell, as it neglects the magnetoelectric coupling between unit cells. We show that an alternative analytical approach based on the same physical model must be applied for such structures. Furthermore, in addition to the intercell coupling, electric and magnetic resonators within a unit cell may also exhibit magnetoelectric coupling. For such cells, we find an analytical expression for the effective index, which displays markedly characteristic dispersion features that depend on the strength of the coupling coefficient. We illustrate the applicability of the derived expressions by comparing to full-wave simulations on magnetoelectric unit cells. We conclude that the design of metamaterials with tailored simultaneous electric and magnetic response-such as negative index materials-will generally be complicated by potentially unwanted magnetoelectric coupling. © 2010 The American Physical Society.
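The transfer-matrix step for the simple case (a single thin sheet per unit cell, no magnetoelectric coupling) can be sketched; the normalized units (Z0 = 1), the sheet-admittance model, and all names below are illustrative assumptions, not the paper's formulation:

```python
import cmath

def prop(k, d):
    """ABCD matrix for free-space propagation over distance d (Z0 = 1)."""
    c, s = cmath.cos(k * d), cmath.sin(k * d)
    return [[c, 1j * s], [1j * s, c]]

def sheet(Ys):
    """ABCD matrix of a thin sheet modeled as a shunt admittance Ys."""
    return [[1, 0], [Ys, 1]]

def matmul(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def effective_index(k0, d, Ys):
    """Bloch analysis of one unit cell (sheet centred in a period d):
    cos(q d) = (M[0][0] + M[1][1]) / 2, and n_eff = q / k0."""
    M = matmul(prop(k0, d / 2), matmul(sheet(Ys), prop(k0, d / 2)))
    q = cmath.acos((M[0][0] + M[1][1]) / 2) / d
    return q / k0

n_empty = effective_index(1.0, 1.0, 0j)  # no sheet: free space
```

With a resonant, frequency-dependent Ys this yields exactly the kind of finite-cell dispersion artifacts the paper analyzes; the paper's point is that this formalism must be replaced once electric and magnetic responses coexist in one cell.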
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise application requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
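The thesis's incremental GA is not reproduced here; as a hedged sketch of the underlying idea, a plain genetic algorithm over dispatch sequences, where the jobs, weights, GA settings, and the use of total weighted completion time as a stand-in objective are all illustrative assumptions:

```python
import random

def total_weighted_completion(seq, proc, weight):
    """Total weighted completion time of a job sequence on one machine."""
    t = cost = 0.0
    for j in seq:
        t += proc[j]
        cost += weight[j] * t
    return cost

def order_crossover(p1, p2):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_schedule(proc, weight, pop_size=30, gens=100, seed=0):
    """Elitist GA over permutations; the identity order seeds the population,
    so the result can never be worse than naive FIFO dispatching."""
    random.seed(seed)
    n = len(proc)
    fit = lambda s: total_weighted_completion(s, proc, weight)
    pop = [list(range(n))] + [random.sample(range(n), n)
                              for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=fit)
        survivors = pop[:pop_size // 2]          # elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            c = order_crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:            # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=fit)

proc, weight = [3, 1, 4, 2, 5, 2], [1, 4, 2, 3, 1, 2]
best = ga_schedule(proc, weight)
```

The "incremental" variant in the thesis additionally reuses the evolved population as new orders arrive, rather than restarting from scratch.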
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, it also performs a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
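The decomposition details are not given in the abstract; as a hedged, one-level sketch of the idea (split a series into a periodic component and a remainder, predict each, aggregate), with the period and data illustrative:

```python
def decompose(series, period):
    """Split a series into a periodic component (the per-phase mean) and a
    remainder; real series would be decomposed at several nested periods."""
    phase_means = [sum(series[p::period]) / len(series[p::period])
                   for p in range(period)]
    seasonal = [phase_means[i % period] for i in range(len(series))]
    remainder = [y - s for y, s in zip(series, seasonal)]
    return phase_means, remainder

def predict_next(series, period):
    """Aggregate the component predictions: the next phase's mean plus a
    naive (mean) prediction of the remainder."""
    phase_means, remainder = decompose(series, period)
    return phase_means[len(series) % period] + sum(remainder) / len(remainder)

prediction = predict_next([1.0, 5.0, 2.0] * 3, 3)
```

In the thesis this idea is applied hierarchically (e.g. daily within weekly patterns), with proper univariate and multivariate models replacing the naive remainder forecast.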
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use them as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations or effective decisions.
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize {\em flow-time} (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; we design algorithms that strive to balance the energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and for minimizing the maximum flow-time in the offline setting. In the online, non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce Polytope Scheduling Problem (\psp). The \psp problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the \psp problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and queuing theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called {\em bounded stretch}, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
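As a single-machine baseline for the flow-time objective that runs through the contributions above, the classical SRPT (shortest remaining processing time) rule is optimal for total flow time; the unit-time simulation below is a hedged sketch (integer processing times assumed, jobs given as (release, processing) pairs), not one of the thesis's unrelated-machine algorithms:

```python
import heapq

def srpt_total_flow(jobs):
    """Single-machine SRPT simulated in unit time steps.
    jobs: list of (release, processing) with integer processing times.
    Returns the total flow time, i.e. sum of (completion - release)."""
    jobs = sorted(jobs)                 # by release time
    t = i = done = total = 0
    active = []                         # heap of [remaining, release]
    while done < len(jobs):
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(active, [jobs[i][1], jobs[i][0]])
            i += 1
        if not active:                  # machine idles until next release
            t = jobs[i][0]
            continue
        active[0][0] -= 1               # run the shortest-remaining job
        t += 1
        if active[0][0] == 0:
            _, release = heapq.heappop(active)
            total += t - release
            done += 1
    return total
```

For jobs [(0, 3), (1, 1)], SRPT preempts the long job for the short arrival and achieves total flow time 5, versus 6 for non-preemptive FIFO; it is exactly this kind of preemptive prioritization that becomes hard to generalize to unrelated machines.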
Abstract:
Virtual manufacturing and design assessment increasingly involve the simulation of interacting phenomena, viz. multi-physics, an activity which is very computationally intensive. This chapter describes an attempt to address the parallel issues associated with a multi-physics simulation approach based upon a range of compatible procedures operating on one mesh using a single database; the distinct physics solvers can operate separately or coupled on sub-domains of the whole geometric space. Moreover, the finite volume unstructured mesh solvers use different discretization schemes (and, particularly, different 'nodal' locations and control volumes). A two-level approach to the parallelization of this simulation software is described: the code is restructured into parallel form on the basis of the mesh partitioning alone, that is, without regard to the physics. However, at run time, the mesh is partitioned to achieve a load balance, by considering the load per node/element across the whole domain. The latter, of course, is determined by the problem-specific physics at a particular location.
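The load-balance criterion described above can be sketched in isolation; as a hedged illustration (a greedy largest-first assignment that ignores mesh connectivity, which a real partitioner must also respect), with illustrative per-element loads:

```python
import heapq

def balance_loads(loads, n_parts):
    """Greedy largest-first assignment of per-element work to partitions:
    each element goes to the currently lightest partition. Connectivity
    (minimizing communication across partition boundaries) is ignored."""
    heap = [(0.0, p) for p in range(n_parts)]
    heapq.heapify(heap)
    assign = [[] for _ in range(n_parts)]
    for e in sorted(range(len(loads)), key=lambda e: -loads[e]):
        total, p = heapq.heappop(heap)
        assign[p].append(e)
        heapq.heappush(heap, (total + loads[e], p))
    return assign

parts = balance_loads([5.0, 4.0, 3.0, 3.0, 3.0], 2)
```

In the chapter's setting the per-element loads themselves vary with the local physics, which is why partitioning must happen at run time rather than once at mesh generation.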
Abstract:
As announced in the November 2000 issue of MathStats&OR [1], one of the projects supported by the Maths, Stats & OR Network funds is an international survey of research into pedagogic issues in statistics and OR. I am taking the lead on this and report here on the progress that has been made during the first year. A paper giving some background to the project and describing initial thinking on how it might be implemented was presented at the 53rd session of the International Statistical Institute in Seoul, Korea, in August 2001, in a session on The future of statistics education research [2]. It sounded easy. I considered that I was something of an expert on surveys, having lectured on the topic for many years and having helped students and others who were doing surveys, particularly with the design of their questionnaires. Surely all I had to do was to draft a few questions, send them electronically to colleagues in statistical education who would be only too happy to respond, and summarise their responses? I should have learnt from my experience of advising all those students who thought that doing a survey was easy and to whom I had to explain that their ideas were too ambitious. There are several inter-related stages in survey research and it is important to think about these before rushing into the collection of data. In the case of the survey in question, this planning stage revealed several challenges. Surveys are usually done for a purpose, so even before planning how to do them, it is advisable to think about the final product and the dissemination of results. This is the route I followed.
Abstract:
The aim of integrating computational mechanics (FEA and CFD) with optimization tools is to speed up dramatically the design process in different application areas concerning reliability in electronic packaging. Design engineers in the electronics manufacturing sector may use these tools to predict key design parameters and configurations (i.e. material properties, product dimensions, design at PCB level, etc.) that will guarantee the required product performance. In this paper a modeling strategy coupling computational mechanics techniques with numerical optimization is presented and demonstrated on two problems. The integrated modeling framework is obtained by coupling the multi-physics analysis tool PHYSICA with the numerical optimization package VisualDOC into a fully automated design tool for applications in electronic packaging. Thermo-mechanical simulations of solder creep deformation are presented to predict flip-chip reliability and lifetime under thermal cycling. A thermal-management design based on multi-physics analysis with coupled thermal-flow-stress modeling is also discussed. The response surface modeling approach, in conjunction with Design of Experiments statistical tools, is demonstrated and subsequently used by the numerical optimization techniques as part of this modeling framework. Predictions for reliable electronic assemblies are achieved in an efficient and systematic manner.