930 results for "Time optimization"
Abstract:
A new gas delivery system has been designed and installed for the HIRFL-CSR cluster target. The original blocked nozzle was replaced by a new one with a throat diameter of 0.12 mm. New tests with hydrogen and argon were performed, and stable jets were obtained for both operating gases. The attenuation of the jet caused by collisions with residual gas was studied. The maximum achievable H2 target density is 1.75×10^13 atoms/cm^3, with a target thickness of 6.3×10^12 atoms/cm^2 for the HIRFL-CSR cluster target. The running stability of the cluster source was tested for both hydrogen and argon, and the operating parameters for obtaining a hydrogen jet were optimized. The results of long-term running of the H2 and Ar cluster jets look promising: the jet intensity showed no essential change during the tests.
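Reading the quoted thickness as the density integrated along the beam path, the two numbers imply an effective jet length (our inference, not stated in the abstract):

\[ N_{t} = n\,L_{\mathrm{eff}} \;\Rightarrow\; L_{\mathrm{eff}} = \frac{6.3\times10^{12}\ \mathrm{cm^{-2}}}{1.75\times10^{13}\ \mathrm{cm^{-3}}} \approx 0.36\ \mathrm{cm}. \]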
Abstract:
The traditional design of accelerator magnets usually involves many time-consuming iterations of a manual analysis process. A software platform that performs these iterations automatically is proposed in this paper. In this platform, DAKOTA (an open-source package developed by Sandia National Laboratories), which provides a variety of optimization methods and algorithms, is used as the optimization routine, and OPERA (software from Vector Fields) is selected as the electromagnetic simulation routine. Two examples of accelerator magnet design are used to illustrate how an optimization algorithm is chosen and how the platform works.
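The core of such a platform is a loop in which the optimizer proposes geometry parameters and the field solver returns an objective value. Below is a minimal stand-in sketch in Python using scipy; run_opera and its pole-geometry parameters are hypothetical placeholders, and DAKOTA's actual file-based driver interface is not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def run_opera(params):
        # Hypothetical wrapper around the field solver: returns a field-quality
        # error for a given pole geometry. A real wrapper would write an input
        # file, launch the solver, and parse the result.
        width, chamfer = params
        return (width - 42.0) ** 2 + 10.0 * (chamfer - 3.5) ** 2  # placeholder objective

    # Derivative-free search, a reasonable default when each evaluation is a simulation.
    result = minimize(run_opera, x0=[40.0, 3.0], method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-6})
    print(result.x, result.fun)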
Abstract:
A novel approach is proposed for the simultaneous optimization of mobile-phase pH and gradient steepness in RP-HPLC using artificial neural networks. With the initial and final concentrations of the organic solvent preset, a limited number of experiments with different gradient times and mobile-phase pH values are arranged in the two-dimensional space of mobile-phase parameters. The retention behavior of each solute is modeled by an individual artificial neural network, and an "early stopping" strategy is adopted to ensure the predictive capability of the networks. The trained networks can then predict the retention time of each solute under arbitrary mobile-phase conditions within the optimization region. Finally, the optimal separation conditions are found according to a global resolution function. The effectiveness of the method is validated by optimizing the separation conditions for amino acids derivatised with a new fluorescent reagent.
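A minimal sketch of the per-solute retention model in Python with scikit-learn: one small network per solute, trained with early stopping, then queried over a (gradient time, pH) grid. The design points, retention times, and network size are illustrative placeholders, not values from the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Experimental design points: (gradient time in min, mobile-phase pH).
    X = np.array([[20, 2.5], [20, 3.0], [20, 3.5], [40, 2.5],
                  [40, 3.5], [60, 2.5], [60, 3.0], [60, 3.5]])
    t_r = np.array([5.1, 4.7, 4.3, 8.7, 7.2, 12.0, 11.4, 10.6])  # one solute's retention times (min)

    # early_stopping holds out part of the data to halt training before overfitting.
    net = MLPRegressor(hidden_layer_sizes=(8,), early_stopping=True,
                       validation_fraction=0.2, max_iter=5000, random_state=0)
    net.fit(X, t_r)

    # Predict retention over the optimization region; a global resolution
    # function would then rank these candidate conditions.
    grid = np.array([[tg, ph] for tg in range(20, 61, 5)
                     for ph in np.arange(2.5, 3.6, 0.1)])
    t_pred = net.predict(grid)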
Abstract:
In this report, gold nanoparticles labeled with a Raman reporter (AuNPs-R6G) were assembled on glass and used as seeds for the in situ growth of silver-coated nanostructures in a silver enhancer solution, forming AuNPs-R6G@Ag nanostructures that were characterized by scanning electron microscopy (SEM) and UV-visible spectroscopy. More importantly, the resulting silver-coated nanostructures can be used as a surface-enhanced Raman scattering (SERS) substrate. The SERS activity can be tuned through the silver deposition time and the assembly time of AuNPs-R6G on the glass. The results indicate that the maximum SERS activity is obtained when the nanostructures are assembled on glass for 2 h with silver deposition for 2 min.
Abstract:
Chromosome manipulation of commercially valuable marine animals plays an important role in aquaculture. The special reproductive characteristics of shrimp make it difficult to control fertilization and synchronize egg development, so research on chromosome manipulation in shrimp has proceeded very slowly. In the present study, triploidy in the shrimp Fenneropenaeus chinensis was induced by heat shock, and the optimal induction conditions were screened at different spawning temperatures. The level of triploid induction for each treatment was evaluated by flow cytometry at the nauplius stage; the highest level reached more than 90%. The starting time of each treatment was crucial for triploid induction. One optimal treatment was a heat shock at 29-32 °C starting at 18-20 min and lasting 10 min, with these conditions varying according to the temperature at spawning. Triploid levels at the embryo and nauplius stages did not differ, suggesting similar hatching rates for diploids and triploids. Heat shock is thus a very effective way to induce triploidy in this species and, compared with chemical treatment, can easily be used on a large scale without harmful effects on the environment.
Abstract:
Gracilaria lemaneiformis Bory is an economically important alga that is primarily used for agar production. Although tetraspores are ideal seeds for the cultivation of G. lemaneiformis, the most popular culture method is currently based on vegetative fragments, which is labor-intensive and time-consuming. In this study, we optimized the conditions for tetraspore release and evaluated the photosynthetic activities of colonies formed from different branches of G. lemaneiformis using a PAM (pulse-amplitude-modulated) measuring system. The results showed that variations in temperature and salinity had significant effects on tetraspore yield, whereas variations in photon flux density (from 15 to 480 μmol m^-2 s^-1) had no apparent effect. Moreover, the PAM parameters Y(I), Y(II), ETR(I), ETR(II), and Fv/Fm of colonies formed from different branches showed the same trend: first-generation branches > second-generation branches > third-generation branches, indicating that the photosynthetic activities of the different colonies changed in the same way. Furthermore, photosynthesis in G. lemaneiformis was found to be involved in vegetative reproduction and tetraspore formation. Finally, the first-generation branches grew slowly but accumulated organic compounds to form large numbers of tetraspores. Taken together, these results show that first-generation branches are ideal material for the release of tetraspores.
Abstract:
A new method is proposed for time-optimal trajectory planning and trajectory control of industrial robots. It ensures that the robot hand moves along a prescribed path in Cartesian space in minimum time, subject to bounds on joint displacement, velocity, acceleration, and jerk. In this method, each planned joint trajectory takes the form of a quadratic polynomial plus a cosine function, which guarantees continuity not only of the displacement, velocity, and acceleration of each joint but also of its jerk. The method can therefore both improve a robot's working efficiency and extend its service life. Computer simulations and experiments on a PUMA560 robot show that the method is correct and effective, providing a good solution to the time-optimal trajectory planning and control problem for industrial robots under nonlinear kinematic constraints.
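A minimal numerical sketch of the trajectory form described above, with hypothetical coefficients: each joint segment follows q(t) = a0 + a1 t + a2 t^2 + b cos(wt), so velocity, acceleration, and jerk are all smooth and can be bounded pointwise when the segment duration is minimized.

    import numpy as np

    a0, a1, a2, b, w = 0.1, 0.5, -0.05, 0.2, 2.0  # hypothetical segment coefficients

    def q(t):    return a0 + a1*t + a2*t**2 + b*np.cos(w*t)  # joint displacement
    def qd(t):   return a1 + 2*a2*t - b*w*np.sin(w*t)        # velocity
    def qdd(t):  return 2*a2 - b*w**2*np.cos(w*t)            # acceleration
    def qddd(t): return b*w**3*np.sin(w*t)                   # jerk

    # All four profiles are continuous, so limits such as |qd| <= v_max,
    # |qdd| <= a_max, |qddd| <= j_max can be checked on a sampled grid.
    t = np.linspace(0.0, 2.0, 201)
    print(qd(t).max(), qdd(t).max(), np.abs(qddd(t)).max())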
Abstract:
There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm, two measures of the HSD model's behavior are precisely controlled independently of the problem instance: (i) the sizes of ε-optimal solutions, and (ii) the maximum distance of ε-optimal solutions to the boundary of the cone of the HSD variables. This norm is also useful in developing a stopping-rule theory for HSD-based interior-point methods such as SeDuMi. Under mild assumptions, we show that a standard stopping rule implicitly involves the sum of the sizes of the ε-optimal primal and dual solutions, as well as the size of the initial primal and dual infeasibility residuals. This theory suggests possible criteria for choosing starting points for the homogeneous self-dual model that might improve the resulting solution time in practice.
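For context, the HSD embedding of the conic pair min{c^T x : Ax = b, x in K} and its dual is commonly written as the self-dual feasibility system below (a standard formulation, summarized here rather than quoted from the paper):

\[ Ax - b\tau = 0, \qquad -A^{\top}y + c\tau - s = 0, \qquad b^{\top}y - c^{\top}x - \kappa = 0, \qquad x \in K,\; s \in K^{*},\; \tau \ge 0,\; \kappa \ge 0, \]

whose solutions with τ > 0 recover primal-dual optimal solutions, while those with κ > 0 certify infeasibility.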
Abstract:
A model is developed for predicting the resolution of component pairs of interest and calculating the optimum temperature-programming conditions in comprehensive two-dimensional gas chromatography (GC×GC). Based on at least three isothermal runs, retention times and peak widths at half-height on both dimensions are predicted for any linear temperature-programmed run on the first dimension combined with isothermal runs on the second dimension. The calculation of the optimum temperature-programming conditions is based on predicting the resolution of the "difficult-to-separate" components in a given mixture. The resolution of all neighboring peaks on the first dimension is obtained from the predicted retention times and peak widths on that dimension; the resolution on the second dimension is calculated only for adjacent components that are insufficiently resolved on the first dimension and elute within the same modulation period on the second dimension. The optimum temperature-programming conditions are those for which the resolutions of all components of interest in the GC×GC separation meet the analytical requirement and the analysis time is shortest. The validity of the model has been proven by using it to predict and optimize the GC×GC temperature-programming conditions for an alkylpyridine mixture.
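A small sketch of the screening step this procedure implies, using the standard half-height resolution formula Rs = 1.18 (t2 - t1)/(w1 + w2): first-dimension neighbors below the required resolution are flagged for re-evaluation on the second dimension. The peak data and threshold are illustrative.

    def resolution(t1, w1, t2, w2):
        # Peak resolution from retention times and widths at half-height.
        return 1.18 * abs(t2 - t1) / (w1 + w2)

    # (retention time s, width at half-height s) of predicted first-dimension peaks
    peaks = [(310.0, 2.1), (316.5, 2.3), (318.0, 2.2)]
    REQUIRED = 1.5

    unresolved = [(a, b) for a, b in zip(peaks, peaks[1:])
                  if resolution(a[0], a[1], b[0], b[1]) < REQUIRED]
    # 'unresolved' pairs would then be checked against their predicted
    # second-dimension resolution within the same modulation period.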
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of the vector field at every location, and the magnitude of the field at a location represents the density of data transiting through it. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory, and we show that to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of this vector field model, we offer a scheme for energy-efficient routing: the permittivity coefficient is set to a high value in parts of the network where nodes have high residual energy and to a low value where nodes have little energy left. Our simulations show that this method yields a significant increase in network lifetime compared with shortest-path and weighted shortest-path schemes. Our initial focus is on the case of a single destination; we later extend the approach to multiple destinations. With multiple destinations, the network must be partitioned into regions of attraction, one per destination, with each destination responsible for collecting all messages generated in its region. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination so as to optimize network performance. We use the vector field model to solve this problem: we define a conservative vector field, which can therefore be written as the gradient of a scalar (potential) field, and we show that in the optimal assignment of communication load the potential takes the same value at the locations of all destinations.

Another application of the vector field model is finding the optimal locations of the destinations. We show that the vector field gives the gradient of the cost function with respect to the destination locations, and based on this fact we suggest an algorithm, applied during the design phase of a network, that relocates destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments.
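One natural way to write this model down (our notation and reading, not necessarily the authors' exact formulation) is as a Thomson-principle-type variational problem:

\[ \min_{\mathbf{D}} \int_{A} \frac{|\mathbf{D}(x)|^{2}}{\epsilon(x)}\, dA \quad \text{subject to} \quad \nabla \cdot \mathbf{D}(x) = \rho(x), \]

where ρ(x) is the net rate of information generated at x (positive at sensors, negative at destinations) and ε(x) is the permittivity-like weight. The minimizer satisfies ∇ × (D/ε) = 0, so D/ε = -∇φ for a potential φ, exactly as in electrostatics.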
In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We define the responsiveness of a TCP aggregate as the degree to which the aggregate reduces its sending rate in response to packet drops, define metrics that describe it, and suggest methods for determining their values. The first method is a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease. Such a test is not robust to multiple simultaneous tests performed at different routers; we make it robust by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion-control scheme is that it maintains a degree of fairness among different aggregates.

We then modify CAPM to estimate the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. These methods work by intentionally perturbing the aggregate, dropping a very small number of packets and observing the aggregate's response. We offer two conformance tests. In the first, we apply the perturbation to SYN packets sent at the start of the TCP three-way handshake and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. Exploiting the analogy with multiple-access communication, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation, so that performance does not degrade from cross-interference between simultaneously testing routers. We demonstrate the efficacy of these methods through mathematical analysis and extensive simulation experiments.
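A toy sketch of the signature idea: each router modulates its drop rate with an orthogonal (Walsh-like) signature, and correlating an aggregate's observed rate deviation against a router's signature isolates that router's test from simultaneous tests elsewhere. All signatures, gains, and noise levels here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    sig_a = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)  # router A's signature
    sig_b = np.array([1, 1, -1, -1, 1, 1, -1, -1], dtype=float)  # router B's, orthogonal to A's
    assert sig_a @ sig_b == 0

    # Observed rate deviation of one aggregate: it responds strongly to A's
    # drops (gain -2.0), weakly to B's (gain -0.5), plus measurement noise.
    response = -2.0 * sig_a - 0.5 * sig_b + 0.1 * rng.standard_normal(8)

    # Orthogonality lets each router recover its own gain despite the other's test.
    gain_a = (response @ sig_a) / (sig_a @ sig_a)  # close to -2.0: responsive aggregate
    gain_b = (response @ sig_b) / (sig_b @ sig_b)  # close to -0.5: weakly responsive
    print(gain_a, gain_b)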
Abstract:
This paper demonstrates a modeling and design approach that couples computational mechanics techniques with numerical optimisation and statistical models for virtual prototyping and testing in different application areas concerning the reliability of electronic packages. The integrated software modules provide a design engineer in the electronic manufacturing sector with fast design and process solutions by optimising key parameters and taking into account the complexity of certain operational conditions. The integrated modeling framework is obtained by coupling the multi-physics finite element framework PHYSICA with the numerical optimisation tool VisualDOC into a fully automated design tool for electronic packaging problems. Response Surface Modeling methodology and Design of Experiments statistical tools, together with numerical optimisation techniques, are demonstrated as part of the modeling framework. Two problems are discussed and solved using the integrated FEM-optimisation tool. The first is an example of thermal management of an electronic package on a board: the location of the device is optimised to reduce the junction temperature and the stress in the die, subject to a given cooling-air profile and other heat-dissipating active components. In the second example, thermo-mechanical simulations of solder creep deformation are used to predict flip-chip reliability and subsequently to optimise the lifetime of solder interconnects under thermal cycling.
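A minimal sketch of the Response Surface Modeling step, assuming the usual workflow: fit a quadratic surface to design-of-experiments samples of a simulated response (here, peak temperature versus device location), then search the cheap surface instead of the expensive FEM model. The sample locations and temperatures are invented for illustration.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # DOE sample points: scaled (u, v) device locations and simulated peak temperature (C).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5],
                  [0.25, 0.75], [0.75, 0.25]])
    y = np.array([88.0, 84.5, 86.0, 90.0, 82.0, 83.1, 84.2])

    # Quadratic response surface: cheap polynomial stand-in for the FEM model.
    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surface.fit(X, y)

    # Grid search on the surrogate for the coolest device location.
    grid = np.array([[u, v] for u in np.linspace(0, 1, 21)
                     for v in np.linspace(0, 1, 21)])
    best = grid[np.argmin(surface.predict(grid))]
    print(best)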
Abstract:
We consider a variety of preemptive scheduling problems with controllable processing times on a single machine and on identical/uniform parallel machines, where the objective is to minimize the total compression cost. In this paper, we propose fast divide-and-conquer algorithms for these scheduling problems. Our approach is based on the observation that each scheduling problem we discuss can be formulated as a polymatroid optimization problem. We develop a novel divide-and-conquer technique for the polymatroid optimization problem and then apply it to each scheduling problem. We show that each scheduling problem can be solved in O(T_feas(n) × log n) time by using our divide-and-conquer technique, where n is the number of jobs and T_feas(n) denotes the time complexity of the corresponding feasible scheduling problem with n jobs. This approach yields faster algorithms for most of the scheduling problems discussed in this paper.
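One way to read the stated bound (our sketch, under the standard assumption that T_feas is superadditive, so two half-size feasibility computations cost no more than one full-size one) is through the divide-and-conquer recurrence

\[ T(n) \le 2\,T(n/2) + c\,T_{\mathrm{feas}}(n), \qquad T_{\mathrm{feas}}(n/2) + T_{\mathrm{feas}}(n/2) \le T_{\mathrm{feas}}(n), \]

which gives O(T_feas(n)) total work on each of the O(log n) recursion levels, hence T(n) = O(T_feas(n) log n).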
Abstract:
Surrogate-based optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models, that is, models whose design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method on two aerospace design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space: on the wing design problem, the new method achieves 76% savings in high-fidelity function calls; on a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than the benchmark.
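A toy sketch of the trust-region model-management loop: optimize a corrected low-fidelity model inside a trust region, accept or reject the step against the high-fidelity model, and resize the region by the agreement ratio. The paper's corrected space mapping relates models on different parameter spaces; this sketch substitutes a simple additive correction on a shared one-dimensional space to show only the trust-region logic, with invented model functions.

    import numpy as np

    def hi_fi(x):
        # Expensive model (illustrative stand-in for the high-fidelity analysis).
        return (x - 1.7) ** 2 + 0.3 * np.sin(5 * x)

    def lo_fi(x):
        # Cheap model, here sharing the high-fidelity parameter space.
        return (x - 1.5) ** 2

    x, radius = 0.0, 0.5
    for _ in range(20):
        shift = hi_fi(x) - lo_fi(x)                   # zeroth-order additive correction at x
        cand = x + np.linspace(-radius, radius, 101)  # trust-region candidates
        step = cand[np.argmin(lo_fi(cand) + shift)]   # minimize corrected surrogate
        pred = (lo_fi(x) + shift) - (lo_fi(step) + shift)
        actual = hi_fi(x) - hi_fi(step)
        rho = actual / pred if pred > 0 else 0.0      # surrogate/truth agreement ratio
        if rho > 0.1:
            x = step                                  # accept the step
        radius = 2 * radius if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)
    print(x, hi_fi(x))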
Abstract:
There is an increasing need to identify the effect of mix composition on the rheological properties of cementitious grouts, using the minislump, Marsh cone, cohesion plate, washout test, and cube tests to determine the fluidity, cohesion, and mechanical properties relevant to grouting applications. Mixture proportioning involves tailoring several parameters to achieve adequate fluidity, cohesion, washout resistance, and compressive strength. This paper proposes a statistical design approach using a composite fractional factorial design, carried out to model the influence of key parameters on the performance of cement grouts. The performance-related responses included minislump, flow time through the Marsh cone, cohesion measured with the Lombardi plate meter, washout mass loss, and compressive strength at 3, 7, and 28 days. The statistical models are valid for mixtures with a water-to-binder ratio of 0.37–0.53, 0.4–1.8% high-range water reducer (HRWR) by mass of binder, 4–12% silica fume as cement replacement by mass, and 0.02–0.8% viscosity-modifying admixture (VMA) by mass of binder. The models enable the identification of the underlying factors and interactions that influence the modeled responses of cement grout. The comparison between predicted and measured responses indicated good accuracy of the established models in describing the effect of the independent variables on fluidity, cohesion, washout resistance, and compressive strength, and the paper demonstrates the usefulness of the models for understanding trade-offs between parameters. Multiparametric optimization is used to establish isoresponses for a desirability function for cement grout. An increase in HRWR increased fluidity and washout mass loss and reduced the plate cohesion value and the Marsh cone flow time. An increase in VMA reduced fluidity and washout mass loss and increased the Marsh cone flow time and plate cohesion. The use of silica fume increased the plate cohesion and Marsh cone flow time and reduced the minislump; in addition, silica fume improved the compressive strength and the washout resistance.
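Multiparametric optimization of this kind is typically carried out with a desirability function of the Derringer-Suich type (a standard construction, sketched here; the abstract does not give the paper's exact transforms): each response y_i is mapped to an individual desirability d_i(y_i) in [0, 1], larger-is-better for compressive strength, smaller-is-better for washout mass loss, and the mix maximizing the overall desirability

\[ D = \Bigl( \prod_{i=1}^{n} d_i(y_i) \Bigr)^{1/n} \]

is selected; the geometric mean ensures that any completely unacceptable response (d_i = 0) rules a mixture out.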