988 results for CPU time
Abstract:
This paper demonstrates how a finite element model which exploits domain decomposition is applied to the analysis of three-phase induction motors. It is shown that a significant gain in CPU time results when compared with standard finite element analysis. Aspects of the application of the method which are particular to induction motors are considered: the means of improving the convergence of the nonlinear finite element equations; the choice of symmetrical sub-domains; the modelling of relative movement; and the inclusion of periodic boundary conditions. © 1999 IEEE.
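The paper applies domain decomposition inside a nonlinear finite element model of an induction motor; as a much-reduced illustration of the basic idea only, the sketch below runs an alternating Schwarz iteration on a one-dimensional Poisson problem with two overlapping subdomains. The grid, overlap and tolerance are arbitrary demonstration choices and nothing here is taken from the paper.

```python
# Illustrative sketch only (assumed toy problem): alternating Schwarz iteration for
# -u'' = f on (0, 1) with u(0) = u(1) = 0 and two overlapping subdomains.
import numpy as np

def solve_poisson(x, f, u_left, u_right):
    """Finite-difference solve of -u'' = f on grid x with Dirichlet end values."""
    n, h = len(x), x[1] - x[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 2.0)
    np.fill_diagonal(A[1:], -1.0)       # sub-diagonal
    np.fill_diagonal(A[:, 1:], -1.0)    # super-diagonal
    b = f(x[1:-1]) * h**2
    b[0] += u_left
    b[-1] += u_right
    u = np.empty(n)
    u[0], u[-1] = u_left, u_right
    u[1:-1] = np.linalg.solve(A, b)
    return u

f = lambda x: np.ones_like(x)           # constant source term
x = np.linspace(0.0, 1.0, 101)
i_a, i_b = 40, 60                       # the overlap region is x[40] .. x[60]
u = np.zeros_like(x)

for sweep in range(50):                 # alternating Schwarz sweeps
    u_old = u.copy()
    # Subdomain 1: [0, x[i_b]], right boundary value taken from the current iterate
    u[:i_b + 1] = solve_poisson(x[:i_b + 1], f, 0.0, u[i_b])
    # Subdomain 2: [x[i_a], 1], left boundary value taken from updated subdomain 1
    u[i_a:] = solve_poisson(x[i_a:], f, u[i_a], 0.0)
    if np.max(np.abs(u - u_old)) < 1e-12:
        break

exact = 0.5 * x * (1.0 - x)             # analytic solution for f = 1
print(f"{sweep + 1} sweeps, max error vs exact solution: {np.max(np.abs(u - exact)):.2e}")
```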
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
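For readers unfamiliar with this family of methods, the following sketch illustrates the core least-squares idea in a much simpler setting: the method of fundamental solutions for sound-soft scattering of a plane wave by the unit disc. It is not MPSpack and does not use the corner-adapted basis of the paper; the wavenumber, source radius and point counts are arbitrary demonstration choices.

```python
# Not MPSpack and not the paper's corner-adapted basis: a minimal method-of-
# fundamental-solutions least-squares sketch for sound-soft scattering of a
# plane wave by the unit disc, with assumed wavenumber and point counts.
import numpy as np
from scipy.special import hankel1

k = 10.0                                    # wavenumber (assumed)
d = np.array([1.0, 0.0])                    # incident plane-wave direction

M, N = 400, 120                             # boundary collocation / source point counts
t_b = 2 * np.pi * np.arange(M) / M
t_s = 2 * np.pi * np.arange(N) / N
bdry = np.column_stack([np.cos(t_b), np.sin(t_b)])          # unit circle
src = 0.7 * np.column_stack([np.cos(t_s), np.sin(t_s)])     # charge points inside it

def phi(x, y):
    """Fundamental solution of the 2-D Helmholtz equation, evaluated pairwise."""
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
    return 0.25j * hankel1(0, k * r)

u_inc = lambda x: np.exp(1j * k * (x @ d))

# Least-squares fit of the sound-soft boundary condition  u_scat = -u_inc
A = phi(bdry, src)
coef, *_ = np.linalg.lstsq(A, -u_inc(bdry), rcond=None)

# Measure the boundary residual on a finer, independent set of test points
t_t = 2 * np.pi * (np.arange(1000) + 0.5) / 1000
test = np.column_stack([np.cos(t_t), np.sin(t_t)])
resid = phi(test, src) @ coef + u_inc(test)
print("max boundary residual:", np.abs(resid).max())

# Evaluate the total field at one exterior point
x0 = np.array([[2.0, 0.5]])
print("total field at (2.0, 0.5):", (phi(x0, src) @ coef + u_inc(x0))[0])
```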
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM easier to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to lower memory requirements for solving the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
Abstract:
The loosely coupled and dynamic nature of web services architectures has many benefits, but also leads to an increased vulnerability to denial of service attacks. While many papers have surveyed and described these vulnerabilities, they are often theoretical, lack experimental data to validate them, and assume an obsolete state of web services technologies. This paper describes experiments involving several denial of service vulnerabilities in well-known web services platforms, including Java Metro, Apache Axis, and Microsoft .NET. The results both confirm and refute the presence of some of the most well-known vulnerabilities in web services technologies. Specifically, major web services platforms appear to cope well with attacks that target memory exhaustion. However, attacks targeting CPU-time exhaustion are still effective, regardless of the victim’s platform.
Abstract:
In Australia, railway systems play a vital role in transporting the sugarcane crop from farms to mills. The sugarcane transport system is very complex and uses daily schedules, consisting of a set of locomotive runs, to satisfy the requirements of the mill and harvesters. The total cost of sugarcane transport operations is very high; over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. Efficient schedules for sugarcane transport can reduce the cost and limit the negative effects that this system can have on the raw sugar production system. There are several benefits to formulating the train scheduling problem as a blocking parallel-machine job shop scheduling (BPMJSS) problem: preventing two trains from occupying one section at the same time; keeping the train activities (operations) in sequence during each run (trip) by applying precedence constraints; passing trains through a section in the correct order (priorities of passing trains) by applying disjunctive constraints; and easing train crossings by resolving rail conflicts through blocking constraints and parallel machine scheduling. Therefore, the sugarcane rail operations are formulated as a BPMJSS problem. Mixed integer programming and constraint programming approaches are used to describe the BPMJSS problem. The model is solved by the integration of constraint programming, mixed integer programming and search techniques. The optimality performance is tested with the Optimization Programming Language (OPL) and CPLEX software on small and large instances based on specific criteria. A real-life problem is used to verify and validate the approach. Constructive heuristics and new metaheuristics, including simulated annealing and tabu search, are proposed to solve this complex and NP-hard scheduling problem and produce a more efficient scheduling system. Innovative hybrid and hyper metaheuristic techniques are developed and coded in C# to improve solution quality and CPU time. Hybrid techniques integrate heuristic and metaheuristic techniques consecutively, while hyper techniques fully integrate different metaheuristic techniques, heuristic techniques, or both.
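As a generic illustration of the "consecutive" hybrid idea described above (run one metaheuristic, then refine its output with another), the sketch below applies simulated annealing followed by a short tabu search to a toy single-machine sequencing objective. The objective, neighbourhood and all parameters are illustrative assumptions and have nothing to do with the actual BPMJSS model or the C# implementation.

```python
# Toy sketch of the "consecutive" hybrid: simulated annealing first, then a short
# tabu search refining its best permutation. Objective and parameters are invented.
import math
import random

random.seed(0)
proc = [4, 2, 7, 3, 9, 5, 1, 6]          # processing times (toy data)
weight = [2, 1, 3, 1, 4, 2, 1, 3]        # job weights (toy data)

def cost(seq):
    """Total weighted completion time of a job sequence on one machine."""
    t = total = 0
    for j in seq:
        t += proc[j]
        total += weight[j] * t
    return total

def random_swap(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    nbr = list(seq)
    nbr[i], nbr[j] = nbr[j], nbr[i]
    return nbr, (i, j)

def simulated_annealing(seq, temp=50.0, cooling=0.995, iters=2000):
    best, cur = list(seq), list(seq)
    for _ in range(iters):
        nbr, _ = random_swap(cur)
        delta = cost(nbr) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = nbr
            if cost(cur) < cost(best):
                best = list(cur)
        temp *= cooling
    return best

def tabu_search(seq, iters=200, tenure=10, samples=30):
    best, cur = list(seq), list(seq)
    tabu = {}                            # move -> iteration until which it is tabu
    for it in range(iters):
        candidates = sorted((random_swap(cur) for _ in range(samples)),
                            key=lambda c: cost(c[0]))
        for nbr, move in candidates:     # best non-tabu move, with aspiration
            if tabu.get(move, -1) < it or cost(nbr) < cost(best):
                cur, tabu[move] = nbr, it + tenure
                break
        if cost(cur) < cost(best):
            best = list(cur)
    return best

start = list(range(len(proc)))
sa_best = simulated_annealing(start)
hybrid_best = tabu_search(sa_best)       # hand SA's result to tabu search
print("SA cost:", cost(sa_best), "-> hybrid cost:", cost(hybrid_best))
```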
Abstract:
This paper presents an efficient method using the system state sampling technique in Monte Carlo simulation for reliability evaluation of multi-area power systems at Hierarchical Level One (HLI). System state sampling is one of the common methods used in Monte Carlo simulation; however, the CPU time and memory requirements can be a problem with this method. A combination of the analytical and Monte Carlo methods, known as the hybrid method and presented in this paper, can enhance the efficiency of the solution. The load model in this study can be incorporated either by sampling or by enumeration. Both cases are examined in this paper by applying the methods to the Roy Billinton Test System (RBTS).
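A minimal sketch of what system state sampling looks like in practice is given below: each generating unit's state is drawn from its forced outage rate and loss-of-load indices are estimated from the sampled states. The unit data, load level and sample size are invented for demonstration; they are not the RBTS data, and the sketch ignores the multi-area and hybrid analytical aspects of the paper.

```python
# Minimal illustration of system-state sampling with invented single-area data.
import numpy as np

rng = np.random.default_rng(42)

capacity = np.array([40, 40, 20, 20, 10, 10], dtype=float)   # MW per unit (assumed)
for_rate = np.array([0.03, 0.03, 0.02, 0.02, 0.01, 0.01])    # forced outage rates (assumed)
load = 110.0                                                  # MW, fixed load level (assumed)
n_samples = 200_000

# State sampling: a unit is unavailable when its uniform draw falls below its FOR
u = rng.random((n_samples, capacity.size))
available = (u >= for_rate).astype(float)
gen = available @ capacity                    # available capacity in each sampled state

shortfall = np.maximum(load - gen, 0.0)
lolp = np.mean(shortfall > 0)                 # loss-of-load probability
eens = np.mean(shortfall) * 8760.0            # expected energy not supplied, MWh/year

print(f"LOLP ~ {lolp:.5f}")
print(f"EENS ~ {eens:.1f} MWh/year")
```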
Abstract:
Earthwork planning has been considered in this article and a generic block partitioning and modelling approach has been devised to provide strategic plans of various levels of detail. Conceptually this approach is more accurate and comprehensive than others, for instance those that are section based. In response to environmental concerns, the metrics for decision making were fuel consumption and emissions. Haulage distance and gradient are also included, as they are important components of these metrics. Advantageously, the fuel consumption metric is generic and captures the physical difficulty of travelling over inclines of different gradients in a way that is consistent across all hauling vehicles. For validation, the proposed models and techniques have been applied to a real-world road project. The numerical investigations have demonstrated that the models can be solved with relatively little CPU time. The proposed block models also result in solutions of superior quality, i.e. they have reduced fuel consumption and cost. Furthermore, the plans differ considerably from those based solely upon a distance-based metric, demonstrating a need for industry to reflect upon its current practices.
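A back-of-envelope sketch of a gradient-aware haulage metric of the kind argued for above is shown below. The rolling-resistance coefficient, drivetrain efficiency and diesel energy density are generic textbook-style assumptions, not the values or the model used in the article; the point is only that the same tonnage and distance yield different fuel figures at different grades.

```python
# Illustrative gradient-aware fuel estimate; all coefficients are assumptions.
import math

G = 9.81                # m/s^2
C_RR = 0.06             # rolling resistance coefficient (assumed, haul-road gravel)
ETA = 0.30              # engine + drivetrain efficiency (assumed)
DIESEL_MJ_PER_L = 36.0  # approximate energy density of diesel

def fuel_litres(mass_t, distance_m, grade_percent):
    """Fuel to haul mass_t tonnes over distance_m metres at the given grade."""
    theta = math.atan(grade_percent / 100.0)
    force = mass_t * 1000.0 * G * (math.sin(theta) + C_RR * math.cos(theta))
    energy_mj = max(force, 0.0) * distance_m / 1e6   # ignore downhill energy recovery
    return energy_mj / (ETA * DIESEL_MJ_PER_L)

# Same tonnage and distance, different gradients: the metric separates hauls that a
# purely distance-based cost would treat as identical.
for grade in (-4.0, 0.0, 4.0, 8.0):
    print(f"grade {grade:+.0f}%: {fuel_litres(40.0, 500.0, grade):6.2f} L")
```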
Abstract:
Mixed integer programming and parallel-machine job shop scheduling are used to solve the sugarcane rail transport scheduling problem. Constructive heuristics and metaheuristics were developed to produce a more efficient scheduling system and so reduce operating costs. The solutions were tested on small and large problems. High-quality solutions and improved CPU time result from new hybrid techniques that integrate simulated annealing and tabu search in different ways.
Abstract:
The implementation of a three-phase sinusoidal pulse-width-modulated inverter control strategy using a microprocessor is discussed in this paper. To save CPU time, the DMA technique is used for transferring the switching pattern from memory to the pulse amplifier and isolation circuits of the individual thyristors in the inverter bridge. The method of controlling both voltage and frequency is also discussed.
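The sketch below shows how a sinusoidal PWM switching pattern can be precomputed as a table so that a DMA channel, rather than the CPU, streams it to the gate-drive circuits, which is the general idea described above. The carrier ratio, modulation index and table length are illustrative choices and do not correspond to the original microprocessor implementation.

```python
# Sketch of precomputing a sinusoidal PWM switching table that a DMA channel could
# stream to the gate drives; carrier ratio, modulation index and length are assumed.
import numpy as np

CARRIER_RATIO = 21     # carrier cycles per fundamental cycle (assumed)
M_INDEX = 0.8          # amplitude modulation index (assumed)
SAMPLES = 2100         # table entries per fundamental period (assumed)

t = np.arange(SAMPLES) / SAMPLES                                 # one normalised period
carrier = 4.0 * np.abs((CARRIER_RATIO * t) % 1.0 - 0.5) - 1.0    # triangle wave in [-1, 1]

# Three sinusoidal references displaced by 120 degrees, each compared with the carrier
pattern = np.zeros((SAMPLES, 3), dtype=np.uint8)
for phase in range(3):
    ref = M_INDEX * np.sin(2.0 * np.pi * (t - phase / 3.0))
    pattern[:, phase] = (ref >= carrier).astype(np.uint8)

# Pack the three phase legs into one byte per sample period for the DMA transfer
packed = pattern[:, 0] | (pattern[:, 1] << 1) | (pattern[:, 2] << 2)
print("first 16 table entries:", packed[:16])
print("phase-A duty over the period: %.3f" % pattern[:, 0].mean())
```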
Abstract:
This paper presents a study of the effect of uncertain material parameters on wave propagation responses in metallic beam structures. Special effort is made to quantify the effect of uncertainty on the wave propagation responses at high frequencies. Both the modulus of elasticity and the density are considered uncertain. The analysis is performed using a Monte Carlo simulation (MCS) under the spectral finite element method (SEM). The randomness in the material properties is characterized by three different distributions, namely the normal, Weibull and extreme value distributions, and their effect on wave propagation in beams is investigated. The numerical study shows that the CPU time taken for MCS under SEM is about 48 times less than for MCS under a conventional one-dimensional finite element environment for 50 kHz loading. The numerical results investigate the effects of material uncertainties on high-frequency modes. A study is performed on the use of different beam theories and their uncertain responses to a dynamic impulse load. These studies show that even for a small coefficient of variation, significant changes in these responses are noticed. A number of interesting results are presented, showing the true effects of uncertainty on the response to a dynamic impulse load.
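As a small illustration of the three input distributions mentioned above, the sketch below samples the modulus of elasticity and the density from normal, Weibull and extreme-value (Gumbel) models with a common mean and coefficient of variation, and propagates the spread through the longitudinal wave speed c = sqrt(E/rho). The aluminium-like means, the 5% COV and the closed-form output are illustrative assumptions; the paper propagates the samples through a spectral finite element model instead.

```python
# Illustrative sampling of uncertain E and rho from three distributions (assumed data).
import math
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
E_MEAN, RHO_MEAN, COV = 71e9, 2700.0, 0.05       # aluminium-like means, 5% COV (assumed)

def sample(dist, mean, cov, n):
    std = cov * mean
    if dist == "normal":
        return rng.normal(mean, std, n)
    if dist == "weibull":
        k = 25.6                                  # shape giving roughly a 5% COV
        return (mean / math.gamma(1.0 + 1.0 / k)) * rng.weibull(k, n)
    if dist == "gumbel":                          # type-I extreme value
        beta = std * math.sqrt(6.0) / math.pi
        return rng.gumbel(mean - 0.5772156649 * beta, beta, n)
    raise ValueError(dist)

for dist in ("normal", "weibull", "gumbel"):
    E = sample(dist, E_MEAN, COV, N)
    rho = sample(dist, RHO_MEAN, COV, N)
    c = np.sqrt(E / rho)                          # longitudinal wave speed
    print(f"{dist:8s} mean c = {c.mean():7.1f} m/s   COV of c = {c.std() / c.mean():.4f}")
```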
Abstract:
Six models (simulators) are formulated and developed with all possible combinations of pressure and saturation of the phases as primary variables. A comparative study of the six simulators with two numerical methods, the conventional simultaneous method and the modified sequential method, is carried out. The results of the numerical models are compared with laboratory experimental results to study the accuracy of the models, especially in heterogeneous porous media. From the study it is observed that the simulator using the pressure and saturation of the wetting fluid (PW, SW formulation) is the best among the models tested. Many simulators with the nonwetting phase as one of the primary variables did not converge when used with the simultaneous method. Based on simulator 1 (PW, SW formulation), a comparison of different solution methods, namely the simultaneous, modified sequential and adaptive solution modified sequential methods, is carried out on four test problems, including heterogeneous and randomly heterogeneous problems. It is found that the modified sequential and adaptive solution modified sequential methods can halve the memory requirement, and the CPU time they require is much lower than that of the simultaneous method. It is also found that the simulator with PNW and PW as primary variables, which had convergence problems with the simultaneous method, converged with both the modified sequential method and the adaptive solution modified sequential method. The present study indicates that the pressure and saturation formulation combined with the adaptive solution modified sequential method is the best among the different simulators and methods tested.
Abstract:
A theoretical expression for the vertical profile of horizontal velocity in terms of its depth average is derived based on oscillatory boundary layer theory and estuarine flow characteristics. The derived theoretical profile is then incorporated into a vertical quasi-two-dimensional model, which proves advantageous in its clearer physical basis and lower CPU time demand. To validate the proposed model, the calculated results are compared with field data from the Yangtze River Estuary, exhibiting good agreement with observations. The proposed quasi-two-dimensional vertical model is used to study the mixing process, in particular the dependence of the salinity distribution and salt front strength on runoff and tides in estuaries.
Abstract:
A block-based motion estimation technique is proposed which permits a less general segmentation to be performed using an efficient deterministic algorithm. Applied to image pairs from the Flower Garden and Table Tennis sequences, the algorithm successfully localizes motion discontinuities and detects uncovered regions. The algorithm is implemented in C on a Sun SPARCstation 20. The gradient-based motion estimation required 28.8 s of CPU time, and 500 iterations of the segmentation algorithm required 32.6 s.
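For context, the sketch below implements a plain exhaustive-search block matcher with a sum-of-absolute-differences criterion on a synthetic image pair; it is a generic block-based estimator, not the gradient-based estimation or segmentation algorithm evaluated in the paper, and the block size, search range and test images are arbitrary.

```python
# Generic exhaustive-search block matching (SAD) on a synthetic image pair.
import numpy as np

rng = np.random.default_rng(3)
H, W, BLOCK, SEARCH = 64, 64, 8, 4

frame0 = rng.random((H, W))
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))   # known global motion (dy=2, dx=-3)

def block_match(f0, f1, by, bx):
    """Return the (dy, dx) that minimises SAD for the block at (by, bx) in f0."""
    block = f0[by:by + BLOCK, bx:bx + BLOCK]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + BLOCK > f1.shape[0] or x + BLOCK > f1.shape[1]:
                continue
            sad = np.abs(block - f1[y:y + BLOCK, x:x + BLOCK]).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

vectors = [block_match(frame0, frame1, by, bx)
           for by in range(0, H, BLOCK) for bx in range(0, W, BLOCK)]
dy, dx = np.median(np.array(vectors), axis=0)
print(f"median motion vector: dy={dy:.0f}, dx={dx:.0f} (true shift was 2, -3)")
```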
Abstract:
This paper is concerned with the modelling of strategic interactions between the human driver and the vehicle active front steering (AFS) controller in a path-following task where the two controllers hold different target paths. The work is aimed at extending the use of mathematical models in representing driver steering behaviour in complicated driving situations. Two game theoretic approaches, namely linear quadratic game and non-cooperative model predictive control (non-cooperative MPC), are used for developing the driver-AFS interactive steering control model. For each approach, the open-loop Nash steering control solution is derived; the influences of the path-following weights, preview and control horizons, driver time delay and arm neuromuscular system (NMS) dynamics are investigated, and the CPU time consumed is recorded. It is found that the two approaches give identical time histories as well as control gains, while the non-cooperative MPC method uses much less CPU time. Specifically, it is observed that the introduction of weight on the integral of vehicle lateral displacement error helps to eliminate the steady-state path-following error; the increase in preview horizon and NMS natural frequency and the decline in time delay and NMS damping ratio improve the path-following accuracy. © 2013 Copyright Taylor and Francis Group, LLC.
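As a simplified, single-controller illustration of how a finite preview/control horizon shapes the resulting feedback, the sketch below solves a finite-horizon discrete LQR by backward Riccati recursion on a double-integrator lateral model and reports the first-step gain for several horizon lengths. The model, weights and horizons are invented, and the sketch does not attempt the two-player open-loop Nash solution studied in the paper.

```python
# Single-controller simplification: finite-horizon discrete LQR via backward Riccati
# recursion on an assumed double-integrator lateral model (not the paper's Nash game).
import numpy as np

DT = 0.05
A = np.array([[1.0, DT], [0.0, 1.0]])    # state: [lateral error, lateral error rate]
B = np.array([[0.0], [DT]])
Q = np.diag([10.0, 1.0])                 # path-following weights (assumed)
R = np.array([[0.1]])                    # steering-effort weight (assumed)

def first_step_gain(horizon):
    """Backward Riccati recursion; return the gain applied at the current step."""
    P = Q.copy()
    K = np.zeros((1, 2))
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

for N in (5, 20, 80):                    # longer horizons approach the steady-state gain
    K = first_step_gain(N)
    print(f"horizon {N:3d}: K = [{K[0, 0]:7.3f}, {K[0, 1]:7.3f}]")
```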