463 results for Computational time
at Queensland University of Technology - ePrints Archive
Abstract:
Currently, finite element analyses are usually done by means of commercial software tools. Accuracy of analysis and computational time are two important factors in the efficiency of these tools. This paper studies the parameters that affect the computational time and accuracy of finite element analyses performed with ANSYS and provides guidelines for users of this software when they study the deformation of orthopedic bone plates or similar cases. It is not a fundamental scientific study; it only shares the authors' findings about structural analysis with ANSYS Workbench. It gives readers an idea of how to improve the performance of the software and avoid its pitfalls. The solutions provided in this paper are not the only possible solutions to the problems discussed, and in similar cases other solutions exist that are not given here. The parameters of solution method, material model, geometric model, mesh configuration, number of analysis steps, program-controlled parameters and computer settings are discussed thoroughly in this paper.
Abstract:
This paper illustrates the use of the finite element (FE) technique to investigate the behaviour of laminated glass (LG) panels under blast loads. Two- and three-dimensional (2D and 3D) modelling approaches available in the LS-DYNA FE code to model LG panels are presented. Results from the FE analysis for mid-span deflection and principal stresses compared well with those from large-deflection plate theory. The FE models are further validated using the results from a free-field blast test on a LG panel. It is evident that both 2D and 3D LG models predict the experimental results with reasonable accuracy. The 3D LG models give slightly more accurate results but require considerably more computational time than the 2D LG models.
Abstract:
This paper describes a novel framework for facial expression recognition from still images that selects, optimizes and fuses ‘salient’ Gabor feature layers to recognize six universal facial expressions using the K nearest neighbor classifier. Recognition comparisons with the all-layer approach on the JAFFE and Cohn-Kanade (CK) databases confirm that using ‘salient’ Gabor feature layers with optimized sizes can achieve better recognition performance and dramatically reduce computational time. Moreover, comparisons with state-of-the-art performance demonstrate the effectiveness of our approach.
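As a hedged illustration of the ingredients named above (Gabor feature layers and a K nearest neighbor classifier), the sketch below extracts Gabor magnitude features from face images and classifies them with KNN; the image sizes, filter frequencies and labels are placeholders, and the ‘salient’ layer selection and optimization steps of the paper are not reproduced.

```python
# Minimal sketch: Gabor magnitude features + KNN classification.
# Not the paper's salient-layer selection; data and parameters are hypothetical.
import numpy as np
from skimage.filters import gabor
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Stack Gabor magnitude responses over scales and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

# Hypothetical data: 32x32 grey-scale face crops with expression labels 0..5.
rng = np.random.default_rng(0)
train_imgs = rng.random((30, 32, 32))
train_labels = rng.integers(0, 6, size=30)
test_img = rng.random((32, 32))

X_train = np.array([gabor_features(im) for im in train_imgs])
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, train_labels)
print(knn.predict([gabor_features(test_img)]))
```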
Abstract:
A number of game strategies have been developed in past decades and used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimization problems. In addition, research in multiobjective and multidisciplinary design optimization has focused on developing robust and efficient optimization methods that can produce a set of high-quality solutions with less computational time. In this paper, two optimization techniques are considered: the first uses multifidelity hierarchical Pareto-optimality; the second combines the game strategies of Nash equilibrium and Pareto-optimality. This paper shows how game strategies can be coupled to multiobjective evolutionary algorithms and robust design techniques to produce a set of high-quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid and non-Hybrid Game strategies are demonstrated.
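As a hedged illustration of the Pareto-optimality component used by both methods, the sketch below filters a set of candidate designs down to its non-dominated subset; the two-objective candidate data are randomly generated placeholders.

```python
# Minimal sketch: extract the Pareto-optimal (non-dominated) subset of candidate
# designs evaluated on two minimisation objectives. Candidate points are placeholders.
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n, m) array of objective values
    (all objectives to be minimised)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

designs = np.random.default_rng(3).random((50, 2))   # 50 candidates, 2 objectives
print(pareto_front(designs))
```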
Abstract:
Feature extraction and selection are critical processes in developing facial expression recognition (FER) systems. While many algorithms have been proposed for these processes, a direct comparison between texture, geometry and their fusion, as well as between multiple selection algorithms, has not been reported for spontaneous FER. This paper addresses this issue by proposing a unified framework for a comparative study of the widely used texture (LBP, Gabor and SIFT) and geometric (FAP) features, using the Adaboost, mRMR and SVM feature selection algorithms. Our experiments on the Feedtum and NVIE databases demonstrate the benefits of fusing geometric and texture features, where SIFT+FAP shows the best performance, while mRMR outperforms Adaboost and SVM. In terms of computational time, LBP and Gabor perform better than SIFT. The optimal combination of SIFT+FAP+mRMR also exhibits state-of-the-art performance.
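As a hedged illustration of feature-level fusion followed by selection, the sketch below concatenates hypothetical texture and geometric descriptors and ranks the fused features by mutual information, a relevance-only simplification of mRMR (the redundancy term is omitted), before training an SVM.

```python
# Minimal sketch: fuse texture + geometric features, rank by mutual information
# (a simplified, relevance-only stand-in for mRMR), then train an SVM.
# All data and dimensions are hypothetical.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples = 200
texture = rng.random((n_samples, 60))    # e.g. LBP/Gabor/SIFT-style descriptors
geometry = rng.random((n_samples, 20))   # e.g. FAP-style geometric features
labels = rng.integers(0, 6, size=n_samples)

fused = np.hstack([texture, geometry])           # feature-level fusion
scores = mutual_info_classif(fused, labels)      # relevance of each feature
top = np.argsort(scores)[::-1][:30]              # keep the 30 most relevant
clf = SVC(kernel="rbf").fit(fused[:, top], labels)
print(clf.score(fused[:, top], labels))
```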
Abstract:
Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or with generating units that have many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can pose difficulties for the assessment of such systems in many studies. A clustering procedure using the nearest centroid sorting method was previously applied to the IEEE-RTS load model, and proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method to include the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load and loss of energy expectations (LOLE, LOEE) are used as indicators to evaluate the application.
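As a hedged illustration of the underlying HLI calculation, the sketch below builds a capacity outage probability table by convolving two-state units and accumulates LOLE and LOEE over an hourly load series; the unit data, forced outage rates and load profile are toy values, not the IEEE-RTS, and the clustering step itself is not shown.

```python
# Minimal sketch: capacity outage probability table by convolution of two-state
# units, then LOLE/LOEE against a load series. All values are illustrative.
import numpy as np

units = [(100, 0.04), (100, 0.04), (200, 0.08)]   # (capacity MW, forced outage rate)
total_cap = sum(c for c, _ in units)

# prob[k] = probability that exactly k MW of capacity is on outage.
prob = np.zeros(total_cap + 1)
prob[0] = 1.0
for cap, q in units:
    new = prob * (1 - q)                       # unit available
    new[cap:] += prob[:len(prob) - cap] * q    # unit on outage: shift by its capacity
    prob = new

hourly_load = np.array([280, 320, 350, 300])           # toy hourly load (MW)
available = total_cap - np.arange(total_cap + 1)       # available capacity per state
lole = sum(prob[available < L].sum() for L in hourly_load)                      # hours
loee = sum((prob * np.clip(L - available, 0, None)).sum() for L in hourly_load)  # MWh
print(lole, loee)
```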
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions.

Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step, but rather require the computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z - 1)/z, A ∈ R^(n×n) and b ∈ R^n.

For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations, where typical dual-scale mechanisms occur.
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
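As a hedged illustration of the φ(A)b computation at the heart of the exponential Euler method described in the first part above, the sketch below performs one EEM step u_{n+1} = u_n + h φ(hJ) F(u_n) using a dense matrix exponential and an augmented-matrix identity; large problems would use Krylov subspace approximation instead, and the test problem (a method-of-lines heat equation) is arbitrary, not TransPore.

```python
# Minimal sketch of one exponential Euler step, not the thesis' variable-stepsize
# Krylov algorithm. phi(hJ)*F(u) is computed here with a dense matrix exponential.
import numpy as np
from scipy.linalg import expm

def phi_times_vector(M, v):
    """Return phi(M) @ v, phi(z) = (e^z - 1)/z, via the augmented-matrix identity
    expm([[M, v], [0, 0]]) = [[expm(M), phi(M) v], [0, 1]]."""
    n = M.shape[0]
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = M
    aug[:n, n] = v
    return expm(aug)[:n, n]

def exponential_euler_step(u, h, F, J):
    """One EEM step for u' = F(u), with Jacobian J evaluated at u."""
    return u + h * phi_times_vector(h * J(u), F(u))

# Toy stiff example: method-of-lines heat equation u' = A u (constant Jacobian).
n = 50
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])
u = exponential_euler_step(u, 0.01, lambda x: A @ x, lambda x: A)
print(u.max())
```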
Abstract:
The railway crew scheduling problem is the process of allocating train services to crew duties based on the published train timetable while satisfying operational and contractual requirements. The problem is restricted by many constraints and belongs to the class of NP-hard problems. In this paper, we develop a mathematical model for railway crew scheduling with the aim of minimising the number of crew duties by reducing idle transition times. Duties are generated by arranging scheduled trips over a set of duties and sequentially ordering the set of trips within each duty. The optimisation model includes the time period of relief opportunities within which a train crew can be relieved at any relief point; existing models and algorithms usually only consider relieving a crew at the beginning of the interval of relief opportunities, which may be impractical. This model involves a large number of decision variables and constraints, and therefore a constructive heuristic hybridised with a simulated annealing search algorithm is applied to yield an optimal or near-optimal schedule. The performance of the proposed algorithms is evaluated through computational experiments on randomly generated test instances. The results show that the proposed approaches obtain near-optimal solutions in reasonable computational time for large-sized problems.
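As a hedged illustration of the simulated annealing component only, the sketch below packs timetabled trips into duties with a greedy decoder and improves the trip ordering by annealing; the trip times, maximum duty span and cost weights are hypothetical, and the paper's constructive heuristic and full constraint set (including relief opportunities) are not reproduced.

```python
# Minimal, generic simulated-annealing sketch for packing trips into duties.
# Not the paper's hybrid heuristic; all data and weights are hypothetical.
import math
import random

trips = [(0, 2), (1, 3), (3, 5), (4, 6), (6, 8), (7, 9)]  # (start, end) hours
MAX_SPAN = 8  # maximum elapsed time a single duty may cover

def decode(order):
    """Greedily pack trips, in the given order, into feasible duties."""
    duties, idle = [], 0.0
    for t in order:
        start, end = trips[t]
        placed = False
        for duty in duties:
            last_end = trips[duty[-1]][1]
            first_start = trips[duty[0]][0]
            if start >= last_end and end - first_start <= MAX_SPAN:
                idle += start - last_end
                duty.append(t)
                placed = True
                break
        if not placed:
            duties.append([t])
    return duties, idle

def cost(order):
    duties, idle = decode(order)
    return 10.0 * len(duties) + idle   # weight duty count over idle time

order = list(range(len(trips)))
best, best_cost, T = order[:], cost(order), 5.0
while T > 0.01:
    cand = order[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]          # swap two trips
    delta = cost(cand) - cost(order)
    if delta < 0 or random.random() < math.exp(-delta / T):
        order = cand
        if cost(order) < best_cost:
            best, best_cost = order[:], cost(order)
    T *= 0.99                                     # geometric cooling
print(decode(best), best_cost)
```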
Abstract:
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate prediction of pollutant concentrations. This paper focuses on an innovative method for daily air pollution prediction that combines a Support Vector Machine (SVM) as the predictor with Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models, and daily CO concentrations have been predicted based on the same four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It is concluded that the errors decrease after size reduction, and the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
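As a hedged illustration of the hybrid idea, the sketch below reduces a predictor matrix with partial least squares and feeds the PLS scores to a support vector regressor; the data and feature construction are synthetic placeholders, not the Rey station measurements.

```python
# Minimal sketch: PLS dimension reduction followed by SVR regression.
# All data are synthetic placeholders for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((500, 12))                 # e.g. lagged CO and meteorological inputs
y = X[:, 0] * 2 + X[:, 3] + 0.1 * rng.standard_normal(500)   # synthetic target

pls = PLSRegression(n_components=4).fit(X, y)
scores = pls.transform(X)                 # reduced feature set
model = SVR(kernel="rbf", C=10.0).fit(scores, y)

x_new = rng.random((1, 12))
print(model.predict(pls.transform(x_new)))
```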
Abstract:
The use of graphical processing unit (GPU) parallel processing is becoming a part of mainstream statistical practice. The reliance of Bayesian statistics on Markov Chain Monte Carlo (MCMC) methods makes the applicability of parallel processing not immediately obvious. It is illustrated that there are substantial gains in computational time for MCMC and other methods of evaluation by computing the likelihood using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010 and a likelihood based on the explicit convolution of two negative-binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
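As a hedged illustration of the likelihood structure rather than its GPU implementation, the sketch below evaluates, on the CPU, the log-likelihood of counts modelled as the sum of two independent negative-binomial variables via explicit convolution; parameter values and data are hypothetical, and the paper's speed-up comes from evaluating this kind of likelihood in parallel on a GPU.

```python
# Minimal CPU sketch of a likelihood built from the explicit convolution of two
# negative-binomial processes. Parameters and data are hypothetical.
import numpy as np
from scipy.stats import nbinom

def convolved_nb_logpmf(k, r1, p1, r2, p2):
    """log P(X + Y = k) for X ~ NB(r1, p1), Y ~ NB(r2, p2), by direct convolution."""
    j = np.arange(k + 1)
    terms = nbinom.pmf(j, r1, p1) * nbinom.pmf(k - j, r2, p2)
    return np.log(terms.sum())

def log_likelihood(data, r1, p1, r2, p2):
    return sum(convolved_nb_logpmf(k, r1, p1, r2, p2) for k in data)

counts = np.array([3, 0, 5, 2, 7, 1])          # toy event counts per period
print(log_likelihood(counts, 2.0, 0.4, 1.5, 0.6))
```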
Abstract:
Railway capacity determination and expansion are very important topics. In prior research, however, the competition between different entities, such as train services and train types, on different network corridors has been ignored, poorly modelled, or else assumed to be static. In response, a comprehensive set of multi-objective models has been formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of railway networks as the most equitable solution according to a clearly defined set of competing objectives. The models also perform a sensitivity analysis of capacity with respect to those competing objectives. The models have been extensively tested on a case study and their significant worth is shown. The models were solved using a variety of techniques; an adaptive ε-constraint method was shown to be the most effective. In order to identify only the best solution, a Simulated Annealing meta-heuristic was implemented and tested; a linearisation technique based upon separable programming was also developed and shown to be superior in terms of solution quality, but far less so in terms of computational time.
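As a hedged illustration of the ε-constraint idea in its plain (non-adaptive) form, the sketch below traces a trade-off frontier for a toy two-objective linear programme in which two train types share track occupancy; all coefficients are hypothetical and unrelated to the paper's network models.

```python
# Minimal sketch of the plain epsilon-constraint method on a toy two-objective LP.
# Coefficients are hypothetical, not the paper's railway models.
import numpy as np
from scipy.optimize import linprog

# maximise f1 = x1 (type-1 services) while requiring f2 = x2 >= eps,
# subject to shared occupancy 2*x1 + 3*x2 <= 12, x >= 0.
pareto = []
for eps in np.linspace(0, 4, 9):
    res = linprog(c=[-1, 0],                      # minimise -x1
                  A_ub=[[2, 3], [0, -1]],         # occupancy; -x2 <= -eps
                  b_ub=[12, -eps],
                  bounds=[(0, None), (0, None)])
    if res.success:
        pareto.append((res.x[0], res.x[1]))
print(pareto)   # sampled points on the capacity trade-off frontier
```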
Abstract:
This paper investigates several competing procedures for computing the prices of vanilla European options, such as puts, calls and binaries, in which the underlying model has a characteristic function that is known in semi-closed form. The algorithms investigated here are the half-range Fourier cosine series, the half-range Fourier sine series and the full-range Fourier series. Their performance is assessed in simulation experiments in which an analytical solution is available, and also for a simple affine model of stochastic volatility in which there is no closed-form solution. The results suggest that the half-range sine series approximation is the least effective of the three proposed algorithms. It is rather more difficult to distinguish between the performance of the half-range cosine series and the full-range Fourier series. However, there are two clear differences. First, when the interval over which the density is approximated is relatively large, the full-range Fourier series is at least as good as the half-range Fourier cosine series, and outperforms the latter in pricing out-of-the-money call options, in particular those with maturities of three months or less. Second, the computational time required by the half-range Fourier cosine series is uniformly longer than that required by the full-range Fourier series for an interval of fixed length. Taken together, these two conclusions make a case for pricing options using a full-range Fourier series, as opposed to a half-range Fourier cosine series, if a large number of options are to be priced in as short a time as possible.
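As a hedged illustration of the half-range Fourier cosine approach, the sketch below recovers the density of the log-price from its characteristic function on a truncated interval and integrates the discounted payoff numerically; the Black-Scholes characteristic function is used so the result can be checked against the closed form, and this is not the paper's pricing code.

```python
# Minimal sketch: half-range Fourier cosine recovery of the log-price density
# from its characteristic function, then numerical payoff integration.
# Black-Scholes is used only so the answer can be verified; illustration only.
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
mu = np.log(S0) + (r - 0.5 * sigma**2) * T

def cf(u):                       # characteristic function of x = ln(S_T)
    return np.exp(1j * u * mu - 0.5 * sigma**2 * T * u**2)

a, b = mu - 10 * sigma * np.sqrt(T), mu + 10 * sigma * np.sqrt(T)
N = 256
k = np.arange(N)
u = k * np.pi / (b - a)
A = (cf(u) * np.exp(-1j * u * a)).real
A[0] *= 0.5                      # first cosine coefficient is halved

x = np.linspace(a, b, 2001)
density = (2.0 / (b - a)) * (A[:, None] * np.cos(np.outer(u, x - a))).sum(axis=0)
payoff = np.maximum(np.exp(x) - K, 0.0)
price = np.exp(-r * T) * np.trapz(payoff * density, x)
print(price)                     # close to the Black-Scholes call value (about 10.45)
```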
Abstract:
Game strategies have been developed in past decades and used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimisation problems. In addition, research on Multi-Objective (MO) and Multidisciplinary Design Optimisation (MDO) has focused on developing robust and efficient optimisation methods to produce quality solutions with less computational time. In this paper, a new optimisation method, a Hybrid Game strategy for MO problems, is introduced and compared to a CMA-ES-based optimisation approach. Numerical results obtained from both optimisation methods are compared in terms of computational expense and model quality. The benefits of using Game strategies are demonstrated.
Abstract:
Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a relatively new research field, and the theoretical analysis of the associated numerical methods is not yet fully developed. In this paper an explicit conservative difference approximation (ECDA) for the TFDE is proposed. We give a detailed analysis of this ECDA and generate discrete random walk models suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for the TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
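As a hedged illustration of this class of scheme (not necessarily the authors' ECDA), the sketch below advances the TFDE D_t^α u = K u_xx with an explicit scheme based on the L1 approximation of the Caputo derivative; the grid, α and diffusivity are arbitrary, and the step sizes are kept small because such explicit schemes are only conditionally stable.

```python
# Minimal sketch: explicit L1-type finite-difference scheme for the time
# fractional diffusion equation D_t^alpha u = K u_xx, alpha in (0, 1).
# Illustration only; parameters are arbitrary and chosen for stability.
import numpy as np
from math import gamma

alpha, K = 0.7, 1.0
nx, nt = 21, 200
h, tau = 1.0 / (nx - 1), 5e-5
x = np.linspace(0.0, 1.0, nx)

# L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha)
j = np.arange(nt)
b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
mu = tau**alpha * gamma(2 - alpha) * K / h**2   # must stay small for stability

U = [np.sin(np.pi * x)]                          # initial condition, zero boundaries
for n in range(1, nt + 1):
    u_prev = U[-1]
    lap = np.zeros_like(u_prev)
    lap[1:-1] = u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]
    # history term: sum_{j=1}^{n-1} b_j (u^{n-j} - u^{n-j-1})
    hist = np.zeros_like(u_prev)
    for jj in range(1, n):
        hist += b[jj] * (U[n - jj] - U[n - jj - 1])
    u_new = u_prev + mu * lap - hist
    u_new[0] = u_new[-1] = 0.0
    U.append(u_new)
print(U[-1].max())
```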