449 results for intersection computation
Abstract:
This paper aims to develop a meshless approach based on the Point Interpolation Method (PIM) for the numerical simulation of a space fractional diffusion equation. Two fully discrete schemes for the one-dimensional space fractional diffusion equation are obtained by combining the PIM with the strong form of the equation. Numerical examples with different nodal distributions are studied to validate the newly developed meshless approach and to investigate its accuracy and efficiency.
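The abstract does not state the governing equation; a commonly studied one-dimensional model problem of this type, assuming a Riesz space-fractional derivative of order α ∈ (1, 2] with diffusion coefficient K_α and source term f, is:

```latex
% Assumed model form, not reproduced from the paper:
\frac{\partial u(x,t)}{\partial t}
  = K_\alpha \, \frac{\partial^{\alpha} u(x,t)}{\partial |x|^{\alpha}} + f(x,t),
  \qquad 1 < \alpha \le 2,
\qquad\text{with}\qquad
\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}
  = -\frac{1}{2\cos(\pi\alpha/2)}
    \left( {}_{a}D_x^{\alpha} u + {}_{x}D_b^{\alpha} u \right),
```

where the Riesz derivative on [a, b] is written as a weighted sum of the left and right Riemann–Liouville derivatives.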
Abstract:
The "Humies" awards are an annual competition held in conjunction with the Genetic and Evolutionary Computation Conference (GECCO), in which cash prizes totalling $10,000 are awarded to the most human-competitive results produced by any form of evolutionary computation published in the previous year. This article describes the gold medal-winning entry from the 2012 "Humies" competition, based on the LUDI system for playing, evaluating and creating new board games. LUDI was able to demonstrate human-competitive results in evolving novel board games that have gone on to be commercially published, one of which, Yavalath, has been ranked in the top 2.5% of abstract board games ever invented. Further evidence of human-competitiveness was demonstrated in the evolved games implicitly capturing several principles of good game design, outperforming human designers in at least one case, and going on to inspire a new sub-genre of games.
Abstract:
Smart Card Automated Fare Collection (AFC) data has been extensively exploited to understand passenger behavior, passenger segments, and trip purposes, and to improve transit planning through spatial travel pattern analysis. The literature has evolved from simple to more sophisticated methods, for example from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited these methods in practical applications. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density Based Scanning Algorithm with Noise (DBSCAN), to detect and update daily changes in travel patterns. WS-DBSCAN reduces the quadratic computational complexity of classical DBSCAN to sub-quadratic complexity. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while producing the same clustering results.
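WS-DBSCAN itself is not detailed in the abstract. As a point of reference only, classical DBSCAN already becomes sub-quadratic on average when neighbour queries use a spatial index, and scikit-learn's implementation additionally accepts per-point weights, loosely analogous to weighting stops by their tap counts; the coordinates, weights and parameter values below are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical stop coordinates (projected to metres) and daily boarding counts.
rng = np.random.default_rng(0)
stops = rng.uniform(0, 10_000, size=(500, 2))      # 500 stops in a 10 km x 10 km area
boardings = rng.integers(1, 50, size=len(stops))   # taps at each stop

# Classical DBSCAN, but with a ball-tree index so each epsilon-neighbourhood query
# is sub-quadratic on average, and with per-stop weights so busy stops contribute
# more towards the min_samples density threshold (compared against summed weights).
clusterer = DBSCAN(eps=500.0, min_samples=30, algorithm="ball_tree")
labels = clusterer.fit_predict(stops, sample_weight=boardings)

print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```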
Abstract:
In this paper, we derive a new nonlinear two-sided space-fractional diffusion equation with variable coefficients from the fractional Fick's law. A semi-implicit difference method (SIDM) for this equation is proposed. The stability and convergence of the SIDM are discussed. For the implementation, we develop a fast accurate iterative method for the SIDM by decomposing the dense coefficient matrix into a combination of Toeplitz-like matrices. This fast iterative method reduces the storage requirement from O(n²) to O(n) and the computational cost from O(n³) to O(n log n), where n is the number of grid points. The method retains the same accuracy as the underlying SIDM solved with Gaussian elimination. Finally, some numerical results are shown to verify the accuracy and efficiency of the new method.
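The key computational trick, multiplying a dense Toeplitz matrix by a vector in O(n log n) time and O(n) storage, can be illustrated with the standard circulant-embedding approach; the matrix below is a generic random Toeplitz matrix, not the paper's discretisation.

```python
import numpy as np
from scipy.linalg import toeplitz   # only used to verify against the dense product

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix (given by its first column and first row)
    by a vector via circulant embedding and FFTs: O(n log n) time, O(n) storage."""
    n = len(x)
    # First column of a 2n-point circulant matrix whose leading n x n block is Toeplitz.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, n=len(c)))[:n]
    return y.real

# Verify on a small random Toeplitz matrix (hypothetical data, not the SIDM matrix).
rng = np.random.default_rng(1)
n = 256
col = rng.standard_normal(n)
row = np.concatenate([col[:1], rng.standard_normal(n - 1)])
x = rng.standard_normal(n)
assert np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x))
```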
Abstract:
The maximum principle for space and time–space fractional partial differential equations is still an open problem. In this paper, we consider a multi-term time–space Riesz–Caputo fractional differential equation over an open bounded domain. A maximum principle for the equation is proved, and the uniqueness and continuous dependence of the solution are derived. Using a fractional predictor–corrector method that combines the L1 and L2 discrete schemes, we present a numerical method for the specified equation. Two examples are given to illustrate the obtained results.
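For reference, the L1 scheme named in the abstract is the standard finite-difference approximation of a Caputo derivative of order α ∈ (0, 1); on a uniform grid t_n = nτ it takes the textbook form (not reproduced from the paper):

```latex
{}^{C}_{0}D_t^{\alpha} u(t_n)
  \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}
    \sum_{k=0}^{n-1} b_k \left( u^{\,n-k} - u^{\,n-k-1} \right),
\qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha}.
```

The L2 scheme mentioned alongside it is the analogous approximation commonly used for derivative orders between 1 and 2.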
Abstract:
As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. These needs have traditionally been met using parallel computing on HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems differ, and performance may vary. The purpose of this study is to evaluate cloud services as a means of minimising both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (VCPU) performance is satisfactory, network throughput may lead to difficulties.
Abstract:
In this paper, we consider a two-sided space-fractional diffusion equation with variable coefficients on a finite domain. Firstly, based on the nodal basis functions, we present a new fractional finite volume method for the two-sided space-fractional diffusion equation, derive its implicit scheme, and solve it in matrix form. Secondly, we prove the stability and convergence of the implicit fractional finite volume method and conclude that the method is unconditionally stable and convergent. Finally, some numerical examples are given to show the effectiveness of the new numerical method, and the results are in excellent agreement with the theoretical analysis.
Abstract:
Purpose – In structural, earthquake and aeronautical engineering and in mechanical vibration, solving the dynamic equations of a structure subjected to dynamic loading leads to a high-order system of differential equations. Numerical methods are usually used for integration when dealing with discrete data or when no analytical solution for the equations exists. Since numerical methods with higher accuracy and stability give more accurate structural responses, there is a need to improve existing methods or develop new ones. The paper aims to discuss these issues. Design/methodology/approach – In this paper, a new time integration method is proposed mathematically and numerically and is applied to single-degree-of-freedom (SDOF) and multi-degree-of-freedom (MDOF) systems. The results are compared to existing methods, such as Newmark's method, and to the closed-form solution. Findings – In the proposed method, the variance of each set of structural responses (displacement, velocity, or acceleration) across time steps is lower than in Newmark's method; the proposed method is more accurate and stable than Newmark's method and can analyze the structure in fewer iterations or computation cycles, and is hence less time-consuming. Originality/value – A new mathematical and numerical time integration method is proposed for computing structural responses with higher accuracy and stability, lower data variance, and fewer iterations and computation cycles.
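The proposed scheme is not given in the abstract, but the baseline it is compared against, Newmark's method, is standard. A minimal sketch for a linear SDOF system follows; the system and load parameters are invented for illustration.

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, u0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """Newmark time integration of m*u'' + c*u' + k*u = p(t) for a linear SDOF system.
    gamma=1/2, beta=1/4 is the (unconditionally stable) average-acceleration variant."""
    n = len(p)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (p[0] - c * v0 - k * u0) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        # Effective load at step i+1, built from the state at step i.
        rhs = (p[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (0.5 / beta - 1.0) * a[i])
               + c * (gamma / (beta * dt) * u[i] + (gamma / beta - 1.0) * v[i]
                      + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = rhs / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Hypothetical SDOF system under a short rectangular pulse load.
dt, steps = 0.01, 500
load = np.where(np.arange(steps) * dt < 0.5, 100.0, 0.0)
u, v, a = newmark_sdof(m=10.0, c=2.0, k=400.0, p=load, dt=dt)
```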
Abstract:
When verifying or reverse-engineering digital circuits, one often wants to identify and understand small components in a larger system. A possible approach is to show that the sub-circuit under investigation is functionally equivalent to a reference implementation. In many cases, this task is difficult as one may not have full information about the mapping between input and output of the two circuits, or because the equivalence depends on settings of control inputs. We propose a template-based approach that automates this process. It extracts a functional description for a low-level combinational circuit by showing it to be equivalent to a reference implementation, while synthesizing an appropriate mapping of input and output signals and setting of control signals. The method relies on solving an exists/forall problem using an SMT solver, and on a pruning technique based on signature computation.
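The exists/forall query at the heart of the approach can be posed directly to an SMT solver. In the toy example below, using Z3's Python bindings, the solver searches for a control-input setting under which a candidate sub-circuit (a multiplexer) behaves, for all data inputs, like a reference that simply forwards one input; the circuits and signal names are invented, and the input/output mapping synthesis of the full method is omitted.

```python
from z3 import Bools, Bool, And, Or, Not, ForAll, Solver, sat

a, b = Bools("a b")   # data inputs of the sub-circuit (universally quantified)
ctrl = Bool("ctrl")   # control input whose setting we want to synthesize

# Reference implementation (hypothetical): the output should simply forward a.
reference = a

# Low-level sub-circuit under investigation (hypothetical): a 2-to-1 multiplexer.
candidate = Or(And(ctrl, a), And(Not(ctrl), b))

# "exists ctrl . forall a, b . candidate == reference":
# ctrl is left free, so the solver searches for a witness; a and b are
# universally quantified inside the assertion.
s = Solver()
s.add(ForAll([a, b], candidate == reference))
if s.check() == sat:
    print("equivalent when ctrl =", s.model()[ctrl])   # expected: True
else:
    print("no control setting makes the sub-circuit match the reference")
```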
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series, and the cumulative number of prion disease cases in mule deer.
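Alive SMC^2 itself cannot be reproduced from the abstract. The sketch below shows only its basic ingredient, a bootstrap-style particle filter with exact-match (indicator) weights for a count time series whose state is assumed to be observed without error; the transition model, parameter values and data are invented, and the "alive" variant differs by re-simulating until a fixed number of matching particles is obtained rather than failing when none match.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_step(x, lam=0.8, mu=0.3):
    """Hypothetical count-valued Markov transition (immigration-death style):
    each of x individuals survives with probability 1 - mu, plus Poisson(lam) arrivals."""
    return rng.binomial(x, 1.0 - mu) + rng.poisson(lam)

def indicator_filter(y_obs, n_particles=2000, x0=0):
    """Bootstrap-style filter with exact-match weights: a particle is 'alive'
    at time t if its simulated count equals the observed count y_obs[t]."""
    x = np.full(n_particles, x0)
    log_like = 0.0
    for y in y_obs:
        x = np.array([simulate_step(xi) for xi in x])
        alive = x == y
        if not alive.any():               # the alive variant avoids this failure mode
            return -np.inf                # by re-simulating until enough particles match
        log_like += np.log(alive.mean())  # estimate of p(y_t | y_{1:t-1})
        x = rng.choice(x[alive], size=n_particles, replace=True)
    return log_like

y = np.array([1, 2, 1, 0, 1, 2, 3, 2])   # invented observation sequence
print("estimated log-likelihood:", indicator_filter(y))
```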
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their inability to address the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but they have seldom been investigated for affective classification of facial images in practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where many "real-world" challenges are present, including pose, illumination, and size variations. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors and a combination of point-based texture and geometry features. Performance comparisons over several key parameters of the relevant algorithms are conducted to find optimum parameters for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is effective despite pose variations, fast, appropriate for large-scale data, and as accurate as the state-of-the-art method on laboratory-based data. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
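A single-view, heavily simplified sketch of such a pipeline is shown below, assuming OpenCV's stock frontal-face cascade and a HOG descriptor standing in for the multi-view detectors and point-based texture/geometry features described in the paper; the training corpus (image_paths, labels) is hypothetical.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Detect the face, crop and resize it, describe it with a texture feature,
# and classify the expression/emotion with an SVM.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def face_feature(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return hog.compute(crop).ravel()

# Training on a labelled corpus (hypothetical file list and emotion labels):
# feats = [face_feature(cv2.imread(p)) for p in image_paths]
# clf = SVC(kernel="rbf").fit([f for f in feats if f is not None], labels)
```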
Abstract:
This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier to complement the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine site data, and we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.
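A minimal sketch of the background-model idea follows, assuming detections and background patches are described by some feature vector (random data stands in here) and that distance to the nearest k-means centroid is used to flag background-like detections; the feature extractor, cluster count and threshold are placeholders, not the paper's design.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Feature vectors extracted from patches of background-only mine-site images
# (in practice these might be ConvNet activations; random data stands in here).
background_feats = rng.standard_normal((5000, 128))

# Model the background as k clusters in feature space.
bg_model = KMeans(n_clusters=32, n_init=10, random_state=0).fit(background_feats)

def looks_like_background(feat, threshold=8.0):
    """Flag a detection as background if its feature vector lies within
    `threshold` of the nearest background cluster centre (tunable placeholder)."""
    dists = np.linalg.norm(bg_model.cluster_centers_ - feat, axis=1)
    return dists.min() < threshold

# For each ConvNet region proposal classified as 'vehicle' or 'person',
# keep it only if its feature does NOT resemble the background model.
proposal_feat = rng.standard_normal(128)
keep = not looks_like_background(proposal_feat)
```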
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
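The constructive algorithm itself is not described in the abstract. The greedy sketch below only illustrates the flavour of the constrained combinatorial problem, first trying to reuse spare capacity on already-rented VMs and otherwise renting the cheapest feasible VM type; all VM types, prices, capacities and job demands are invented.

```python
# Greedy constructive placement: assign each MapReduce job to the feasible option
# with the lowest incremental cost.  Illustrative only, not the paper's algorithm.

vm_types = [  # (name, hourly cost, vCPU capacity) - hypothetical offerings
    ("small",  0.05,  4),
    ("large",  0.17, 16),
    ("xlarge", 0.32, 32),
]
jobs = [("etl", 6), ("index", 3), ("report", 10), ("train", 20)]  # (name, vCPUs needed)

placement, open_vms = {}, []                     # open_vms: [type_index, free_capacity]
for job, need in sorted(jobs, key=lambda j: -j[1]):      # place the largest jobs first
    # Option 1: reuse spare capacity on an already-rented VM (zero extra cost).
    reusable = [vm for vm in open_vms if vm[1] >= need]
    if reusable:
        vm = min(reusable, key=lambda v: v[1])           # tightest fit
        vm[1] -= need
        placement[job] = vm_types[vm[0]][0]
        continue
    # Option 2: rent the cheapest VM type that is large enough for the job.
    idx, (name, cost, cap) = min(
        ((i, t) for i, t in enumerate(vm_types) if t[2] >= need),
        key=lambda it: it[1][1])
    open_vms.append([idx, cap - need])
    placement[job] = name

print(placement)
```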
Abstract:
The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction–diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
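The core kernel, evaluating f(A)b through quadrature applied to a contour integral, can be sketched with the plain Cauchy integral formula and the trapezoidal rule on a circle enclosing the spectrum; the conformally mapped contours, preconditioning and GPU execution of the cited method are omitted, and the test matrix is invented.

```python
import numpy as np

def matfunc_times_vector(f, A, b, center, radius, n_quad=32):
    """Approximate f(A) b via the Cauchy integral formula
        f(A) b = (1 / (2*pi*i)) * contour_integral of f(z) (zI - A)^{-1} b dz,
    discretised with the trapezoidal rule on a circle (center, radius) that must
    enclose the spectrum of A and on which f must be analytic."""
    n = A.shape[0]
    result = np.zeros(n, dtype=complex)
    thetas = 2.0 * np.pi * (np.arange(n_quad) + 0.5) / n_quad
    for theta in thetas:
        z = center + radius * np.exp(1j * theta)
        # Each quadrature point costs one shifted linear solve; in the paper these
        # solves are preconditioned and carried out on the GPU.
        x = np.linalg.solve(z * np.eye(n) - A, b.astype(complex))
        result += f(z) * radius * np.exp(1j * theta) * x
    return (result / n_quad).real

# Small symmetric test problem: compare against exp(A) b from an eigendecomposition.
rng = np.random.default_rng(4)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 10.0
b = rng.standard_normal(50)

w, V = np.linalg.eigh(A)
exact = V @ (np.exp(w) * (V.T @ b))
# In practice the contour would be chosen from estimated spectral bounds.
approx = matfunc_times_vector(np.exp, A, b, center=w.mean(),
                              radius=(w.max() - w.min()) * 0.75 + 1.0, n_quad=48)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```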
Abstract:
This thesis introduces a new way of using prior information in a spatial model and develops scalable algorithms for fitting this model to large imaging datasets. These methods are employed for image-guided radiation therapy and satellite-based classification of land use and water quality. This study has utilized a pre-computation step to achieve a hundredfold improvement in the elapsed runtime for model fitting. This makes it much more feasible to apply these models to real-world problems, and enables full Bayesian inference for images with a million or more pixels.