983 results for Proximal Point Algorithm


Relevance: 30.00%

Abstract:

In this paper the authors investigate the use of optimal control techniques for improving the efficiency of the power conversion system in a point absorber wave power device. A simple mathematical model of the system is developed and an optimal control strategy for power generation is determined. They describe an algorithm for solving the problem numerically, provided the incident wave force is known. The results show that the performance of the device is significantly improved, with the bandwidth of the response being widened by the control strategy.
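
To make the control idea concrete, here is a minimal sketch (not the authors' model): a point absorber treated as a mass-spring-damper driven by a monochromatic wave force, with the power take-off (PTO) represented by an adjustable damping b_pto. All parameter values, the forcing, and the Euler integration are assumptions for illustration; at resonance, absorbed power peaks when b_pto matches the hydrodynamic damping, the kind of tuning an optimal controller automates.

```python
import numpy as np

# Toy point absorber: m*x'' = F_wave - k*x - (b_hydro + b_pto)*x'
m, k, b_hydro = 1e5, 4e5, 2e4          # assumed mass, stiffness, radiation damping

def absorbed_power(b_pto, omega, F0=1e4, t_end=200.0, dt=0.01):
    """Average PTO power for a wave force F0*cos(omega*t)."""
    x, v, p, n = 0.0, 0.0, 0.0, 0
    for t in np.arange(0.0, t_end, dt):
        a = (F0 * np.cos(omega * t) - k * x - (b_hydro + b_pto) * v) / m
        v += a * dt                     # semi-implicit Euler step
        x += v * dt
        if t > t_end / 2:               # average over the steady-state half
            p += b_pto * v * v
            n += 1
    return p / n

omega = np.sqrt(k / m)                  # resonance of the uncontrolled device
for b_pto in (0.5 * b_hydro, b_hydro, 2 * b_hydro):
    print(f"b_pto={b_pto:9.0f}  P={absorbed_power(b_pto, omega):8.1f} W")
```

Running this shows the matched case (b_pto = b_hydro) extracting the most power, which is the classical impedance-matching result the control strategy generalizes.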

Relevance: 30.00%

Abstract:

We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields uj, j = 1, ..., n, of sound sources supported in different bounded domains G1, ..., Gn in ℝ³ from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u1 + ... + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g^(1), ..., g^(n) and to construct uℓ for ℓ = 1, ..., n from u|Λ in the form uℓ(x) = ∫_Λ g^(ℓ)_x(y) u(y) ds(y). We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness and solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research, Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
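
The splitting step can be illustrated with a small discrete experiment (an assumption-laden sketch, not the paper's algorithm): find filter weights w_j so that the superposition of point sources placed at the microphones reproduces the Green's function of the evaluation point on region G1 and vanishes on G2; then the weighted sum of the measurements approximates u1(x). The geometry, wavenumber, and Tikhonov parameter below are invented for illustration.

```python
import numpy as np

k = 2 * np.pi                                     # assumed wavenumber

def green(src, rec):
    """Matrix of free-space Green's function values, indexed [rec, src]."""
    r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=2)
    return np.exp(1j * k * r) / (4 * np.pi * r)

rng = np.random.default_rng(0)
g = np.linspace(-2.0, 2.0, 12)
mics = np.array([[a, b, 0.0] for a in g for b in g])           # plane Lambda
G1 = rng.uniform([-1.5, -0.5, 2.0], [-0.5, 0.5, 3.0], (40, 3))  # source region 1
G2 = rng.uniform([0.5, -0.5, 2.0], [1.5, 0.5, 3.0], (40, 3))    # source region 2
x = np.array([-1.0, 0.0, 1.0])                                  # evaluation point

# Weights: reproduce Phi(x, z) for z in G1, suppress it for z in G2.
A = np.vstack([green(mics, G1), green(mics, G2)])               # (80, 144)
b = np.concatenate([green(G1, x[None])[0], np.zeros(40)])
alpha = 1e-8                                                    # Tikhonov parameter
w = np.linalg.solve(A.conj().T @ A + alpha * np.eye(A.shape[1]), A.conj().T @ b)

# Synthetic data u = u1 + u2 on the array; recover u1(x) as a weighted sum.
q1, q2 = rng.normal(size=40), rng.normal(size=40)               # source strengths
u = green(G1, mics) @ q1 + green(G2, mics) @ q2
print("true u1(x): ", green(G1, x[None])[0] @ q1)
print("split u1(x):", u @ w)                                    # should roughly agree
```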

Relevance: 30.00%

Abstract:

Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes operates on a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration step. The algorithm applies roulette wheel selection, elitism and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation, and new wavelet operators, are proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
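
A stripped-down version of such a genetic algorithm, with genes as concatenated phase/magnitude vectors, roulette-wheel selection, elitism, two-point crossover and mutation, might look like the sketch below; the population size, rates, and the stand-in quadratic matching fitness (in place of the physical pulse-shaping objective and the full operator set) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
NFREQ, POP, GENS = 32, 60, 300
target = rng.uniform(size=2 * NFREQ)             # stand-in "desired" gene

def fitness(g):                                  # higher is better
    return -np.sum((g - target) ** 2)

def roulette(pop, fit):
    scaled = fit - fit.min() + 1e-9              # shift so all weights > 0
    return pop[rng.choice(len(pop), size=len(pop), p=scaled / scaled.sum())]

pop = rng.uniform(size=(POP, 2 * NFREQ))         # [phases | magnitudes]
for gen in range(GENS):
    fit = np.array([fitness(g) for g in pop])
    elite = pop[fit.argmax()].copy()             # elitism: keep the best gene
    pop = roulette(pop, fit)
    for i in range(0, POP - 1, 2):               # two-point crossover
        a, b = sorted(rng.integers(0, 2 * NFREQ, 2))
        pop[i, a:b], pop[i + 1, a:b] = pop[i + 1, a:b].copy(), pop[i, a:b].copy()
    mask = rng.random(pop.shape) < 0.02          # pointwise mutation
    pop[mask] += rng.normal(0.0, 0.1, mask.sum())
    pop[0] = elite
print("best error:", -max(fitness(g) for g in pop))
```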

Relevance: 30.00%

Abstract:

Northern Hemisphere cyclone activity is assessed by applying an algorithm for the detection and tracking of synoptic-scale cyclones to mean sea level pressure data. The method, originally developed for the Southern Hemisphere, is adapted for application to the Northern Hemisphere winter season. NCEP reanalysis data from 1958/59 to 1997/98 are used as input. The sensitivity of the results to particular parameters of the algorithm is discussed both for case studies and from a climatological point of view. Results show that the choice of settings is of major relevance, especially for the tracking of smaller-scale and fast-moving systems. With an appropriate setting, the algorithm is capable of automatically tracking different types of cyclones at the same time: both fast-moving and developing systems over the large ocean basins and smaller-scale cyclones over the Mediterranean basin can be assessed. The climatology of cyclone variables, e.g., cyclone track density, cyclone counts, intensification rates, propagation speeds and areas of cyclogenesis and cyclolysis, gives detailed information on typical cyclone life cycles for different regions. Lowering the spatial and temporal resolution of the input data from the full resolution T62/06h to T42/12h decreases the cyclone track density and cyclone counts. Reducing the temporal resolution alone contributes to a decline in the number of fast-moving systems, which is relevant for the cyclone track density. Lowering the spatial resolution alone mainly reduces the number of weak cyclones.
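
The detection step of such schemes can be caricatured as follows (a generic sketch, not the paper's adapted algorithm): flag grid points whose mean sea level pressure is a local minimum and sits below its neighbourhood mean by a depth threshold. The window size and threshold below are invented, and they play exactly the role of the sensitive parameter settings discussed above.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def detect_centres(mslp, window=5, depth_hpa=2.0):
    """mslp: 2D array [lat, lon] in hPa; returns candidate centre indices."""
    is_min = mslp == minimum_filter(mslp, size=window, mode="nearest")
    depth = uniform_filter(mslp, size=window, mode="nearest") - mslp
    return np.argwhere(is_min & (depth > depth_hpa))

# Synthetic test field: uniform background with one synoptic-scale low.
lat = np.linspace(20, 80, 61)[:, None]          # 1 deg spacing
lon = np.linspace(0, 360, 181)[None, :]         # 2 deg spacing
mslp = 1012.0 - 25.0 * np.exp(-(((lat - 55) / 5) ** 2 + ((lon - 180) / 8) ** 2))
print(detect_centres(mslp))                     # -> [[35 90]], the low's centre
```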

Relevance: 30.00%

Abstract:

Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so observational data on flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two post-event surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, showing positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
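
A smoothing pass of the general kind described, relaxing marks whose water-surface elevation disagrees with a distance-weighted local mean, could look like the following sketch; the neighbourhood radius, tolerance, and relaxation factor are assumed values, and the paper's algorithm may differ in detail.

```python
import numpy as np

def smooth_marks(xy, z, radius=250.0, tol=0.3, relax=0.5):
    """xy: (n, 2) plan positions in metres; z: (n,) water-surface elevations.
    Marks deviating from the weighted neighbour mean by > tol are relaxed."""
    z_new = z.copy()
    for i in range(len(z)):
        d = np.hypot(*(xy - xy[i]).T)            # distances to all marks
        nb = (d > 0) & (d < radius)              # neighbours within radius
        if not nb.any():
            continue
        z_local = np.average(z[nb], weights=1.0 / d[nb])
        if abs(z[i] - z_local) > tol:            # localized inconsistency
            z_new[i] = (1 - relax) * z[i] + relax * z_local
    return z_new

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1000, (50, 2))
z = 10 + 0.001 * xy[:, 0] + rng.normal(0, 0.05, 50)   # gently sloping surface
z[7] += 1.5                                      # one inconsistent wrack mark
print(abs(smooth_marks(xy, z)[7] - z[7]))        # outlier pulled back toward neighbours
```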

Relevance: 30.00%

Abstract:

Localization and mapping are two of the most important capabilities of autonomous mobile robots and have been receiving considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (the map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
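
For reference, the dense EKF measurement update that dominates this workload is sketched below in plain NumPy (a generic textbook update, not the FPGA datapath). In EKF-SLAM the state stacks the robot pose with all k map features, so the covariance is (3 + 2k) x (3 + 2k) and the matrix products scale as O(k²), which is the cost the architecture accelerates.

```python
import numpy as np

def ekf_update(x, P, z, h_x, H, R):
    """x: state mean; P: covariance; z: measurement; h_x: predicted
    measurement h(x); H: Jacobian of h at x; R: measurement noise cov."""
    y = z - h_x                                   # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

k = 2                                             # two 2D map features
n = 3 + 2 * k                                     # pose (x, y, theta) + features
x, P = np.zeros(n), np.eye(n)
H = np.zeros((2, n)); H[0, 0] = H[1, 3] = 1.0     # toy observation Jacobian
x, P = ekf_update(x, P, np.array([0.1, -0.2]), H @ x, H, 0.01 * np.eye(2))
print(np.round(x, 3))
```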

Relevance: 30.00%

Abstract:

Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume-of-fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both a Newtonian fluid and polystyrene injected into different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset implies a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front, which could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of the "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not yet available for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on the fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
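
The generalized-Newtonian viscosity models named in the tests have simple closed forms; the sketch below shows them with illustrative parameter values (not those fitted to polystyrene in the paper). In a Hele-Shaw solver these viscosities enter the fluidity integral, the integral of z²/η(z) across the gap half-thickness, in the pressure equation.

```python
import numpy as np

def carreau(gamma_dot, eta0=1e4, eta_inf=0.0, lam=1.0, n=0.3):
    """Carreau model: eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

def cross(gamma_dot, eta0=1e4, lam=1.0, n=0.3):
    """Cross model (common form with exponent 1 - n)."""
    return eta0 / (1 + (lam * gamma_dot) ** (1 - n))

def power_law(gamma_dot, K=1e3, n=0.3):
    """Power-law model: eta = K * gamma_dot^(n-1)."""
    return K * gamma_dot ** (n - 1)

# Shear-thinning behaviour across a few decades of shear rate:
for gd in (0.1, 1.0, 10.0, 100.0):
    print(f"{gd:6.1f}  {carreau(gd):10.1f}  {cross(gd):10.1f}  {power_law(gd):10.1f}")
```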

Relevance: 30.00%

Abstract:

This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time for the customers, with enough packages for each customer, using the available resources, and, of course, as effectively as possible.

Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times and truck capacities. This makes it a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the number of calculations grows exponentially, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time allowed for the calculation runs out. The first chapter introduces the problem from its basics, the Traveling Salesman Problem; using some theoretical and mathematical background, it shows why this problem is so hard to optimize and why, even though no best algorithm is known for a huge number of customers, it is still worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if we knew the optimal path for all our packages?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters try to give an acceptable solution by describing two algorithms: the Genetic Algorithm and Simulated Annealing. Both are based on imitating processes from nature and materials science. These algorithms will hardly ever find the best solution to the problem, but they can give a very good solution in special cases within acceptable calculation time. In these chapters the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to their basic implementation. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and their comparison with each other.

Finally, after this theory is presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as well as the test results obtained. Lastly, possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if this question even exists.
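
As a taste of the Simulated Annealing approach, a tiny annealing loop for the single-vehicle core of the problem (plain tour length, with none of the thesis's time-window or capacity constraints) can be written as follows; the cooling schedule and move operator are common textbook choices, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
cities = rng.uniform(0, 100, (30, 2))            # random customer locations

def tour_length(order):
    pts = cities[order]
    return np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum()

order, T = rng.permutation(len(cities)), 100.0
while T > 0.01:
    i, j = sorted(rng.integers(0, len(cities), 2))
    cand = order.copy()
    cand[i:j + 1] = cand[i:j + 1][::-1]          # 2-opt style segment reversal
    delta = tour_length(cand) - tour_length(order)
    if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis rule
        order = cand                             # accept better, or worse with prob.
    T *= 0.999                                   # geometric cooling
print("tour length:", round(tour_length(order), 1))
```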

Relevance: 30.00%

Abstract:

The Fish-net algorithm is a novel field learning algorithm which derives classification rules by looking at the range of values of each attribute instead of individual point values. In this paper, we present a feature-selection Fish-net learning algorithm to address the dual imbalance problem in text classification. Dual imbalance comprises instance imbalance and feature imbalance: instance imbalance is caused by unevenly distributed classes, while feature imbalance is due to differing document lengths. The proposed approach consists of two phases: (1) select a feature subset consisting of the features that are most supportive of the difficult minority class; (2) construct classification rules based on the original Fish-net algorithm. Our experimental results on Reuters-21578 show that the proposed approach achieves a better balanced accuracy rate on both the majority and minority classes than multinomial Naive Bayes and SVM.
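
Phase (1) can be illustrated with a generic minority-supportive scoring rule, a log-ratio of per-class term frequencies; this is a stand-in criterion, and the paper's actual selection measure may differ.

```python
import numpy as np

def minority_supportive_features(X, y, minority_label, m=100):
    """X: binary document-term matrix (docs x terms); y: class labels.
    Returns indices of the m terms most over-represented in the minority class."""
    X = np.asarray(X, dtype=float)
    mino = X[y == minority_label].mean(axis=0) + 1e-9   # P(term | minority)
    rest = X[y != minority_label].mean(axis=0) + 1e-9   # P(term | rest)
    score = np.log(mino / rest)                         # log odds-style ratio
    return np.argsort(score)[::-1][:m]

X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 1, 0, 1]])
y = np.array([1, 1, 0, 0])
print(minority_supportive_features(X, y, minority_label=1, m=2))  # -> [0 2]
```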

Relevance: 30.00%

Abstract:

In this paper, we study the downloading mechanism of BitTorrent (or BT), a popular and convenient P2P-based parallel downloading software tool, point out some of its limitations, and propose an algorithm to improve its performance. In particular, we address the limitations of BT by using neighbours in P2P networks to resolve the redundant-copies problem and to optimise the downloading speed. Our preliminary experiments show that the proposed enhancement algorithm works well.

Relevance: 30.00%

Abstract:

BitTorrent (or BT) is a popular and convenient P2P-based parallel downloading software tool. In this paper, we study the downloading mechanism of BitTorrent, point out some of its limitations, and propose an algorithm to improve its performance. BitTorrent has two major limitations: first, its downloading speed is slow at the beginning of a download or when there are only a few clients; second, current algorithms cannot achieve the best parallel downloading degree, as the selection of sub-pieces is random, and a file may not be downloadable when the file provider leaves the network unexpectedly. In this paper we address these problems by using neighbours in P2P networks to resolve the redundant copies and to optimise the download speed. Our preliminary experiments show that the proposed enhancement algorithm works well.
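
One standard remedy for the random sub-piece selection criticized here is rarest-first piece picking, sketched generically below; this is an illustration of the idea, not the paper's proposed enhancement algorithm.

```python
import random
from collections import Counter

def pick_piece(have, peers_bitfields):
    """have: set of piece indices we own; peers_bitfields: one set per peer.
    Returns the piece held by the fewest peers (ties broken at random)."""
    counts = Counter(p for bf in peers_bitfields for p in bf if p not in have)
    if not counts:
        return None                               # nothing new to download
    rarest = min(counts.values())
    return random.choice([p for p, c in counts.items() if c == rarest])

peers = [{0, 1, 2}, {1, 2, 3}, {2, 3}]
print(pick_piece(have={2}, peers_bitfields=peers))  # -> 0, held by only one peer
```

Fetching the rarest pieces first keeps many distinct pieces circulating, so peers can exchange with each other even if the original provider leaves.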

Relevance: 30.00%

Abstract:

The radial return mapping algorithm is constructed within the computational context of a hybrid Finite Element and Particle-In-Cell (FE/PIC) method to allow a fluid-flow FE/PIC code to be applied to solid mechanics problems with large displacements and large deformations. The FE/PIC method retains the robustness of an Eulerian mesh and enables tracking of material deformation by a set of Lagrangian particles or material points. In the FE/PIC approach the particle velocities are interpolated from nodal velocities, and the particle positions are then updated using a suitable integration scheme, such as the 4th-order Runge-Kutta scheme [1]. The strain increments are obtained from gradients of the nodal velocities at the material point positions, which are then used to evaluate the stress increment and update history variables. To obtain the stress increment from the strain increment, the nonlinear constitutive equations are solved in an incremental iterative integration scheme based on a radial return mapping algorithm [2]. A plane-stress extension of a rectangular J2 elastoplastic specimen with isotropic, kinematic and combined hardening is performed as an example and for validation of the enhanced FE/PIC method. It is shown that the method is suitable for the analysis of problems in crystal plasticity and metal forming, and is specifically suitable for the simulation of neighbouring microstructural phases with different constitutive equations in a multiscale material modelling framework.
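
The radial return update itself is classical; a textbook small-strain J2 version with linear isotropic hardening is sketched below. The material constants are assumptions, and the paper additionally treats plane stress, kinematic and combined hardening inside the FE/PIC strain-increment loop.

```python
import numpy as np

E, nu, sigma_y0, H = 200e3, 0.3, 250.0, 1e3      # assumed material (MPa units)
mu = E / (2 * (1 + nu))                          # shear modulus
kappa = E / (3 * (1 - 2 * nu))                   # bulk modulus

def radial_return(eps, eps_p, alpha):
    """eps: total strain (3x3); eps_p: plastic strain; alpha: hardening var.
    Returns (stress, eps_p, alpha) after the return-mapping update."""
    eps_e = eps - eps_p                          # trial elastic strain
    vol = np.trace(eps_e) / 3 * np.eye(3)
    s_trial = 2 * mu * (eps_e - vol)             # trial deviatoric stress
    norm_s = np.linalg.norm(s_trial)
    f = norm_s - np.sqrt(2 / 3) * (sigma_y0 + H * alpha)   # yield function
    if f <= 0:                                   # elastic step: trial accepted
        return 3 * kappa * vol + s_trial, eps_p, alpha
    dgamma = f / (2 * mu + 2 / 3 * H)            # consistency: plastic multiplier
    n = s_trial / norm_s                         # radial return direction
    return (3 * kappa * vol + s_trial - 2 * mu * dgamma * n,
            eps_p + dgamma * n,
            alpha + np.sqrt(2 / 3) * dgamma)

eps = np.diag([2e-3, -0.6e-3, -0.6e-3])          # uniaxial-like strain state
print(np.round(radial_return(eps, np.zeros((3, 3)), 0.0)[0], 1))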

Relevance: 30.00%

Abstract:

Polygon- and point-based models dominate virtual reality. These models also shape haptic rendering algorithms, which are often based on collision with polygons. For dual-point haptic devices performing operations like grasping, complex polygon- and point-based models make the collision detection procedure slow, so the system cannot achieve the interactive rates required for force rendering. To solve this issue, we use mathematical functions to define and implement geometry (curves, surfaces and solid objects), visual appearance (3D colours and geometric textures) and various tangible physical properties (elasticity, friction, viscosity and force fields). The function definitions are given as analytical formulas (explicit, implicit and parametric), function scripts and procedures. We propose an algorithm for haptic rendering of virtual scenes that include mutually penetrating objects of different sizes, with an arbitrary location of the observer and without prior knowledge of the scene to be rendered. The algorithm is based on casting multiple haptic rendering rays from the Haptic Interaction Point (HIP), and it builds a stack to keep track of all objects colliding with the HIP. Collision detection is based on the implicit-function representation of the object surfaces. The proposed approach allows us to be flexible when choosing the actual rendering platform, and it can easily be adopted for dual-point haptic collision detection as well as force and torque rendering. The function-defined objects and the parts constituting them can be used together with other common definitions of virtual objects such as polygon meshes, point sets and voxel volumes. We implemented an extension of X3D and VRML as well as several standalone application examples to validate the proposed methodology. Experiments show that fast, accurate rendering with a compact representation can be achieved in various application scenarios and on both single- and dual-point haptic devices.
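
The appeal of implicit functions for collision detection is that inside/outside queries reduce to a sign check of f(p), and the gradient supplies the contact normal. A minimal sketch follows (hypothetical stiffness and object; the full multi-ray, stack-based algorithm is not reproduced).

```python
import numpy as np

def sphere(p, centre=np.zeros(3), r=1.0):
    """Implicit surface f(p) = 0, with f < 0 inside the object."""
    return np.linalg.norm(p - centre) - r

def grad(f, p, h=1e-6):
    """Central-difference gradient of the implicit function at p."""
    e = np.eye(3) * h
    return np.array([(f(p + e[i]) - f(p - e[i])) / (2 * h) for i in range(3)])

def hip_force(f, p, stiffness=300.0):
    """Penalty force on the haptic interaction point (HIP) at position p."""
    d = f(p)
    if d >= 0:
        return np.zeros(3)                        # HIP outside: no contact
    n = grad(f, p)
    n /= np.linalg.norm(n)                        # unit surface normal
    return -stiffness * d * n                     # push out along the normal

print(hip_force(sphere, np.array([0.0, 0.0, 0.9])))  # HIP 0.1 inside -> ~[0 0 30]
```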

Relevance: 30.00%

Abstract:

In this paper we study optimization methods for minimizing large-scale pseudoconvex L∞ problems in multiview geometry. We present a novel algorithm for solving this class of problems based on proximal splitting methods. We provide a brief derivation of the proposed method along with a general convergence analysis. The resulting meta-algorithm requires very little implementation effort and instead makes use of existing advanced solvers for nonlinear optimization. Preliminary experiments on a number of real image datasets indicate that the proposed method matches or outperforms current state-of-the-art solvers for this class of problems.
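
For readers unfamiliar with proximal splitting, the forward-backward iteration underlying this family of methods is shown below on a toy l1-regularized least-squares problem; this illustrates the machinery only, not the paper's pseudoconvex multiview L∞ objective.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20) * (rng.random(20) < 0.3)   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=40)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz constant

def prox_l1(v, t):
    """Proximal map of t*lam*||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x = np.zeros(20)
for _ in range(500):
    # forward (gradient) step on the smooth term, backward (prox) on the rest
    x = prox_l1(x - step * A.T @ (A @ x - b), step)
print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum())
```

The split is what makes the scheme modular: the smooth term only needs gradients (or an off-the-shelf nonlinear solver, as in the paper), while the awkward term only needs its proximal map.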

Relevance: 30.00%

Abstract:

This paper introduces a basic framework for a rehabilitation motion practice system which detects 3D motion trajectories with the Microsoft Kinect (MSK) sensor system, and proposes a cost-effective 3D motion matching algorithm. The system displays a reference 3D motion from a database that the player (patient) tries to follow. The player's motion is traced by the MSK sensor system and then compared with the reference motion to evaluate how well the player follows it. In this system, the 3D motion matching algorithm is the key to an accurate evaluation of the player's performance. Even though similarity measurement of 3D trajectories is one of the most important tasks in 3D motion analysis, existing methods are still limited. Recent research has focused on full-length 3D trajectory datasets, but it is not true that every point on a trajectory plays the same role and carries the same meaning. We therefore developed a new cost-effective method that uses only a small number of features, called the 'signature', a flexible descriptor computed from the region around 'elbow points'; consequently, our method runs faster than methods that use the full-length trajectory information. The similarity of trajectories is measured on the signature using an alignment method such as dynamic time warping (DTW), continuous dynamic time warping (CDTW) or the longest common sub-sequence (LCSS) method. In the experimental studies, we applied the MSK sensor system to detect, trace and match 3D motions of the human body, in the setting of a system for guiding rehabilitation practice that evaluates how well the patient's practice motion, traced by the MSK system, matches a predefined reference motion in the database. To evaluate the accuracy of our 3D motion matching algorithm, we compared our method with two other methods on an Australian sign-word dataset. The results show that our matching algorithm outperforms the others in matching 3D motion, and that it can serve as a base framework for various 3D motion-based applications at low cost and with high accuracy.
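
Of the alignment methods named, plain DTW is the simplest; a direct implementation over 3D trajectories is sketched below (the paper applies such alignment to the signature descriptors rather than to raw points).

```python
import numpy as np

def dtw(a, b):
    """a: (n, 3) and b: (m, 3) trajectories; returns the DTW alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # pointwise distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

t = np.linspace(0, 1, 50)
ref = np.c_[np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t]  # helix segment
slow = ref[np.clip((np.sqrt(t) * 49).astype(int), 0, 49)]     # time-warped copy
print(dtw(ref, slow), dtw(ref, ref + 0.5))   # warped copy aligns cheaply; offset copy does not
```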