961 results for Fast Algorithm


Relevance:

20.00%

Publisher:

Abstract:

Haptic devices tend to be kept small, as it is easier to achieve a large change of stiffness with a low associated apparent mass. If large movements are required, there is usually a reduction in the quality of the haptic sensations which can be displayed. The typical measure of haptic device performance is impedance-width (z-width), but this does not account for actuator saturation, usable workspace, or the ability to make rapid movements. This paper presents the analysis and evaluation of a haptic device design, utilizing a variant of redundant kinematics sometimes referred to as a macro-micro configuration, intended to allow large and fast movements without loss of impedance-width. A brief mathematical analysis of the design constraints is given, and a prototype system is described in which the effects of different elements of the control scheme can be examined to better understand the potential benefits and trade-offs in the design. Finally, the performance of the system is evaluated using a Fitts' Law test and found to compare favourably with similar evaluations of smaller-workspace devices.
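
The abstract does not give the scoring details; as a minimal sketch of how a Fitts' Law test is typically scored, assuming the common Shannon formulation (the target distances, widths, and movement times below are invented for illustration):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of the Fitts' Law index of difficulty (bits)."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single reaching movement."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical trials: (target distance [m], target width [m], movement time [s])
trials = [(0.30, 0.02, 1.10), (0.15, 0.04, 0.62), (0.45, 0.01, 1.85)]
tp = [throughput(d, w, t) for d, w, t in trials]
print(f"mean throughput: {sum(tp) / len(tp):.2f} bits/s")
```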

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor, and apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint is also applied to the combination parameters, with the aim of achieving sparsity over the multiple models so that only a subset of models may be selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term, so that at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time-series examples.
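
A minimal sketch of the scheme described above, in our own notation (X, y, lam, gamma are not the paper's symbols): the forgetting factor discounts old data, the l1 term is approximated by a weighted l2 penalty rebuilt from the previous coefficients, and the sum-to-one constraint yields a closed-form solution via a Lagrange multiplier. A direct linear solve stands in for the paper's recursive matrix-inverse updates:

```python
import numpy as np

def combine_models(X, y, lam=0.98, gamma=0.1, eps=1e-6):
    """X: (T, m) submodel predictions, y: (T,) targets.
    Returns the combination weights after the final time step."""
    T, m = X.shape
    R = np.eye(m) * 1e-6          # discounted correlation matrix
    p = np.zeros(m)               # discounted cross-correlation vector
    beta = np.full(m, 1.0 / m)    # start from a uniform combination
    ones = np.ones(m)
    for t in range(T):
        R = lam * R + np.outer(X[t], X[t])
        p = lam * p + y[t] * X[t]
        # weighted l2 approximation of the l1 sparsity penalty
        W = np.diag(1.0 / (np.abs(beta) + eps))
        Rg = R + gamma * W
        Ri_p = np.linalg.solve(Rg, p)
        Ri_1 = np.linalg.solve(Rg, ones)
        # closed-form sum-to-one constrained solution (Lagrange multiplier)
        beta = Ri_p + Ri_1 * (1.0 - ones @ Ri_p) / (ones @ Ri_1)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.normal(size=200)
print(combine_models(X, y))  # weights sum to one, mostly on models 0 and 1
```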

Relevance:

20.00%

Publisher:

Abstract:

Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Calculations of turbulent properties require an accurate estimate of the contribution from velocity uncertainties, which are directly related to the signal-to-noise ratio; any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and to reduce the signal-to-noise ratio threshold used to discriminate between noise and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, data availability can be increased by as much as 50% after performing this background correction and the subsequent reduction of the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak-signal regimes.
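
The correction procedure itself is not specified in the abstract; the sketch below illustrates the general idea under our own assumptions: a smooth range-dependent background artefact is estimated by fitting a low-order polynomial to the uppermost, presumed aerosol-free range gates, removed, and a lower threshold is then applied. Gate counts, fit order, and threshold values are illustrative:

```python
import numpy as np

def correct_background(snr_raw, bg_gates=100, order=2, threshold=0.005):
    """Remove a smooth range-dependent artefact from a Doppler lidar
    SNR profile, estimated from the uppermost (signal-free) gates."""
    n = snr_raw.size
    gates = np.arange(n)
    # Fit a low-order polynomial to the presumed signal-free far gates
    coeffs = np.polyfit(gates[-bg_gates:], snr_raw[-bg_gates:], order)
    background = np.polyval(coeffs, gates)
    snr = snr_raw - background
    signal_mask = snr > threshold   # discriminate signal from noise
    return snr, signal_mask

rng = np.random.default_rng(1)
gates = np.arange(400)
artefact = 1e-7 * (gates - 200) ** 2                   # curved background artefact
aerosol = 0.02 * np.exp(-((gates - 50) / 15.0) ** 2)   # aerosol layer near the surface
profile = aerosol + artefact + 0.002 * rng.normal(size=400)
snr, mask = correct_background(profile)
print(f"{mask.sum()} gates retained as signal")
```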

Relevance:

20.00%

Publisher:

Abstract:

The primary objective of this research study is to determine which form of testing, the PEST algorithm or an operator-controlled condition, is more accurate and time-efficient for administration of the gaze stabilization test.
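
PEST (Parameter Estimation by Sequential Testing) adapts the stimulus level from trial to trial. A much-simplified staircase sketch, not the full Taylor-and-Creelman step rules: only the halve-on-reversal behaviour is retained, and the simulated observer is invented for illustration:

```python
import random

def pest_staircase(respond, start=30.0, step=8.0, min_step=0.5):
    """Simplified PEST-like staircase: step down after a correct
    response, up after an incorrect one; halve the step at each
    reversal and stop once it falls below min_step."""
    level, direction = start, -1
    while step >= min_step:
        correct = respond(level)
        new_direction = -1 if correct else +1
        if new_direction != direction:   # reversal
            step /= 2.0
            direction = new_direction
        level += direction * step
    return level  # estimated threshold

# Simulated observer with a true threshold of 12 (illustrative)
random.seed(0)
observer = lambda level: level + random.gauss(0, 2) > 12
print(f"estimated threshold: {pest_staircase(observer):.1f}")
```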

Relevance:

20.00%

Publisher:

Abstract:

Our numerical simulations show that magnetic field reconnection becomes fast in the presence of weak turbulence, in a way consistent with the Lazarian and Vishniac (1999) model of fast reconnection. We trace particles within our numerical simulations and show that they can be efficiently accelerated via first-order Fermi acceleration. We discuss acceleration arising from reconnection as a possible origin of the anomalous cosmic rays measured by the Voyager spacecraft. (C) 2010 Elsevier Ltd. All rights reserved.
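
As a toy illustration of first-order Fermi acceleration (our sketch, not the paper's simulation): a relativistic particle repeatedly crossing between converging flows of speed V gains a mean fractional energy of order 2V/c per round trip, so its energy grows exponentially with the number of crossings:

```python
# Toy first-order Fermi acceleration between converging flows.
V_OVER_C = 1e-3      # inflow speed in units of c (illustrative)
energy = 1.0         # particle energy in arbitrary units

for crossing in range(5000):
    # Head-on crossings dominate: mean fractional gain ~ 2 V/c per cycle
    energy *= 1.0 + 2.0 * V_OVER_C
print(f"energy amplification after 5000 crossings: {energy:.1f}x")
```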

Relevance:

20.00%

Publisher:

Abstract:

The first stars that formed after the Big Bang were probably massive(1), and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present(2,3). Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed being enriched by massive stars(4) (that is, with higher alpha-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe(4), which usually arises through nucleosynthesis in low-mass stars(5) (via the slow-neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude(6), which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also over-abundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars(7), whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.

Relevance:

20.00%

Publisher:

Abstract:

Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al., and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
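
For context, the core Sysrem iteration alternately fits per-star coefficients and per-image effects to an uncertainty-weighted residual matrix; the sketch below shows this basic form only (the external-parameter generalization reported above is not reproduced):

```python
import numpy as np

def sysrem(residuals, sigma, n_iter=20):
    """Remove one linear systematic effect c_i * a_j from a
    (stars x images) matrix of light-curve residuals."""
    w = 1.0 / sigma**2
    a = np.ones(residuals.shape[1])
    for _ in range(n_iter):
        c = (w * residuals) @ a / (w @ (a**2))   # per-star coefficients
        a = c @ (w * residuals) / ((c**2) @ w)   # per-image effect
    return residuals - np.outer(c, a)

rng = np.random.default_rng(2)
c_true, a_true = rng.normal(size=40), rng.normal(size=300)
data = np.outer(c_true, a_true) + 0.1 * rng.normal(size=(40, 300))
sigma = np.full((40, 300), 0.1)
cleaned = sysrem(data, sigma)
print(f"rms before: {data.std():.3f}, after: {cleaned.std():.3f}")
```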

Relevance:

20.00%

Publisher:

Abstract:

The end of the Neoproterozoic era is punctuated by two global glacial events marked by the presence of glacial deposits overlain by cap carbonates. The duration of the glacial intervals is now consistently constrained to 3-12 million years, but the duration of the post-glacial transition is more controversial owing to the uncertainty in cap dolostone sedimentation rates. Indeed, the presence of several stratabound magnetic reversals in Brazilian cap dolostones has recently called into question the short sedimentation duration (a few thousand years at most) that was initially suggested for these rocks. Here, we present new detailed magnetostratigraphic data from the Mirassol d'Oeste cap dolostones (Mato Grosso, Brazil) and "bomb-spike" calibrated AMS 14C data from microbial mats of the Lagoa Vermelha (Rio de Janeiro, Brazil). We also compile sedimentary, isotopic and microbiological data from post-Marinoan outcrops and/or recent depositional analogues in order to discuss the deposition rate of Marinoan cap dolostones and to infer an estimate of the deglaciation duration in the snowball Earth aftermath. Taken together, the various data point to a sedimentation duration on the order of a few 10^5 years. (C) 2010 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events at distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crust, however, are less well constrained. Interestingly, in the Tocantins Province the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness, besides the global-minimum solution, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses. (C) 2010 Elsevier Ltd. All rights reserved.
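
A minimal sketch of the GA layer, with a toy quadratic misfit standing in for the HYPO71 forward runs (which would relocate events and return travel-time residuals); the parameter bounds, population settings, and mutation-only scheme (crossover omitted) are our simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)

# Model vector: (Vp upper crust, Vp lower crust, Pn velocity,
#                Conrad depth, Moho depth, Vp/Vs) - bounds illustrative
LO = np.array([5.5, 6.2, 7.8, 10.0, 30.0, 1.65])
HI = np.array([6.4, 7.2, 8.4, 25.0, 50.0, 1.80])

def rms_residual(model):
    """Stand-in for HYPO71: relocate events with this 1D model and
    return the overall P- and S-residual RMS (toy quadratic here)."""
    target = np.array([6.0, 6.8, 8.1, 20.0, 43.0, 1.71])
    return np.sqrt(np.mean(((model - target) / (HI - LO)) ** 2))

def ga(pop_size=40, generations=60, sigma=0.05):
    pop = rng.uniform(LO, HI, size=(pop_size, LO.size))
    for _ in range(generations):
        fitness = np.array([rms_residual(m) for m in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]        # selection
        kids = parents[rng.integers(len(parents), size=pop_size)]
        kids = kids + rng.normal(0, sigma, kids.shape) * (HI - LO)  # mutation
        pop = np.clip(kids, LO, HI)
        pop[0] = parents[0]                                        # elitism
    return min(pop, key=rms_residual)

best = ga()
print(f"Moho depth: {best[4]:.1f} km, Vp/Vs: {best[5]:.2f}")
```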

Relevance:

20.00%

Publisher:

Abstract:

One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to the initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations for the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms of interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets. (C) 2010 Elsevier B.V. All rights reserved.
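
A minimal sketch contrasting the two strategies, assuming scikit-learn is available: instead of systematically sweeping every candidate number of clusters with many restarts, an evolutionary loop mutates the number of clusters and lets a cheap k-means run refine each candidate. The silhouette fitness and the mutation scheme are simplified stand-ins for those of the fast evolutionary algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=6, random_state=0)
rng = np.random.default_rng(0)

def fitness(k):
    """Refine a candidate with one cheap k-means run, score by silhouette."""
    labels = KMeans(n_clusters=k, n_init=1, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

# Evolutionary guidance: mutate k instead of sweeping all values of k
population = list(rng.integers(2, 15, size=6))
for _ in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:3]
    mutants = [max(2, k + int(rng.integers(-2, 3))) for k in survivors]
    population = survivors + mutants
print(f"best number of clusters found: {max(population, key=fitness)}")
```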

Relevance:

20.00%

Publisher:

Abstract:

A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis, organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature; however, each algorithm has its own bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data, together with a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms, and the proposed algorithm presented the best clustering results, a finding confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
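
GRASP alternates a greedy randomized construction with a local search. A minimal medoid-based sketch, assuming a precomputed distance matrix; the restricted-candidate-list rule and swap-based local search are generic GRASP components, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def grasp_medoids(D, k, iters=20, alpha=0.3):
    """D: (n, n) distance matrix. Returns the best set of k medoids."""
    n = D.shape[0]
    best, best_cost = None, np.inf
    for _ in range(iters):
        # Greedy randomized construction: choose from a restricted
        # candidate list (RCL) of the alpha-fraction best candidates
        medoids = [int(rng.integers(n))]
        while len(medoids) < k:
            dist = D[:, medoids].min(axis=1)   # far points are good candidates
            rcl = np.argsort(-dist)[: max(1, int(alpha * n))]
            medoids.append(int(rng.choice(rcl)))
        # Local search: try swapping each medoid with a non-medoid
        cost = D[:, medoids].min(axis=1).sum()
        improved = True
        while improved:
            improved = False
            for i in range(k):
                for j in range(n):
                    trial = medoids[:i] + [j] + medoids[i + 1:]
                    c = D[:, trial].min(axis=1).sum()
                    if c < cost:
                        medoids, cost, improved = trial, c, True
        if cost < best_cost:
            best, best_cost = medoids, cost
    return best, best_cost

pts = rng.normal(size=(60, 5))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(grasp_medoids(D, k=3))
```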

Relevance:

20.00%

Publisher:

Abstract:

The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in the original m-dimensional space. To perform the projection, only a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly for the application in which it was most extensively tested: mapping text sets.
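
At the heart of LSP is a sparse least squares system: every point is asked to sit at the average of its neighbors in the projection, while extra rows pin the control points to known 2D positions. A minimal sketch assuming scipy is available; the PCA placement of the control points is a stand-in for any initial layout:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def lsp(X, n_controls=10, k=8):
    """Project X (n, m) to 2D: neighborhood-average constraints plus
    control points placed by PCA (stand-in for any initial layout)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    controls = np.linspace(0, n - 1, n_controls, dtype=int)
    # Initial 2D positions of the control points via PCA
    Xc = X[controls] - X[controls].mean(axis=0)
    ctrl_2d = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:2].T
    # Rows 0..n-1: each point equals the mean of its k nearest neighbors;
    # the extra rows anchor the control points
    A = lil_matrix((n + n_controls, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1 : k + 1]
        A[i, i] = 1.0
        A[i, nbrs] = -1.0 / k
    for r, c in enumerate(controls):
        A[n + r, c] = 1.0
    b = np.zeros((n + n_controls, 2))
    b[n:] = ctrl_2d
    A = A.tocsr()
    return np.column_stack([lsqr(A, b[:, d])[0] for d in range(2)])

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, size=(40, 10)) for c in (0.0, 3.0)])
Y = lsp(X)
print(Y.shape, "- two well-separated groups in 2D")
```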

Relevance:

20.00%

Publisher:

Abstract:

Let $a > 0$, let $\Omega \subset \mathbb{R}^N$ be a bounded smooth domain, and let $A = -\Delta$ denote the negative Laplace operator with Dirichlet boundary condition in $L^2(\Omega)$. We study the damped wave problem $u_{tt} + a u_t + A u = f(u)$, $t > 0$, $u(0) = u_0 \in H^1_0(\Omega)$, $u_t(0) = v_0 \in L^2(\Omega)$, where $f : \mathbb{R} \to \mathbb{R}$ is a continuously differentiable function satisfying the growth condition $|f(s) - f(t)| \le C |s - t| (1 + |s|^{\rho - 1} + |t|^{\rho - 1})$, $1 < \rho < (N+2)/(N-2)$ $(N \ge 3)$, and the dissipativeness condition $\limsup_{|s| \to \infty} f(s)/s < \lambda_1$, with $\lambda_1$ being the first eigenvalue of $A$. We construct the global weak solutions of this problem as the limits, as $\eta \to 0^+$, of the solutions of wave equations involving the strong damping term $2 \eta A^{1/2} u_t$, $\eta > 0$. We define a subclass $\mathcal{LS} \subset C([0,\infty), L^2(\Omega) \times H^{-1}(\Omega)) \cap L^\infty([0,\infty), H^1_0(\Omega) \times L^2(\Omega))$ of the 'limit' solutions such that through each initial condition from $H^1_0(\Omega) \times L^2(\Omega)$ passes at least one solution of the class $\mathcal{LS}$. We show that the class $\mathcal{LS}$ has the bounded dissipativeness property in $H^1_0(\Omega) \times L^2(\Omega)$ and we construct a closed bounded invariant subset $\mathcal{A}$ of $H^1_0(\Omega) \times L^2(\Omega)$, which is weakly compact in $H^1_0(\Omega) \times L^2(\Omega)$ and compact in $H^s(\Omega) \times H^{s-1}(\Omega)$ for $s \in [0, 1)$. Furthermore, $\mathcal{A}$ attracts bounded subsets of $H^1_0(\Omega) \times L^2(\Omega)$ in $H^s(\Omega) \times H^{s-1}(\Omega)$, for each $s \in [0, 1)$. For $N = 3, 4, 5$ we also prove a local uniqueness result for the case of smooth initial data.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a genetic algorithm with new components to tackle capacitated lot sizing and scheduling problems with sequence-dependent setups, which appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often very difficult, and various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme for classifying individuals, based on nested domains, that ranks solutions according to their level of infeasibility, which in our case is represented by bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations are conducted, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. Numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach in guiding the search toward the feasible domain. Our approach outperforms other state-of-the-art approaches and commercial solvers. (C) 2009 Elsevier Ltd. All rights reserved.
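
A minimal sketch of the nested-band ranking, with invented numbers: individuals are compared first by the overtime band they fall into, and by fitness only within a band; halving the band width each generation mimics the dynamic tightening described above:

```python
# Rank individuals by (overtime band, fitness): infeasibility dominates,
# fitness only breaks ties within a band.
def band_key(individual, band_width):
    overtime, fitness = individual
    band = int(overtime // band_width)  # band 0 = feasible (no overtime)
    return (band, fitness)

population = [(0.0, 105.0), (12.0, 88.0), (3.0, 95.0), (0.0, 99.0)]
band_width = 8.0
for generation in range(3):
    ranked = sorted(population, key=lambda ind: band_key(ind, band_width))
    print(f"gen {generation}, band width {band_width:.1f}: best = {ranked[0]}")
    band_width *= 0.5  # tighten bands to drive the search into feasibility
```

Note how the individual with 3 hours of overtime wins early (it shares band 0 with the feasible solutions and has a better fitness), but drops behind once the bands tighten and only truly feasible individuals occupy band 0.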