16 results for Redundant manipulators

in the Cambridge University Engineering Department Publications Database


Relevance:

20.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

This paper advances the proposition that in many electronic products, the partitioning scheme adopted and the interconnection system used to link the sub-assemblies or components are intimately related to the economic benefits, and hence the attractiveness, of reusing these items. An architecture has been developed in which the residual values of the connectors, components and sub-assemblies are maximized, and opportunities for take-back and reuse of redundant items are greatly enhanced. The system described also offers significant manufacturing cost benefits in terms of ease of assembly, compactness and robustness.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a novel cortex-inspired feed-forward hierarchical object recognition system based on complex wavelets is proposed and tested. Complex wavelets have three properties that are key for object representation: shift invariance, which enables the extraction of stable local features; good directional selectivity, which simplifies the determination of image orientations; and limited redundancy, which allows efficient signal analysis through their multi-resolution decomposition. We find that the implementation of the HMAX object recognition model in [1, 2] is rather over-complete and includes much redundant information and processing, and we have optimized the structure of the model to make it more efficient. Specifically, we have used the Caltech 5 standard dataset to compare with Serre's model in [2], which employs Gabor filter banks. Results demonstrate that the complex wavelet model is about 4 times faster than the Serre model while giving comparable recognition performance. © 2011 IEEE.
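
A minimal sketch of the kind of S1/C1 feature stage such a cortex-inspired hierarchy builds on, using a small bank of oriented complex filters as a simplified stand-in for the dual-tree complex wavelet subbands; the filter construction, sizes, orientation count and pooling radius here are illustrative assumptions, not the published configuration.

```python
import numpy as np
from scipy.signal import fftconvolve

def oriented_complex_filters(size=9, n_orient=6):
    """Small bank of complex-valued oriented filters (illustrative stand-in
    for complex wavelet subbands; size and orientations are assumptions)."""
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    filters = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        u = xs * np.cos(theta) + ys * np.sin(theta)
        envelope = np.exp(-(xs**2 + ys**2) / (2 * (size / 4) ** 2))
        carrier = np.exp(1j * 2 * np.pi * u / (size / 2))  # complex carrier
        filters.append(envelope * carrier)
    return filters

def s1_c1(image, filters, pool=8):
    """S1: magnitudes of complex filter responses (approximately shift invariant).
    C1: local max pooling over non-overlapping pool x pool neighbourhoods."""
    c1_maps = []
    for f in filters:
        s1 = np.abs(fftconvolve(image, f, mode="same"))
        h, w = s1.shape
        s1 = s1[: h - h % pool, : w - w % pool]
        c1 = s1.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        c1_maps.append(c1)
    return np.stack(c1_maps)  # (n_orient, H/pool, W/pool) feature maps

# Usage: features = s1_c1(np.random.rand(128, 128), oriented_complex_filters())
```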

Relevance:

10.00%

Publisher:

Abstract:

Most of the manual labor needed to create the geometric building information model (BIM) of an existing facility is spent converting raw point cloud data (PCD) to a BIM description. Automating this process would drastically reduce the modeling cost. Surface extraction from PCD is a fundamental step in this process. Compact modeling of redundant points in PCD as a set of planes leads to smaller file size and fast interactive visualization on cheap hardware. Traditional approaches for smooth surface reconstruction do not explicitly model the sparse scene structure or significantly exploit the redundancy. This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem. Through sparse optimization, points in PCD are segmented according to their embedded linear subspaces. Within each segmented part, plane models can be estimated. Experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.
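
As an illustration of the plane-estimation step mentioned above (not the paper's sparsity-inducing segmentation itself), a least-squares plane fit via SVD for one segmented group of points might look like the following sketch; the (N, 3) point-array layout and the residual check are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of segmented points.
    Returns (unit normal, centroid); the plane satisfies n . (x - c) = 0."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal), centroid

def point_plane_distances(points, normal, centroid):
    """Absolute point-to-plane distances, e.g. for checking how compactly a
    segment is represented by a single plane model."""
    return np.abs((points - centroid) @ normal)

# Usage sketch: for each segment produced by the segmentation step,
#   n, c = fit_plane(seg); residual = point_plane_distances(seg, n, c).max()
```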

Relevance:

10.00%

Publisher:

Abstract:

Videogrammetry is an inexpensive and easy-to-use technology for spatial 3D scene recovery. When applied to large-scale civil infrastructure scenes, only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames requires careful consideration. Videotaping a built infrastructure scene results in large video files filled with blurry, noisy, or redundant frames. This is due to frame-rate-to-camera-speed ratios that are often higher than necessary, camera and lens imperfections and limitations that result in imaging noise, and occasional jerky motions of the camera that result in motion blur, all of which can significantly affect the performance of the videogrammetric pipeline. To tackle these issues, this paper proposes a novel method for automatically selecting an optimized number of informative, high-quality frames. In this method, blurred frames are first removed using thresholds determined by the minimum level of frame quality required to obtain robust results. An optimum number of key frames are then selected from the remaining frames using selection criteria devised by the authors. Experimental results show that the proposed method outperforms existing methods in terms of 3D reconstruction quality, while maintaining the optimum number of extracted frames needed to generate high-quality 3D point clouds. © 2012 Elsevier Ltd. All rights reserved.
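
A minimal sketch of the blur-screening step in spirit only (the abstract does not specify the quality measure or thresholds actually used): a common proxy is the variance of the Laplacian, with frames below a chosen threshold discarded. OpenCV (cv2) for video decoding, the threshold value, and the subsampling stride are assumptions.

```python
import cv2

def sharp_frames(video_path, blur_threshold=100.0, stride=5):
    """Yield (index, frame) pairs whose variance-of-Laplacian sharpness
    exceeds blur_threshold; stride subsamples the raw frame rate.
    The measure and threshold are illustrative, not the paper's criteria."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:
                yield index, frame
        index += 1
    cap.release()

# Key-frame selection (e.g. enforcing sufficient baseline and overlap between
# retained frames) would then operate on the frames this generator yields.
```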

Relevance:

10.00%

Publisher:

Abstract:

To reduce surgical trauma to the patient, minimally invasive surgery has been gaining considerable importance since the eighties. More recently, robot-assisted minimally invasive surgery was introduced to enhance the surgeon's performance in these procedures. This has resulted in intensive research on the design, fabrication and control of surgical robots over the last decades. A new development in the field of surgical tool manipulators is presented in this article: a flexible manipulator with distributed degrees of freedom powered by microhydraulic actuators. The tool consists of successive flexible segments, each with two bending degrees of freedom. To actuate these compliant segments, dedicated fluidic actuators are incorporated, together with compact hydraulic valves that control the actuator motion. The development of microvalves for this application was especially challenging, and they are the main focus of this paper. The valves distribute hydraulic power from one common high-pressure supply to a series of artificial muscle actuators. Tests show that the angular stroke of each segment of this medical instrument is 90°. © 2012 Springer Science+Business Media, LLC.
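
As a rough illustration of how the pose of such a chain of bending segments can be computed, a forward-kinematics sketch under the common piecewise-constant-curvature assumption follows; the constant-curvature model, the two-angle segment parameterisation and the segment lengths are modelling assumptions not stated in the abstract.

```python
import numpy as np

def segment_transform(theta, phi, length):
    """Homogeneous transform of one bending segment under a constant-curvature
    assumption: bend angle theta (rad) in a plane at azimuth phi, arc length
    `length`. All parameters here are illustrative."""
    if abs(theta) < 1e-9:
        pos = np.array([0.0, 0.0, length])
    else:
        r = length / theta
        pos = np.array([r * (1 - np.cos(theta)), 0.0, r * np.sin(theta)])
    c, s = np.cos(theta), np.sin(theta)
    bend = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # bend about local y
    cp, sp = np.cos(phi), np.sin(phi)
    azim = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])  # bending-plane azimuth
    T = np.eye(4)
    T[:3, :3] = azim @ bend @ azim.T
    T[:3, 3] = azim @ pos
    return T

def tip_pose(segments):
    """Compose the transforms of successive segments [(theta, phi, length), ...]."""
    T = np.eye(4)
    for theta, phi, length in segments:
        T = T @ segment_transform(theta, phi, length)
    return T

# Example: two segments, each bent 45 deg (within the 90 deg stroke reported).
# pose = tip_pose([(np.pi / 4, 0.0, 0.02), (np.pi / 4, np.pi / 2, 0.02)])
```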

Relevance:

10.00%

Publisher:

Abstract:

Two adaptive numerical modelling techniques have been applied to the prediction of fatigue thresholds in Ni-base superalloys. A Bayesian neural network and a neurofuzzy network have been compared, both of which can automatically adjust the network's complexity to the current dataset. In both cases, despite inevitable data restrictions, threshold values have been modelled with some degree of success. However, it is argued in this paper that the neurofuzzy modelling approach offers real benefits over a classical neural network: the mathematical complexity of the relationships can be restricted to allow for the paucity of data, and the linguistic fuzzy rules produced allow the model to be assessed without extensive interrogation and examination using a hypothetical dataset. The additive neurofuzzy network structure means that redundant inputs can be excluded from the model and simple sub-networks produced which represent global output trends. Both of these aspects are important for final verification and validation of the information extracted from the numerical data. In some situations neurofuzzy networks may require less data to produce a stable solution, and they may be easier to verify in the light of existing physical understanding because of the transparent linguistic rules they produce. © 1999 Elsevier Science S.A.
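
A minimal sketch of the additive structure alluded to above, in which the model output is a sum of simple per-input sub-models so that inputs contributing nothing can be identified and dropped; the triangular membership functions and the single least-squares fit are illustrative assumptions, not the network described in the paper.

```python
import numpy as np

def triangular_memberships(x, centres):
    """Evaluate overlapping triangular membership functions (a simple fuzzy
    partition of one input's range) at the points in x; returns (N, n_sets)."""
    width = centres[1] - centres[0]
    return np.clip(1.0 - np.abs(x[:, None] - centres[None, :]) / width, 0.0, None)

def fit_additive_model(X, y, n_sets=5):
    """Fit y ~ sum_i f_i(x_i), each f_i a weighted sum of membership functions
    on input i, via one linear least-squares solve. Returns per-input weights
    and the membership-function centres."""
    n, d = X.shape
    centres = [np.linspace(X[:, i].min(), X[:, i].max(), n_sets) for i in range(d)]
    design = np.hstack([triangular_memberships(X[:, i], centres[i]) for i in range(d)])
    weights, *_ = np.linalg.lstsq(design, y, rcond=None)
    return [weights[i * n_sets:(i + 1) * n_sets] for i in range(d)], centres

# An input whose fitted weights are all near zero contributes a flat sub-network
# and can be treated as redundant and removed, mirroring the pruning idea above.
```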

Relevance:

10.00%

Publisher:

Abstract:

This paper presents new methods for computing the step sizes of the subband-adaptive iterative shrinkage-thresholding algorithms proposed by Bayram & Selesnick and Vonesch & Unser. The method yields tighter wavelet-domain bounds of the system matrix, thus leading to improved convergence speeds. It is directly applicable to non-redundant wavelet bases, and we also adapt it for cases of redundant frames. It turns out that the simplest and most intuitive setting for the step sizes that ignores subband aliasing is often satisfactory in practice. We show that our methods can be used to advantage with reweighted least squares penalty functions as well as L1 penalties. We emphasize that the algorithms presented here are suitable for performing inverse filtering on very large datasets, including 3D data, since inversions are applied only to diagonal matrices and fast transforms are used to achieve all matrix-vector products.
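
A minimal sketch of iterative shrinkage-thresholding with per-subband step sizes, in the spirit of the algorithms discussed above (the paper's actual bound computation is not reproduced); a 1-D orthogonal wavelet via PyWavelets and user-supplied forward/adjoint operators H and Ht are assumptions.

```python
import numpy as np
import pywt

def soft(x, t):
    """Soft-thresholding, the proximal map of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def subband_ista(y, H, Ht, step_per_subband, lam, wavelet="db4", levels=4, n_iter=100):
    """Subband-adaptive ISTA sketch for min ||y - H x||^2 + lam ||w||_1, where
    x is the inverse wavelet transform of the coefficients w. step_per_subband[s]
    is the step size used in subband s; choosing these from tight subband bounds
    of the system matrix is what the paper above addresses."""
    w = pywt.wavedec(Ht(y), wavelet, level=levels)
    for _ in range(n_iter):
        x = pywt.waverec(w, wavelet)[: len(y)]
        grad = pywt.wavedec(Ht(H(x) - y), wavelet, level=levels)
        # Gradient step plus shrinkage, subband by subband (in practice the
        # lowpass band is often left unthresholded).
        w = [soft(ws - step_per_subband[s] * gs, step_per_subband[s] * lam)
             for s, (ws, gs) in enumerate(zip(w, grad))]
    return pywt.waverec(w, wavelet)[: len(y)]

# Usage sketch with a blur kernel h: H = lambda x: np.convolve(x, h, mode="same"),
# Ht = lambda x: np.convolve(x, h[::-1], mode="same"), and one step size per subband.
```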