986 results for Parallel methods
Abstract:
This report addresses the problem of acquiring objects using articulated robotic hands. Standard grasps are used to make the problem tractable, and a technique is developed for generalizing these standard grasps to increase their flexibility to variations in the problem geometry. A generalized grasp description is applied to a new problem situation using a parallel search through hand configuration space, and the result of this operation is a global overview of the space of good solutions. The techniques presented in this report have been implemented, and the results are verified using the Salisbury three-finger robotic hand.
Abstract:
Bibliography: p. 24.
Abstract:
International conference with peer review: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany
Abstract:
This letter presents a new parallel method for hyperspectral unmixing based on the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures; then, SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. Designs of the two methods for commodity graphics processing units (GPUs) are presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which delivers the real-time response required by many remotely sensed hyperspectral applications.
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the abundances' physical constraints are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived that properly exploits the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
Abstract:
In this paper, a new parallel method for sparse spectral unmixing of remotely sensed hyperspectral data on commodity graphics processing units (GPUs) is presented. A semi-supervised approach is adopted, which relies on the increasing availability of spectral libraries of materials measured on the ground instead of resorting to endmember extraction methods. The method is based on sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL), which estimates the materials' abundance fractions. The parallel method operates in a pixel-by-pixel fashion, and its implementation properly exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for simulated and real hyperspectral datasets reveal significant speedup factors, up to 164 times, with regard to an optimized serial implementation.
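The pixel-by-pixel structure is what makes the GPU mapping natural: each pixel's abundance estimate is an independent subproblem. As a rough illustration (not the paper's SUNSAL/ADMM solver), the sketch below estimates nonnegative abundances for all pixels at once with projected gradient descent in NumPy; the signature matrix `A` and data `Y` are made-up stand-ins.

```python
import numpy as np

def unmix_pixels(A, Y, iters=500):
    """Per-pixel nonnegative least-squares unmixing by projected
    gradient descent: a simplified stand-in for SUNSAL's ADMM solver.
    A: (bands, endmembers) spectral signatures from a library.
    Y: (bands, pixels) hyperspectral data, one column per pixel.
    Returns X: (endmembers, pixels) abundance estimates with X >= 0.
    Columns (pixels) are fully independent, which is exactly the
    data parallelism a GPU implementation exploits."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = Lipschitz constant
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ X - Y)              # gradient of 0.5*||AX - Y||^2
        X = np.maximum(X - step * grad, 0.0)  # project onto the constraint X >= 0
    return X
```

SUNSAL additionally handles a sparsity-inducing L1 term and, optionally, the sum-to-one constraint; this sketch keeps only the nonnegativity constraint to show the per-pixel independence.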
Abstract:
A new parallel approach for solving a pentadiagonal linear system is presented. The parallel partition method for this system and the TW parallel partition method on a chain of P processors are introduced and discussed. The result of this algorithm is a reduced pentadiagonal linear system of order P − 2, compared with a system of order 2P − 2 for the parallel partition method. More importantly, the new method involves only half as many communication startups as the parallel partition method (and other standard parallel methods) and is hence a far more efficient parallel algorithm.
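For context, the serial baseline that partition methods decompose is banded Gaussian elimination over the five diagonals. The sketch below is a plain serial pentadiagonal solve (not the partition method itself), assuming a diagonally dominant system so that no pivoting is needed; the band names are my own convention.

```python
def solve_pentadiagonal(a, b, d, c, e, rhs):
    """Solve A x = rhs for a pentadiagonal A given by its five bands:
    a[i] = A[i, i-2], b[i] = A[i, i-1], d[i] = A[i, i],
    c[i] = A[i, i+1], e[i] = A[i, i+2]; out-of-range entries are 0.
    Assumes diagonal dominance, so elimination needs no pivoting."""
    n = len(d)
    a, b, d, c, e, rhs = (list(v) for v in (a, b, d, c, e, rhs))
    # Forward elimination: for each pivot row i, zero the entries
    # one and two rows below it.
    for i in range(n - 1):
        m1 = b[i + 1] / d[i]
        d[i + 1] -= m1 * c[i]
        rhs[i + 1] -= m1 * rhs[i]
        if i + 2 < n:
            c[i + 1] -= m1 * e[i]
            m2 = a[i + 2] / d[i]
            b[i + 2] -= m2 * c[i]
            d[i + 2] -= m2 * e[i]
            rhs[i + 2] -= m2 * rhs[i]
    # Back substitution over the remaining upper-triangular band.
    x = [0.0] * n
    x[n - 1] = rhs[n - 1] / d[n - 1]
    if n > 1:
        x[n - 2] = (rhs[n - 2] - c[n - 2] * x[n - 1]) / d[n - 2]
    for i in range(n - 3, -1, -1):
        x[i] = (rhs[i] - c[i] * x[i + 1] - e[i] * x[i + 2]) / d[i]
    return x
```

The partition methods split this sequential dependency chain across P processors and then solve a much smaller reduced system (of order P − 2 here) to couple the pieces.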
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool for unmixing hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without using any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units (GPUs). The impact of the proposed parallel implementation on the complexity and accuracy of VCA is examined using both simulated and real hyperspectral datasets.
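VCA's geometric core is an iterated orthogonal projection: each new endmember is the pixel whose projection onto a direction orthogonal to the endmembers found so far is largest. A toy sketch of that idea (omitting VCA's SNR-dependent preprocessing and subspace projection, so not the full algorithm) might look like:

```python
import numpy as np

def vca_like_extract(Y, p, seed=0):
    """Toy endmember extraction capturing VCA's core iteration.
    Y: (bands, pixels) data matrix; p: number of endmembers.
    Returns the indices of the selected (purest) pixels."""
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    idx = []
    E = np.zeros((bands, 0))          # endmembers found so far
    for _ in range(p):
        # Random direction, minus its component in span(E), so the
        # search direction is orthogonal to the current endmembers.
        w = rng.standard_normal(bands)
        if E.shape[1] > 0:
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        w /= np.linalg.norm(w)
        proj = np.abs(w @ Y)          # |projection| of every pixel
        idx.append(int(np.argmax(proj)))
        E = Y[:, idx]                 # grow the endmember set
    return idx
```

Because a convex mixture can never out-project all of its pure constituents, pure pixels win each argmax; the per-pixel projections are embarrassingly parallel, which is what the GPU implementation exploits.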
Abstract:
Cheap and massively parallel methods to assess the DNA-binding specificity of transcription factors are actively sought, given their prominent regulatory role in cellular processes and diseases. Here we evaluated the use of protein-binding microarrays (PBM) to probe the association of the tumor suppressor AP2α with 6000 human genomic DNA regulatory sequences. We show that the PBM provides accurate relative binding affinities when compared to quantitative surface plasmon resonance assays. A PBM-based study of human healthy and breast tumor tissue extracts allowed the identification of previously unknown AP2α target genes and revealed genes whose direct or indirect interactions with AP2α are affected in the diseased tissues. AP2α binding and regulation were confirmed experimentally in human carcinoma cells for novel target genes involved in tumor progression and resistance to chemotherapeutics, providing a molecular interpretation of the role of AP2α in cancer chemoresistance. Overall, we conclude that this approach provides quantitative and accurate assays of the specificity and activity of tumor suppressor and oncogenic proteins in clinical samples, interfacing genomic and proteomic assays.
Abstract:
Aim: 5-fluoro-2'-deoxyuridine (FdUrd) depletes the endogenous 5'-deoxythymidine triphosphate (dTTP) pool. We hypothesized that uptake of exogenous dThd analogues could be favoured through a feedback-enhanced salvage pathway and studied the FdUrd effect on cellular uptake of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) and 5-125I-iodo-2'-deoxyuridine (125I-IdUrd) in different cancer cell lines in parallel. Methods: Cell uptake of 18F-FLT and 125I-IdUrd was studied in 2 human breast cancer, 2 colon cancer and 2 glioblastoma lines. Cells were incubated with/without 1 µmol/l FdUrd for 1 h and, after washing, with 1.2 MBq 18F-FLT or 125I-IdUrd for 0.3 to 2 h. Cell-bound 18F-FLT and 125I-IdUrd were counted and expressed as a percentage of incubated activity (%IA). Kinetics of 18F-FLT cell uptake and release were studied with/without FdUrd modulation. 2'-3H-methyl-fluorothymidine (2'-3H-FLT) uptake with/without FdUrd pretreatment was tested on U87 spheroids and monolayer cells. Results: Basal uptake at 2 h of 18F-FLT and 125I-IdUrd was in the range of 0.8-1.0 and 0.4-0.6 Bq/cell, respectively. FdUrd pretreatment enhanced 18F-FLT and 125I-IdUrd uptake 1.2-2.1 and 1.7-4.4 fold, respectively, while co-incubation with excess thymidine abrogated all 18F-FLT uptake. FdUrd enhanced 18F-FLT cellular inflow in 2 breast cancer lines by factors of 1.8 and 1.6, respectively, while outflow persisted at a slightly lower rate. 2'-3H-FLT basal uptake was very low, while the uptake increase after FdUrd was similar in U87 monolayer cells and spheroids. Conclusions: Basal uptake of 18F-FLT was frequently higher than that of 125I-IdUrd, but the FdUrd-induced uptake enhancement was stronger for 125I-IdUrd in five of six cell lines. 18F-FLT outflow from cells might explain the observed difference with 125I-IdUrd.
Abstract:
Particle swarm optimization is a metaheuristic that arose to simulate the behavior of a flock of birds in flight, whose movement is locally random but globally determined. The technique has been widely used to address non-linear continuous problems but remains little explored for discrete problems. This paper presents the operation of this metaheuristic and proposes strategies for applying it to discrete optimization problems, in both parallel and sequential forms of execution. Computational experiments were performed on TSP instances selected from the TSPLIB library, with up to 3038 nodes, showing the improved performance of the parallel methods over their sequential versions in both execution time and quality of results.
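One common way to discretize PSO for the TSP, which may differ from the operators this paper proposes, is to encode velocities as swap sequences: the "difference" between two tours is the list of swaps that transforms one into the other. A minimal sequential sketch, with made-up parameter values:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def swaps_towards(current, target):
    """Swap sequence transforming `current` into `target`: the
    discrete analogue of the PSO velocity term (target - position)."""
    cur = list(current)
    swaps = []
    for i in range(len(cur)):
        if cur[i] != target[i]:
            j = cur.index(target[i])
            swaps.append((i, j))
            cur[i], cur[j] = cur[j], cur[i]
    return swaps

def discrete_pso_tsp(dist, n_particles=20, iters=200, seed=1):
    """Minimal discrete PSO for the TSP using swap sequences as
    velocities. Returns the best tour found and its length."""
    rng = random.Random(seed)
    n = len(dist)
    swarm = [rng.sample(range(n), n) for _ in range(n_particles)]
    pbest = list(swarm)
    gbest = min(swarm, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for k, pos in enumerate(swarm):
            pos = list(pos)
            # Apply a random subset of the swaps pulling the particle
            # towards its personal best and the global best.
            for i, j in swaps_towards(pos, pbest[k]) + swaps_towards(pos, gbest):
                if rng.random() < 0.5:
                    pos[i], pos[j] = pos[j], pos[i]
            swarm[k] = pos
            if tour_length(pos, dist) < tour_length(pbest[k], dist):
                pbest[k] = pos
        gbest = min(pbest, key=lambda t: tour_length(t, dist))
    return gbest, tour_length(gbest, dist)
```

A parallel version would evaluate the particles (or sub-swarms) concurrently, since each particle's update depends only on its own state plus the shared global best.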
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Magdeburg, University, Faculty of Natural Sciences, dissertation, 2014
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
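Stripped to its essentials, the identification step described above is a least-squares fit of kinematic parameters to measured poses. The sketch below substitutes a planar 2R arm for the 10-DOF hybrid robot and a toy differential evolution loop for the paper's DE/MCMC machinery; all numbers (population size, tolerances, F, CR) are illustrative.

```python
import math
import random

def fk(lengths, q):
    """Forward kinematics of a planar 2R arm: a stand-in for the
    robot's full DH/POE kinematic model."""
    l1, l2 = lengths
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def pose_error(lengths, poses):
    """Sum of squared tip-position errors over measured poses."""
    return sum((fk(lengths, q)[0] - x) ** 2 + (fk(lengths, q)[1] - y) ** 2
               for q, (x, y) in poses)

def identify_de(poses, nominal, pop=30, gens=200, F=0.6, CR=0.9, seed=3):
    """Toy differential evolution (DE/rand/1/bin) mirroring the
    DE-based identification step: search for the link lengths that
    minimize the measured pose error."""
    rng = random.Random(seed)
    dim = len(nominal)
    # Initialize around the nominal values (+/- 0.05, an assumed tolerance).
    P = [[nominal[d] + rng.uniform(-0.05, 0.05) for d in range(dim)]
         for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            # Mutate and crossover, then keep the trial if it is better.
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else P[i][d]
                     for d in range(dim)]
            if pose_error(trial, poses) < pose_error(P[i], poses):
                P[i] = trial
    return min(P, key=lambda p: pose_error(p, poses))
```

The real problem has many more parameters per joint (the full DH or POE error set) and noisy wire-based measurements, but the structure — an error model, a pose-error cost, and a global optimizer — is the same.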