865 results for Ant-based algorithm
Abstract:
This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast, but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow, but more accurate. The integration of coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable. © 2006 IEEE.
Abstract:
This paper presents a novel coarse-to-fine global localization approach that is inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built based on a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. © 2005 IEEE.
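To make the two-stage retrieval idea concrete, the following minimal Python sketch (not the authors' implementation) outlines the coarse ranking by visual-word histograms over an inverted-index-style vocabulary and the fine stage by descriptor-level voting; the function names, the tf-style weighting and the ratio-test threshold are illustrative assumptions.

```python
# Minimal sketch of the coarse-to-fine localization idea, assuming SIFT-like
# descriptors have already been extracted per location. Illustrative only.
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word (index into vocabulary)."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def coarse_rank(query_desc, vocabulary, location_histograms):
    """Coarse stage: histogram of visual words, ranked by cosine similarity."""
    words = quantize(query_desc, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    hist /= np.linalg.norm(hist) + 1e-12
    sims = location_histograms @ hist          # rows are L2-normalized location histograms
    return np.argsort(-sims)                   # most similar candidate locations first

def fine_vote(query_desc, candidate_descs, ratio=0.8):
    """Fine stage: nearest-neighbour matching with a ratio test; each match is one vote."""
    votes = []
    for loc_desc in candidate_descs:
        d2 = ((query_desc[:, None, :] - loc_desc[None, :, :]) ** 2).sum(-1)
        nearest = np.sort(d2, axis=1)[:, :2]                    # two smallest squared distances
        votes.append(int((nearest[:, 0] < ratio**2 * nearest[:, 1]).sum()))
    return int(np.argmax(votes))                                # index of the winning candidate
```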
Abstract:
The stability of a soil slope is usually analyzed by limit equilibrium methods, in which the identification of the critical slip surface is of principal importance. In this study a spline curve in conjunction with a genetic algorithm is used to search for the critical slip surface, and Spencer's method is employed to calculate the factor of safety. Three examples are presented to illustrate the reliability and efficiency of the method. Slip surfaces defined by a series of straight lines are compared with those defined by spline curves, and the results indicate that, for a given number of slip surface nodal points, spline curves yield better results than the approximation using straight-line segments.
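As a rough illustration of the search strategy just described, the sketch below evolves the nodal y-values of a cubic-spline slip surface with a simplified evolutionary loop (selection plus Gaussian mutation, no crossover) and treats the Spencer's-method factor-of-safety computation as a user-supplied callable; everything here is an assumption-laden sketch rather than the paper's algorithm.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def search_critical_surface(x_nodes, y_bounds, factor_of_safety,
                            pop_size=50, generations=200, seed=0):
    """Evolve nodal y-values of a spline slip surface to minimize the factor of safety.

    `factor_of_safety` is a user-supplied callable (e.g. a Spencer's-method
    routine) taking a CubicSpline and returning a scalar; it is not provided here.
    """
    rng = np.random.default_rng(seed)
    lo, hi = y_bounds                                   # admissible depth range of the nodal points
    x_nodes = np.asarray(x_nodes, dtype=float)          # fixed, strictly increasing x-coordinates
    pop = rng.uniform(lo, hi, size=(pop_size, len(x_nodes)))

    def fitness(ind):
        return factor_of_safety(CubicSpline(x_nodes, ind))

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]                 # keep the most critical surfaces
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, 0.05 * (hi - lo), children.shape)  # Gaussian mutation
        pop = np.vstack([parents, np.clip(children, lo, hi)])
    best = min(pop, key=fitness)
    return CubicSpline(x_nodes, best), fitness(best)
```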
Abstract:
373 p. : ill., graphs, photos, tables
Abstract:
Coarse-particle sedimentation is studied using an algorithm with no adjustable parameters based on Stokesian dynamics. Only inter-particle interactions of hydrodynamic force and gravity are considered. The sedimentation of a simple cubic array of spheres is used to verify the computational results. The scaling and parallelism of the method with OpenMP are presented. Random suspension sedimentation is investigated with Monte Carlo simulation. The computational results are shown to be in good agreement with experimental fits, at a lower computational cost of O(N ln N).
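For orientation only, the toy routine below computes sedimentation velocities from gravity and lowest-order (Oseen-tensor) pairwise hydrodynamics, the simplest version of the mobility problem underlying Stokesian dynamics; the paper's parameter-free O(N ln N) scheme and its OpenMP parallelism are not reproduced, and the O(N^2) double loop here is purely illustrative.

```python
import numpy as np

def sedimentation_velocities(positions, radius=1.0, viscosity=1.0, f_gravity=(0.0, 0.0, -1.0)):
    """Velocities v_i = sum_j M_ij F_j with Stokes self-mobility and an Oseen pair mobility."""
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    F = np.tile(np.asarray(f_gravity, dtype=float), (n, 1))     # gravity/buoyancy force per particle
    v = F / (6.0 * np.pi * viscosity * radius)                  # isolated-sphere Stokes drag term
    for i in range(n):                                          # O(N^2) pair sum, illustrative only
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            rhat = r / d
            oseen = (np.eye(3) + np.outer(rhat, rhat)) / (8.0 * np.pi * viscosity * d)
            v[i] += oseen @ F[j]                                # far-field hydrodynamic coupling
    return v
```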
Abstract:
This project introduces an improvement of the vision capability of the Robotino robot operating under the ROS platform. A method for recognizing object classes using binary features has been developed. The proposed method performs a binary classification of the descriptors of each training image to characterize the appearance of the object class. It uses a binary descriptor based on the difference of gray intensity between pixels in the image. It shows that binary features are suitable for representing an object class in spite of the low resolution and the weak detail information about the object in the image. It also introduces the use of a boosting method (AdaBoost) for feature selection, allowing redundancies and noise to be eliminated in order to improve the performance of the classifier. Finally, a kernel classifier, an SVM (Support Vector Machine), is trained with the available database and applied for predictions on new images. One possible future work is to establish visual servo-control, that is to say, the reaction of the robot to the detection of the object.
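A hedged sketch of the described pipeline using scikit-learn: AdaBoost over decision stumps ranks the dimensions of the binary descriptors, the most informative ones are kept, and an SVM with an RBF kernel is trained on the reduced representation. The parameter values and helper names are illustrative; the binary-descriptor extraction and the Robotino/ROS integration are outside this snippet.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def train_detector(X, y, n_selected=64):
    """X: (n_samples, n_binary_features) 0/1 descriptor matrix; y: object-class labels."""
    booster = AdaBoostClassifier(n_estimators=200).fit(X, y)            # boosting over decision stumps
    keep = np.argsort(booster.feature_importances_)[::-1][:n_selected]  # keep the most informative dimensions
    svm = SVC(kernel="rbf").fit(X[:, keep], y)                          # kernel classifier on reduced features
    return keep, svm

def predict(keep, svm, X_new):
    """Apply the trained classifier to descriptors extracted from new images."""
    return svm.predict(X_new[:, keep])
```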
Abstract:
Background: Primary distal renal tubular acidosis (dRTA) caused by mutations in the genes that code for the H+-ATPase pump subunits is a heterogeneous disease with a poor phenotype-genotype correlation. Up to now, large cohorts of Tunisian dRTA patients have not been analyzed, and molecular defects may differ from those described in other ethnicities. We aim to identify molecular defects present in the ATP6V1B1, ATP6V0A4 and SLC4A1 genes in a Tunisian cohort, according to the following algorithm: first, ATP6V1B1 gene analysis in dRTA patients with sensorineural hearing loss (SNHL) or unknown hearing status; afterwards, ATP6V0A4 gene study in dRTA patients with normal hearing, and in those without any structural mutation in the ATP6V1B1 gene despite presenting SNHL; finally, analysis of the SLC4A1 gene in those patients with negative results in the previous studies. Methods: 25 children (19 boys) with dRTA from 20 families of Tunisian origin were studied. DNA was extracted by the standard phenol/chloroform method. Molecular analysis was performed by PCR amplification and direct sequencing. Results: In the index cases, ATP6V1B1 gene screening resulted in a mutation detection rate of 81.25%, which increased up to 95% after ATP6V0A4 gene analysis. Three ATP6V1B1 mutations were observed: one frameshift mutation (c.1155dupC; p.Ile386fs) in exon 12; a G to C single-nucleotide substitution at the acceptor splice site (c.175-1G>C; p.?) in intron 2; and one novel missense mutation (c.1102G>A; p.Glu368Lys) in exon 11. We also report four mutations in the ATP6V0A4 gene: one single-nucleotide deletion in exon 13 (c.1221delG; p.Met408Cysfs*10); the nonsense mutation c.16C>T; p.Arg6* in exon 3; and the missense changes c.1739T>C; p.Met580Thr in exon 17 and c.2035G>T; p.Asp679Tyr in exon 19. Conclusion: Molecular diagnosis of the ATP6V1B1 and ATP6V0A4 genes was performed in a large Tunisian cohort with dRTA. We identified three different ATP6V1B1 and four different ATP6V0A4 mutations in 25 Tunisian children. One of them, c.1102G>A; p.Glu368Lys in the ATP6V1B1 gene, had not previously been described. Among patients deaf since childhood, 75% carried the ATP6V1B1 c.1155dupC mutation in homozygosity. Based on these results, we propose a new diagnostic strategy to facilitate genetic testing in North Africans with dRTA and SNHL.
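The sequential screening algorithm stated in the Background can be written down compactly; in the sketch below, `sequence_gene` stands in for PCR amplification plus direct sequencing and is assumed to return a pathogenic variant or None, and the `patient.hearing` attribute is a hypothetical representation of the hearing status.

```python
def drta_gene_screening(patient, sequence_gene):
    """Return (gene, variant) following the ATP6V1B1 -> ATP6V0A4 -> SLC4A1 strategy."""
    if patient.hearing in ("SNHL", "unknown"):             # hypothetical attribute: hearing status
        variant = sequence_gene(patient, "ATP6V1B1")       # stands in for PCR + direct sequencing
        if variant:
            return "ATP6V1B1", variant
    variant = sequence_gene(patient, "ATP6V0A4")           # normal hearing, or ATP6V1B1-negative SNHL
    if variant:
        return "ATP6V0A4", variant
    return "SLC4A1", sequence_gene(patient, "SLC4A1")      # analyzed only when the others are negative
```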
Abstract:
As a basic tool of modern biology, sequence alignment can provide useful information on the fold, function, and active sites of a protein. In many cases, higher-quality sequence alignment means better performance. The motivation of the present work is to increase the ability of existing scoring schemes/algorithms by better accounting for residue–residue correlations. Based on a coarse-grained approach, the hydrophobic force between each pair of residues is written out from the protein sequence. This results in the construction of an intramolecular hydrophobic force network that describes the whole set of residue–residue interactions of each protein molecule and characterizes the protein's biological properties in the hydrophobic aspect. A former work has suggested that such a network can characterize the top-weighted feature regarding hydrophobicity. Moreover, for each homologous protein of a family, the corresponding network shares some common and representative family characters that eventually govern the conservation of biological properties during protein evolution. In the present work, we score such family-representative characters of a protein by the deviation of its intramolecular hydrophobic force network from that of the background. Such a score can assist existing scoring schemes/algorithms and boost the ability of multiple sequence alignment, e.g. achieving a prominent increase (50%) in finding structurally alike residue segments at a low identity level. As its theoretical basis is different, the present scheme can assist most existing algorithms and improve their efficiency remarkably.
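One heavily simplified way to express the scoring idea, not the paper's coarse-grained force model: map a sequence to a pairwise "hydrophobic force" matrix built from the Kyte-Doolittle hydropathy scale, and use its deviation from a background (e.g. family-average) matrix as an auxiliary score. The distance damping and the Frobenius-norm deviation are illustrative choices.

```python
import numpy as np

KD = {  # Kyte-Doolittle hydropathy scale
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5, 'E': -3.5,
    'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8,
    'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def hydrophobic_network(seq):
    """Pairwise product of hydropathy values, damped by sequence separation (illustrative model)."""
    h = np.array([KD[a] for a in seq])
    sep = np.abs(np.subtract.outer(np.arange(len(seq)), np.arange(len(seq)))) + 1
    return np.outer(h, h) / sep

def deviation_score(seq, background):
    """Frobenius-norm deviation of the sequence's network from a background network."""
    net = hydrophobic_network(seq)
    n = min(len(net), len(background))
    return float(np.linalg.norm(net[:n, :n] - background[:n, :n]))
```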
Abstract:
To simulate fracture behaviors in concrete more realistically, a theoretical analysis of the potential problem in the quasi-static method is presented, and then a novel algorithm is proposed that takes into account the inertia effect due to unstable crack propagation while requiring much less computational effort than a purely dynamic method. The inertia effect due to load increase becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, results may become questionable if a fracture process including unstable cracking is simulated by a quasi-static procedure that completely excludes inertia effects. However, simulating experiments with not very high loading rates by the dynamic method requires much greater computational effort. In this investigation, which can be taken as a natural continuation, the potential problem of the quasi-static method is analyzed based on the dynamic equations of motion. One solution to this problem is the new algorithm mentioned above. Numerical examples based on the generalized beam (GB) lattice model show both fracture processes under different loading rates and the capability of the new algorithm.
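The control flow of such a mixed procedure might look like the sketch below, where quasi-static load steps are used while cracking is stable and a dynamic (inertia-including) solver is invoked only while unstable propagation is detected; all numerical operators are passed in as callables and the kinetic-energy settling test is an assumed convergence criterion, so this illustrates the switching logic only, not the GB lattice implementation.

```python
def mixed_quasi_static_dynamic(load_steps, solve_static, solve_dynamic,
                               crack_is_unstable, dt_dyn, settle_tol):
    """Alternate quasi-static load steps with dynamic sub-stepping during unstable cracking."""
    state = None
    for load in load_steps:
        state = solve_static(state, load)                       # inertia ignored in the stable regime
        while crack_is_unstable(state) or state.kinetic_energy > settle_tol:
            state = solve_dynamic(state, load, dt_dyn)          # inertia kept until motion settles
    return state
```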
Abstract:
Attempts to model any present or future power grid face a huge challenge because a power grid is a complex system, with feedback and multi-agent behaviors, comprising generation, distribution, storage and consumption systems and using various control and automation computing systems to manage electricity flows. Our approach to modeling is to build upon an established model of the low-voltage electricity network, which is tested and proven, by extending it to a generalized energy model. However, in order to address the crucial issues of energy efficiency, additional processes such as energy conversion and storage, and further energy carriers, such as gas, heat, etc., besides the traditional electrical one, must be considered. Therefore a more powerful model, provided with enhanced nodes or conversion points able to deal with multidimensional flows, is required. This article addresses the issue of modeling a local multi-carrier energy network. This problem can be considered as an extension of modeling a low-voltage distribution network located in some urban or rural geographic area. However, instead of using an external power flow analysis package to do the power flow calculations, as is done for electric networks, in this work we integrate a multi-agent algorithm to perform the task, concurrently with the other simulation tasks, and not only for electricity but also for a number of additional energy carriers. As the model is mainly focused on system operation, generation and load models are not developed.
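A minimal sketch of the "enhanced node or conversion point" idea: a node couples several energy carriers through an energy-hub-style conversion matrix. The class name, carrier ordering and efficiency figures are illustrative assumptions, and the multi-agent flow computation itself is not reproduced here.

```python
import numpy as np

class EnergyHubNode:
    """A conversion point coupling several energy carriers via a conversion matrix."""
    def __init__(self, coupling):
        self.coupling = np.asarray(coupling, dtype=float)   # rows: output carriers, columns: input carriers

    def outputs(self, inputs):
        """Map carrier inputs (e.g. grid electricity, gas) to carrier outputs (electricity, heat)."""
        return self.coupling @ np.asarray(inputs, dtype=float)

# Example: electricity passes straight through, while gas is split by a CHP unit
# (30% to electricity, 50% to heat); the efficiency figures are illustrative only.
hub = EnergyHubNode([[1.0, 0.3],    # electricity out = electricity in + 0.3 * gas in
                     [0.0, 0.5]])   # heat out        = 0.5 * gas in
print(hub.outputs([2.0, 10.0]))     # -> [5.0, 5.0]
```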
Abstract:
215 p.
Abstract:
We have successfully extended our implicit hybrid finite element/volume (FE/FV) solver to flows involving two immiscible fluids. The solver is based on the segregated pressure correction or projection method on staggered unstructured hybrid meshes. An intermediate velocity field is first obtained by solving the momentum equations with the matrix-free implicit cell-centered FV method. The pressure Poisson equation is solved by the node-based Galerkin FE method for an auxiliary variable. The auxiliary variable is used to update the velocity field and the pressure field. The pressure field is carefully updated by taking into account the velocity divergence field. This updating strategy can be rigorously proven to eliminate the unphysical pressure boundary layer and is crucial for the correct temporal convergence rate. Our current staggered-mesh scheme is distinct from conventional ones in that we store the velocity components at cell centers and the auxiliary variable at vertices. The fluid interface is captured by solving an advection equation for the volume fraction of one of the fluids. The same matrix-free FV method as the one used for the momentum equations is used to solve the advection equation. We will focus on the interface sharpening strategy to minimize the smearing of the interface over time. We have developed and implemented a global mass conservation algorithm that enforces the conservation of mass for each fluid.
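The order of operations in one segregated time step, as described above, can be outlined schematically; the discrete operators are passed in as callables and the exact coefficient in the pressure update (written here in an incremental, rotational form) is an assumption rather than the authors' formula.

```python
def projection_step(u, p, alpha, dt, visc,
                    solve_momentum, solve_poisson, div, grad, advect_vof):
    """One segregated time step; all discrete operators are supplied as callables."""
    u_star = solve_momentum(u, p, alpha, dt)       # 1. intermediate velocity (cell-centred FV solve)
    phi = solve_poisson(div(u_star) / dt)          # 2. auxiliary variable from a Poisson solve (node-based FE)
    u_new = u_star - dt * grad(phi)                # 3. project the velocity toward a divergence-free field
    p_new = p + phi - visc * div(u_star)           # 4. pressure update including the velocity-divergence term
    alpha_new = advect_vof(alpha, u_new, dt)       # 5. advect the volume fraction to capture the interface
    return u_new, p_new, alpha_new
```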
Abstract:
Recently, probability models on rankings have been proposed in the field of estimation of distribution algorithms (EDAs) in order to solve permutation-based combinatorial optimisation problems. In particular, distance-based ranking models, such as the Mallows and Generalized Mallows models under the Kendall's-τ distance, have demonstrated their validity for solving this type of problem. Nevertheless, there are still many directions that deserve further study. In this paper, we extend the use of distance-based ranking models in the framework of EDAs by introducing new distance metrics such as Cayley and Ulam. In order to analyse the performance of the Mallows and Generalized Mallows EDAs under the Kendall, Cayley and Ulam distances, we run them on a benchmark of 120 instances from four well-known permutation problems. The conducted experiments showed that no single metric performs best in all the problems. However, the statistical test pointed out that the Mallows-Ulam EDA is the most stable algorithm among the studied proposals.
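For readers unfamiliar with this family of algorithms, the sketch below shows a minimal Mallows EDA under the Kendall distance: sampling uses the repeated-insertion construction, the consensus is crudely taken as the best selected individual, and the spread parameter theta is kept fixed instead of being estimated by maximum likelihood as in the actual Mallows/Generalized Mallows EDAs.

```python
import numpy as np

def sample_mallows_kendall(consensus, theta, rng):
    """Sample a permutation at a Mallows-distributed Kendall distance from `consensus`."""
    n = len(consensus)
    pi = []
    for i in range(1, n + 1):                            # repeated insertion model
        v = np.arange(i)                                 # number of new inversions created
        probs = np.exp(-theta * v); probs /= probs.sum()
        pi.insert(int(i - rng.choice(v, p=probs) - 1), i - 1)
    return np.array([pi[consensus[k]] for k in range(n)])  # recentre the sample at the consensus

def mallows_eda(fitness, n, pop_size=100, generations=200, theta=1.0, seed=0):
    """Maximize `fitness` over permutations of 0..n-1 with a simple Mallows EDA loop."""
    rng = np.random.default_rng(seed)
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)              # selection by fitness
        consensus = pop[0]                               # crude consensus: best individual
        pop = [consensus] + [sample_mallows_kendall(consensus, theta, rng)
                             for _ in range(pop_size - 1)]
    return max(pop, key=fitness)
```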
Abstract:
Life is the result of the execution of molecular programs: like how an embryo is fated to become a human or a whale, or how a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—which is a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.
The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
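An illustrative (and deliberately simplified) version of such a systematic search follows: binary edge patterns are accepted greedily if they bind to themselves with a target strength while cross-binding to previously accepted patterns stays below a margin. The interaction model, counting aligned active blunt ends against a reversed facing pattern, is an assumption for illustration, not the thesis's energy model or search procedure.

```python
from itertools import combinations

def binding(a, b):
    """Stacking strength between edge pattern a and facing edge pattern b (reversed face to face)."""
    return sum(x & y for x, y in zip(a, reversed(b)))

def find_orthogonal_bonds(n_positions=8, n_active=4, target=4, margin=2):
    """Greedily collect self-binding patterns whose cross-talk with accepted patterns stays low."""
    accepted = []
    for bits in combinations(range(n_positions), n_active):
        cand = tuple(1 if i in bits else 0 for i in range(n_positions))
        if binding(cand, cand) != target:                       # the bond must close on itself
            continue
        if all(binding(cand, other) <= target - margin for other in accepted):
            accepted.append(cand)                               # orthogonal to everything so far
    return accepted

print(find_orthogonal_bonds())
```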
The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.
The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system extracted from bacteria. The system ensures that, when a cell divides, each daughter cell gets one copy of the cell’s DNA by actively pushing each copy to the opposite poles of the cell. We demonstrate dynamically expandable nanostructures, which makes stacking bonds a promising candidate for reconfigurable connectors for nanoscale machine parts.
Abstract:
This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, which is based on a recently introduced Fourier continuation (FC) algorithm for the solution of partial differential equations on the basis of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, and it requires CFL constraints that scale only linearly with the spatial discretization. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for the implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully realized three-dimensional extension of FC-based solvers to date. Challenges for three-dimensional elastodynamics simulations, such as the treatment of corners and edges in three-dimensional geometries, the existence of variable coefficients arising from physical configurations and/or the use of curvilinear coordinate systems, and the treatment of boundary conditions, are all addressed. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies, as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison to experimental studies of guided-wave scattering by through-thickness holes in thin plates.
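As a toy illustration of the continuation idea only, the snippet below fits samples of a non-periodic function on [a, b] with a Fourier series that is periodic on a slightly extended interval (a least-squares continuation) and differentiates it mode by mode; the thesis's FC algorithm, its boundary treatment and its accuracy properties are not reproduced by this simple least-squares variant, and all parameter values are illustrative.

```python
import numpy as np

def fc_derivative(f_vals, a=0.0, b=1.0, extension=0.25, n_modes=21):
    """Approximate f'(x) at the sample points via a least-squares Fourier continuation.

    The samples on [a, b] are fitted by a Fourier series that is periodic on the
    extended interval [a, b + extension]; differentiation is then done mode by mode.
    """
    n = len(f_vals)
    x = np.linspace(a, b, n)
    period = (b - a) + extension
    k = np.arange(-(n_modes // 2), n_modes // 2 + 1)          # Fourier mode numbers
    basis = np.exp(2j * np.pi * np.outer(x - a, k) / period)  # n x n_modes design matrix
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(f_vals, dtype=complex), rcond=1e-10)
    d_basis = basis * (2j * np.pi * k / period)               # derivative of each Fourier mode
    return (d_basis @ coeffs).real

# Quick check on a smooth, non-periodic function: f(x) = exp(x) on [0, 1].
x = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(fc_derivative(np.exp(x)) - np.exp(x)))
print(f"max derivative error: {err:.2e}")
```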