878 results for "Vision-based row tracking algorithm"
Abstract:
With the development of intelligent robotic systems, the online perception capability of robots has received growing attention. Obstacle detection is an important component of this capability, and because visual sensors offer unique advantages, vision-based obstacle detection methods have become a focus of current research. Outdoor unstructured environments are structurally complex and offer robots little usable prior knowledge, so many obstacle detection systems fail to work effectively in them. This thesis adopts a global-local strategy to analyze the scene from coarse to fine, compensating for the lack of prior knowledge about outdoor unstructured environments and improving the robot's online perception capability. Following this strategy, a disparity projection map module is introduced into the disparity-map-based obstacle detection framework, yielding an obstacle detection framework based on the disparity projection map: the disparity projection map module analyzes the global disparity distribution of the scene, while the stereo matching module locally analyzes the geometric contours of foreground objects. On this basis, and addressing the various problems met in practical applications, an obstacle detection algorithm for outdoor unstructured environments is proposed, with the following features: 1) by analyzing the ground correlation line in the disparity projection map, the algorithm obtains the disparity distribution of the scene; this information is used both to dynamically adjust the disparity search range of the matching algorithm, improving real-time performance and robustness, and to remove the background ground surface, simplifying obstacle segmentation; 2) dual-domain filtering suppresses noise while preserving sharp edge features, reducing the difficulty of stereo matching in depth-discontinuity regions; 3) borrowing the idea of inverse reprojection, scanline images are resampled so that the dynamic adjustment of the disparity search range during stereo matching is achieved equivalently before matching; 4) a connected-component-based diffusion method replaces the traditional SAD local matching algorithm, producing high-quality disparity maps and ultimately improving detection accuracy. The algorithm was validated experimentally in outdoor unstructured environments: by varying the baseline length, its effectiveness over different perception distances was verified, and the experiments show that it can effectively detect obstacles within a certain distance range.
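The abstract gives no implementation details, so the following is only a minimal sketch of the disparity-projection (V-disparity style) idea it builds on: accumulate a per-row histogram of disparities, then fit the dominant ground correlation line, which bounds the useful disparity search range and marks ground pixels for removal. All function names and parameters here are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def disparity_projection(disp, d_max):
    """Accumulate a per-row histogram of disparity values (a V-disparity map).

    disp: HxW integer disparity image (negative = invalid); d_max: number of bins.
    """
    h, _ = disp.shape
    proj = np.zeros((h, d_max), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        valid = row[(row >= 0) & (row < d_max)]
        np.add.at(proj[v], valid, 1)   # unbuffered in-place histogram update
    return proj

def ground_line(proj):
    """Fit the dominant ground correlation line d = a*v + b by least squares
    over the strongest bin in each row (a crude stand-in for a Hough fit)."""
    rows = np.arange(proj.shape[0])
    peaks = proj.argmax(axis=1)
    mask = proj.max(axis=1) > 0        # keep only rows with any support
    a, b = np.polyfit(rows[mask], peaks[mask], 1)
    return a, b

# Pixels whose disparity rises well above the fitted ground line are obstacle
# candidates; the line also bounds the matcher's disparity search range.
```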
Abstract:
In motor design, optimization is often needed to obtain reasonable structural dimensions and parameters. Motor design is in essence a constrained, complex, nonlinear continuous-function optimization problem, and a satisfactory result requires an algorithm with both high accuracy and fast convergence. A new hybrid algorithm is proposed for optimizing the dimensions and overall structure of permanent magnet motors. Combining a chaos algorithm with particle swarm optimization, and taking a micro permanent magnet motor as an example, several variables including the slot shape are optimized; the results demonstrate that the algorithm is effective and fast and is suitable for solving problems of this kind.
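The abstract does not specify how chaos and PSO are coupled. A common scheme, sketched below under that assumption, uses a logistic chaotic map to generate well-spread initial particles while a standard PSO update does the search; the objective function and all constants are placeholders rather than the paper's motor model.

```python
import numpy as np

def chaotic_sequence(n, dim, x0=0.345, mu=4.0):
    """Logistic-map sequence x <- mu*x*(1-x), used to spread initial particles."""
    seq = np.empty((n, dim))
    x = np.full(dim, x0) + 1e-3 * np.arange(dim)  # distinct seed per dimension
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq  # values stay in (0, 1)

def chaos_pso(cost, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """PSO with chaotic initialization; lo, hi are numpy arrays of box bounds."""
    dim = len(lo)
    pos = lo + chaotic_sequence(n, dim) * (hi - lo)   # chaotic initialization
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    rng = np.random.default_rng(0)
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        gbest = pbest[pcost.argmin()]
    return gbest, pcost.min()

# Toy usage: minimize a quadratic over a unit box standing in for motor sizing.
best, val = chaos_pso(lambda p: ((p - 0.3) ** 2).sum(), np.zeros(4), np.ones(4))
```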
Abstract:
An improved RS error-correction coding algorithm based on channel estimation is proposed. The algorithm adaptively adjusts the data redundancy of the coding system in real time according to changes in the interference that external conditions and the environment impose on the transmission channel. Simulation and a complete analysis confirm that the improved algorithm effectively increases the transmission efficiency of RS coding, and practical application shows good performance and high fault tolerance, making it suitable for the communication system's various transmission channels and highly practical.
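How the redundancy is adapted is not detailed in the abstract. A plausible sketch, with all thresholds invented for illustration, is to estimate the symbol error probability from channel estimation and select the smallest RS(n, k) parity budget whose correction capability t = (n - k)/2 covers the expected error count with a safety margin:

```python
import math

def choose_rs_parity(p_sym, n=255, margin=3.0, max_parity=128):
    """Pick the parity-symbol count 2t for an RS(n, k) code so that the
    expected number of symbol errors per block (n * p_sym), scaled by a
    safety margin, stays within the correction capability t.

    p_sym: estimated symbol error probability from channel estimation.
    """
    expected_errors = n * p_sym
    t = math.ceil(margin * expected_errors)
    parity = min(2 * t, max_parity)       # cap the redundancy
    k = n - parity
    return n, k, parity // 2              # code (n, k) corrects t errors

# Example: a noisier channel estimate yields more redundancy.
for p in (1e-4, 1e-3, 1e-2):
    print(p, choose_rs_parity(p))
```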
Abstract:
This paper presents a method for solving the inverse kinematics of a manipulator based on an optimization algorithm. The method is built on the trust-region method and has a superlinear convergence rate: it combines the fast convergence of Newton's method with desirable global convergence. It has clear advantages over CCD & BFS and can compute solutions in real time on an ordinary PC; on a PII 400, the optimal solution is obtained in under 10 ms.
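The paper's specific trust-region algorithm is not reproduced in the abstract; purely as an illustration of the approach, the sketch below solves planar two-link inverse kinematics by minimizing the end-effector residual with SciPy's trust-region reflective solver. Link lengths, bounds, and the target are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 1.0, 0.8  # link lengths (arbitrary)

def fk(q):
    """Forward kinematics of a planar 2-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def ik(target, q0=np.zeros(2)):
    """Solve IK as nonlinear least squares with a trust-region reflective
    method (superlinearly convergent near the solution; joint limits as bounds)."""
    res = least_squares(lambda q: fk(q) - target, q0,
                        method='trf', bounds=(-np.pi, np.pi))
    return res.x

q = ik(np.array([1.2, 0.6]))
print(q, fk(q))   # joint angles and the reached end-effector position
```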
Abstract:
To cope with external disturbances that affect fixed-point operations, an underwater vehicle needs a station-keeping capability: the ability to maintain a desired position and attitude relative to a work target in the presence of external disturbances. Station keeping has two characteristics. First, it is a form of dynamic positioning: guided by a state-sensing system, the vehicle relies on its own thrusters to resist disturbances and hold the desired pose. Second, it is a form of local positioning: the vehicle's range of motion relative to the target is generally small. Research on station keeping therefore has to address two key problems: state sensing and control. Traditional acoustic positioning cannot meet the accuracy and real-time requirements, and vision is an important means of achieving station keeping for underwater vehicles. Against the application background of station keeping for the 7000 m manned submersible, a major project of the national "10th Five-Year" 863 Program, this thesis studies vision-based station keeping methods and applications for underwater vehicles in four areas: camera calibration, recognition of specific underwater targets, visual servoing, and demonstration experiments. The developed system uses a specific observation target as a cooperative target and applies model-based monocular pose estimation to obtain the relative pose between the camera and the target, realizing the state sensing needed for station keeping; the pose information provided by the vision system is then used as feedback to build a visual servo controller, realizing closed-loop control of the vehicle. Camera calibration and recognition of the specific underwater target are prerequisites for monocular pose estimation; their accuracy directly affects pose estimation accuracy and hence the performance of the whole station-keeping system. Camera calibration is normally a process of first establishing an imaging model and then solving for its parameters. The commonly used simplified imaging models cannot precisely describe the relationship between the 3D imaging space and the 2D image plane, and conventional calibration methods face difficult nonlinear optimization problems. This thesis therefore proposes a model-free calibration method based on a virtual camera. The method treats the camera's imaging process as a black box and directly establishes, by photoelectric measurement, the mapping between the 3D imaging space and the corresponding 2D image plane. A virtual ideal camera is then defined from this mapping; its perspective model parameters can be set arbitrarily without affecting the final calibration result. The introduction of the virtual camera makes the method as convenient to apply as conventional ones. Experimental results show that the calibration method improves pose estimation accuracy and is particularly suitable for systems whose imaging process cannot be described precisely by a mathematical model. For the specific observation target used in the system, underwater image processing and target recognition algorithms were designed. After underwater image enhancement, the image is segmented by a method combining automatic thresholding with region growing; image features of the observation target are then extracted, and model-based recognition achieves identification and localization of the underwater target. Using the pose information from monocular pose estimation as feedback, a closed-loop controller was built for the vehicle to achieve station keeping, a typical position-based visual servoing problem. Given the characteristics of station keeping, a gaze-priority principle was designed to plan the control of each degree of freedom during vehicle motion, and an expert-PID control method was applied to realize visual servoing of the underwater vehicle. Applying these results on an underwater vehicle experimental platform, a demonstration system was built in an indoor tank, and the first vision-based station-keeping demonstration for an underwater vehicle in China was completed. The experiments show that, provided the observation target starts within the camera's field of view, the vehicle can track the target under visual servo control and keep it in the field of view; it can move to a specified position and hold a specific attitude, achieving position and attitude keeping; when disturbed by an external force or water flow, it can recover its original pose; and even under a continuous current it can stably maintain the desired pose. The demonstration lays a good foundation for applying vision-based station keeping to real operations.
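The thesis's virtual-camera calibration is tied to its photoelectric measurement setup and is not reproducible from the abstract. For the model-based monocular pose estimation step, a generic sketch using OpenCV's PnP solver is shown below; the target model points and camera intrinsics are placeholders, and the thesis obtains the imaging mapping differently.

```python
import numpy as np
import cv2

# Hypothetical 3D model of a cooperative target (4 coplanar markers, metres).
model_pts = np.array([[-0.1, -0.1, 0], [0.1, -0.1, 0],
                      [0.1, 0.1, 0], [-0.1, 0.1, 0]], dtype=np.float64)

# Placeholder pinhole intrinsics; the thesis instead calibrates a "virtual camera".
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

def relative_pose(image_pts):
    """Estimate camera-to-target pose from matched 2D marker detections (Nx2)."""
    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> rotation matrix
    return R, tvec                  # pose used as visual-servo feedback
```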
Abstract:
For multi-product batch production, an integrated production planning and scheduling optimization model with scheduling constraints is established. The objective function minimizes the sum of total setup, inventory, and production costs; the constraints include inventory balance and production capacity constraints, plus scheduling constraints, namely operation precedence constraints and single-machine processing capacity constraints, which guarantee the feasibility of the plan. The model is a two-level mixed integer program, solved by a hybrid heuristic algorithm combining a genetic algorithm with heuristic rules. Finally, the model is applied to a multi-product batch production shop of a machine tool plant: the shop's monthly part production plan is decomposed into weekly part plans for each work-section cell, determining the weekly production lot sizes and the release sequence of the parts.
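The paper's hybrid GA/heuristic is not specified in detail; below is a minimal sketch of the outer genetic-algorithm layer on a toy instance, encoding weekly lot sizes per part as the chromosome and penalizing backlog and capacity violations. All data and cost weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
P, W = 3, 4                        # parts x weeks (toy instance)
demand = rng.integers(20, 60, (P, W))
cap = 120                          # machine-hours per week (assumed)
hours = np.array([0.8, 1.0, 1.3])  # hours per unit of each part (assumed)
setup, hold = 50.0, 0.4            # cost weights (assumed)

def cost(x):
    """Setup + holding cost, with penalties for backlog and capacity overruns."""
    lots = x.reshape(P, W)
    inv = np.cumsum(lots - demand, axis=1)          # end-of-week inventory
    penalty = 1e3 * (np.maximum(-inv, 0).sum()      # backlog is infeasible
                     + np.maximum(hours @ lots - cap, 0).sum())
    return setup * (lots > 0).sum() + hold * np.maximum(inv, 0).sum() + penalty

def ga(pop=40, gens=300):
    pool = rng.integers(0, 80, (pop, P * W)).astype(float)
    for _ in range(gens):
        f = np.array([cost(x) for x in pool])
        parents = pool[np.argsort(f)[:pop // 2]]    # truncation selection
        kids = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        kids += rng.normal(0, 5, kids.shape)        # Gaussian mutation
        pool = np.vstack([parents, np.maximum(kids, 0)])
    return pool[np.argmin([cost(x) for x in pool])].reshape(P, W)

print(ga().round())   # weekly lot sizes per part
```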
Abstract:
At present, to image complex structures more accurately, seismic migration methods have been extended from isotropic to anisotropic media. This dissertation systematically develops a prestack time migration algorithm for complex structures and its application. For transversely isotropic media with a vertical symmetry axis (VTI media), the dissertation starts from the view that prestack time migration is an approximation of prestack depth migration and, based on the one-way wave equation and the VTI time migration dispersion relation combined with stationary-phase theory, derives a wave-equation-based VTI prestack time migration algorithm. From this algorithm we can analytically obtain the traveltime and amplitude expressions in VTI media and conclude how the anisotropic parameter influences the time migration; by analyzing the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, we can update the velocity model and estimate the anisotropic parameter model through the time migration. When the anisotropic parameter is zero, the algorithm degenerates naturally to the isotropic time migration algorithm, so we can propose an isotropic processing procedure for imaging. This procedure keeps the main advantages of time migration, such as high computational efficiency and velocity estimation through migration, and additionally partially compensates the geometric divergence by adopting the deconvolution imaging condition of wave equation migration. Application of the algorithm to a complicated synthetic dataset and field data demonstrates its effectiveness. The dissertation also presents an approach for estimating the velocity model and the anisotropic parameter model. After analyzing the impact of velocity and the anisotropic parameter on the time migration, and based on the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, we update the velocity model and estimate the anisotropic parameter model through migration by combining the advantages of velocity analysis in isotropic media and anisotropic parameter estimation in VTI media. Testing on synthetic and field data demonstrates that the method is effective and very stable. Massive synthetic datasets, 2D marine datasets, and 3D field datasets are used for VTI prestack time migration and compared with the stacked section after NMO and the prestack isotropic time migration stacked section, demonstrating that the VTI prestack time migration method of this paper achieves better focusing and smaller positioning errors for complicated dipping reflectors. When the subsurface is more complex, primaries and multiples cannot be separated in the Radon domain because they can no longer be described by simple (parabolic) functions. We propose a method for attenuating multiples in the image domain to solve this problem. For a given velocity model, since time migration takes wavefield propagation through complex structures into account, primaries and multiples have different offset-domain moveout discrepancies and can then be separated using techniques similar to the Radon-transform methods applied before migration. Since every individual offset-domain common-reflection-point gather incorporates complex 3D propagation effects, our method has the advantage of working with 3D data and complicated geology. Testing on synthetic and real data, we demonstrate the power of the method in discriminating between primaries and multiples after prestack time migration; multiples can be attenuated considerably in the image space.
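For reference, VTI time processing of this kind is commonly built on the nonhyperbolic moveout relation of Alkhalifah and Tsvankin, with $v_{\mathrm{nmo}}$ the normal-moveout velocity and $\eta$ the anellipticity parameter estimated from far-offset moveout; the dissertation's exact dispersion relation may differ, so this is indicative only:

```latex
t^{2}(x) \;=\; t_{0}^{2} \;+\; \frac{x^{2}}{v_{\mathrm{nmo}}^{2}}
\;-\; \frac{2\,\eta\,x^{4}}{v_{\mathrm{nmo}}^{2}\left[\,t_{0}^{2}\,v_{\mathrm{nmo}}^{2} + (1+2\eta)\,x^{2}\,\right]}
```

When $\eta = 0$ the $x^{4}$ term vanishes and the moveout becomes hyperbolic, consistent with the isotropic degeneration noted above.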
Abstract:
Sonic boom propagation in a quiet, stratified, lossy atmosphere is the subject of this dissertation. Two questions are considered in detail: (1) Does waveform freezing occur? (2) Are sonic booms shocks in steady state? Both assumptions have been invoked in the past to predict sonic boom waveforms at the ground. A very general form of the Burgers equation is derived and used as the model for the problem. The derivation begins with the basic conservation equations. The effects of nonlinearity; attenuation and dispersion due to multiple relaxations, viscosity, and heat conduction; geometrical spreading; and stratification of the medium are included. When the absorption and dispersion terms are neglected, an analytical solution is available. The analytical solution is used to answer the first question. Geometrical spreading and stratification of the medium are found to slow down the nonlinear distortion of finite-amplitude waves. In certain cases the distortion reaches an absolute limit, a phenomenon called waveform freezing. Judging by the maturity of the distortion mechanism, sonic booms generated by aircraft at 18 km altitude are not frozen when they reach the ground. On the other hand, judging by the approach of the waveform to its asymptotic shape, N waves generated by aircraft at 18 km altitude are frozen when they reach the ground. To answer the second question we solve the full Burgers equation and for this purpose develop a new computer code, THOR. The code is based on an algorithm by Lee and Hamilton (J. Acoust. Soc. Am. 97, 906-917, 1995) and has the novel feature that all its calculations are done in the time domain, including absorption and dispersion. Results from the code compare very well with analytical solutions. In a NASA exercise to compare sonic boom computer programs, THOR gave results that agree well with those of other participants and ran faster. We show that sonic booms are not steady-state waves because they travel through a varying medium, suffer spreading, and fail to approximate step shocks closely enough. Although developed to predict sonic boom propagation, THOR can solve other problems for which the extended Burgers equation is a good propagation model.
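One representative form of the extended (augmented) Burgers equation used in such sonic-boom modelling is shown below, with terms for nonlinearity, thermoviscous absorption, multiple relaxation processes, ray-tube spreading $S(\sigma)$, and stratification of $\rho_{0} c_{0}$ along the ray path; the dissertation's own derivation may differ in sign conventions and detail:

```latex
\frac{\partial p}{\partial \sigma}
\;=\;
\frac{\beta\,p}{\rho_{0} c_{0}^{3}}\,\frac{\partial p}{\partial \tau}
\;+\; \frac{\delta}{2 c_{0}^{3}}\,\frac{\partial^{2} p}{\partial \tau^{2}}
\;+\; \sum_{\nu} \frac{(\Delta c)_{\nu}\,\tau_{\nu}}{c_{0}^{2}}\,
      \frac{\partial^{2}/\partial \tau^{2}}{1 + \tau_{\nu}\,\partial/\partial \tau}\;p
\;-\; \frac{1}{2}\,\frac{d \ln S}{d \sigma}\,p
\;+\; \frac{1}{2}\,\frac{d \ln (\rho_{0} c_{0})}{d \sigma}\,p
```

Dropping the absorption, dispersion, spreading, and stratification terms recovers the classical lossless Burgers equation for which the analytical solution mentioned above exists.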
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in flexible congestion control mechanisms. Window-based congestion control schemes use increase rules to probe available bandwidth, and decrease rules to back off when congestion is detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and packet loss rate. In this paper, we propose a novel window-based congestion control algorithm called SIMD (Square-Increase/Multiplicative-Decrease). In contrast to previous memoryless controls, SIMD utilizes history information in its control rules. It uses multiplicative decrease, but the increase in window size is proportional to the square of the time elapsed since the detection of the last loss event. Thus, SIMD can efficiently probe available bandwidth. Nevertheless, SIMD is TCP-friendly as well as TCP-compatible under RED, and it has much better convergence behavior than the recently proposed TCP-friendly AIMD and binomial algorithms.
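From the abstract's description (multiplicative decrease; window growth proportional to the square of the time since the last loss), a toy per-RTT window trace might look like the sketch below. The constants alpha and beta are illustrative and are not the paper's TCP-friendly parameterization.

```python
def simd_window(events, w0=10.0, alpha=0.01, beta=0.5):
    """Toy SIMD (Square-Increase/Multiplicative-Decrease) window trace.

    events: iterable of booleans, one per RTT, True when a loss was detected.
    The window grows as w_min + alpha * t^2, with t the RTTs since the last
    loss, and is cut multiplicatively on loss.
    """
    w, w_min, t, trace = w0, w0, 0, []
    for loss in events:
        if loss:
            w *= beta           # multiplicative decrease
            w_min, t = w, 0     # restart the square increase from the new floor
        else:
            t += 1
            w = w_min + alpha * t * t   # square increase probes bandwidth
        trace.append(w)
    return trace

print(simd_window([False] * 20 + [True] + [False] * 20))
```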
Abstract:
A significant impediment to deployment of multicast services is the daunting technical complexity of developing, testing and validating congestion control protocols fit for wide-area deployment. Protocols such as pgmcc and TFMCC have recently made considerable progress on the single rate case, i.e. where one dynamic reception rate is maintained for all receivers in the session. However, these protocols have limited applicability, since scaling to session sizes beyond tens of participants necessitates the use of multiple rate protocols. Unfortunately, while existing multiple rate protocols exhibit better scalability, they are both less mature than single rate protocols and suffer from high complexity. We propose a new approach to multiple rate congestion control that leverages proven single rate congestion control methods by orchestrating an ensemble of independently controlled single rate sessions. We describe SMCC, a new multiple rate equation-based congestion control algorithm for layered multicast sessions that employs TFMCC as the primary underlying control mechanism for each layer. SMCC combines the benefits of TFMCC (smooth rate control, equation-based TCP friendliness) with the scalability and flexibility of multiple rates to provide a sound multiple rate multicast congestion control policy.
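The abstract describes orchestrating one independently controlled single-rate session per layer. A sketch of the receiver-side layer choice under cumulative layering is given below, where target_rate stands in for the equation-based rate that the underlying TFMCC-style controller computes from measured loss and RTT; the layer rates are hypothetical.

```python
def layers_to_join(target_rate, layer_rates):
    """Subscribe to the largest prefix of cumulative layers whose total
    rate stays within the equation-based target rate (TFMCC-style).

    layer_rates: per-layer rates in bit/s, lowest layer first.
    """
    total, count = 0.0, 0
    for r in layer_rates:
        if total + r > target_rate:
            break
        total += r
        count += 1
    return max(count, 1)   # always keep the base layer

# e.g. a 900 kbit/s target over hypothetical 128/256/512/1024 kbit/s layers
print(layers_to_join(900e3, [128e3, 256e3, 512e3, 1024e3]))  # -> 3
```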
Abstract:
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable λ-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System F, where the finite-rank restriction at every rank 3 and above has neither principal typings nor decidable type inference. It is also in contrast to earlier presentations of intersection types, where the status of these properties is not known for the finite-rank restrictions at 3 and above. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (of unrestricted rank). A unification-based type inference algorithm is presented using a new form of unification, β-unification.
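Principality here means every typing of a term is obtained from the principal one by substitution alone. As a minimal illustration of that single operation (not of β-unification itself), the sketch below applies a substitution over a small intersection-type syntax; the representation is invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:            # type variable
    name: str

@dataclass(frozen=True)
class Arrow:           # function type  a -> b
    dom: object
    cod: object

@dataclass(frozen=True)
class Inter:           # intersection type  a /\ b
    left: object
    right: object

def subst(t, s):
    """Apply substitution s (dict: variable name -> type) to type t."""
    if isinstance(t, TVar):
        return s.get(t.name, t)
    if isinstance(t, Arrow):
        return Arrow(subst(t.dom, s), subst(t.cod, s))
    if isinstance(t, Inter):
        return Inter(subst(t.left, s), subst(t.right, s))
    raise TypeError(t)

# (a /\ (a -> b)) -> b   under the substitution   a := Int
principal = Arrow(Inter(TVar("a"), Arrow(TVar("a"), TVar("b"))), TVar("b"))
print(subst(principal, {"a": TVar("Int")}))
```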
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementation of recent hash function designs from the SHA-3 competition is discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
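The thesis's TRNG post-processing is not specified in the abstract. One classical post-processing technique in this family is the von Neumann corrector sketched below, which removes bias from independent raw bits at the cost of throughput; the failure-detection logic mentioned in the abstract is separate and not shown.

```python
def von_neumann(bits):
    """Debias a raw TRNG bit stream: map 01 -> 0, 10 -> 1, discard 00/11.

    The output is unbiased if the input bits are independent with constant bias.
    """
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]   # biased raw samples (illustrative)
print(von_neumann(raw))                 # -> [0, 1, 0]
```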
Abstract:
BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides, amino acids, indels, and transposable elements, as well as tree files containing gene trees and species trees. Inferring an accurate phylogeny required generating: 1) A well annotated data set across species based on genome synteny; 2) Alignments with unaligned or incorrectly overaligned sequences filtered out; and 3) Diverse data sets, including genes and their inferred trees, indels, and transposable elements. Our total evidence nucleotide tree (TENT) data set (consisting of exons, introns, and UCEs) gave what we consider our most reliable species tree when using the concatenation-based ExaML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is the largest vertebrate phylogenomics project to date that we are aware of. The sequence, alignment, and tree data are expected to accelerate analyses in phylogenomics and other related areas.
Abstract:
Purpose – This paper aims to present an open-ended microwave curing system for microelectronics components and a numerical analysis framework for virtual testing and prototyping of the system, enabling design of physical prototypes to be optimised, expediting the development process. Design/methodology/approach – An open-ended microwave oven system able to enhance the cure process for thermosetting polymer materials utilised in microelectronics applications is presented. The system is designed to be mounted on a precision placement machine enabling curing of individual components on a circuit board. The design of the system allows the heating pattern and heating rate to be carefully controlled, optimising cure rate and cure quality. A multi-physics analysis approach has been adopted to form a numerical model capable of capturing the complex coupling that exists between physical processes. Electromagnetic analysis has been performed using a Yee finite-difference time-domain scheme, while an unstructured finite volume method has been utilised to perform thermophysical analysis. The two solvers are coupled using a sampling-based cross-mapping algorithm. Findings – The numerical results obtained demonstrate that the numerical model is able to obtain solutions for distribution of temperature, rate of cure, degree of cure and thermally induced stresses within an idealised polymer load heated by the proposed microwave system. Research limitations/implications – The work is limited by the absence of experimentally derived material property data and comparative experimental results. However, the model demonstrates that the proposed microwave system would seem to be a feasible method of expediting the cure rate of polymer materials. Originality/value – The findings of this paper will help to provide an understanding of the behaviour of thermosetting polymer materials during microwave cure processing.
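As a pointer to the electromagnetic half of the model, a minimal one-dimensional Yee FDTD update (E and H staggered in space and leapfrogged in time) is sketched below in normalized units. The actual system is three-dimensional and coupled to a finite-volume thermal solver through the cross-mapping step, which is not shown; all grid and source parameters are illustrative.

```python
import numpy as np

c0 = 299_792_458.0
nz, nt = 200, 400
dz = 1e-3                      # 1 mm cells (illustrative)
dt = dz / (2 * c0)             # Courant-stable time step
eps_r = np.ones(nz)            # relative permittivity (polymer load would go here)

Ex = np.zeros(nz)              # E on integer grid points
Hy = np.zeros(nz - 1)          # H staggered half a cell between them
coef_e = dt * c0 / dz / eps_r
coef_h = dt * c0 / dz

for n in range(nt):
    # update H from the spatial difference (curl) of E
    Hy += coef_h * (Ex[1:] - Ex[:-1])
    # update interior E from the spatial difference of H (ends stay 0: PEC walls)
    Ex[1:-1] += coef_e[1:-1] * (Hy[1:] - Hy[:-1])
    # soft Gaussian source injected at one cell
    Ex[50] += np.exp(-((n - 60) / 15.0) ** 2)

# |Ex|^2 in the lossy load would feed the dissipated-power term of the
# thermal solver via the sampling-based cross-mapping step.
```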
Abstract:
There is a need to provide rapid, sensitive, and often high-throughput detection of pathogens in diagnostic virology. Viral gastroenteritis is a serious health issue often leading to hospitalization in the young, the immunocompromised, and the elderly. The common causes of viral gastroenteritis include rotavirus, norovirus (genogroups I and II), astrovirus, and group F adenoviruses (serotypes 40 and 41). This article describes the work-up of two internally controlled multiplex, probe-based PCR assays and reports on their clinical validation over a 3-year period, March 2007 to February 2010. Multiplex assays were developed using a combination of TaqMan™ and minor groove binder (MGB™) hydrolysis probes. The assays were validated using a panel of 137 specimens previously positive via a nested gel-based assay. The assays had improved sensitivity for adenovirus, rotavirus, and norovirus (97.3% vs. 86.1%, 100% vs. 87.8%, and 95.1% vs. 79.5%, respectively) and were also more specific for adenovirus, rotavirus, and norovirus (99% vs. 95.2%, 100% vs. 93.6%, and 97.9% vs. 92.3%, respectively). For the specimens tested, both assays had equal sensitivity and specificity for astrovirus (100%). Overall, the probe-based assays detected 16 more positive specimens than the nested gel-based assay. Post-introduction to the routine diagnostic service, a total of 9,846 specimens were processed with multiplexes 1 and 2 (7,053 pediatric, 2,793 adult) over the 3-year study period. This clinically validated, probe-based multiplex testing algorithm allows highly sensitive and timely diagnosis of the four most prominent causes of viral gastroenteritis.
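For reference, the sensitivity and specificity figures quoted above reduce to simple confusion-matrix ratios; the sketch below computes them from hypothetical counts, since the study's raw counts are not given in the abstract.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only, not the study's data.
print(sens_spec(tp=39, fn=2, tn=95, fp=1))
```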