914 results for MatLab Simulink
Abstract:
The scanning function of a high-precision coordinate measuring machine (CMM) is used to acquire dense three-dimensional coordinate data along the end face of welded sheet metal. An ideal-line mathematical model built on the least-squares method is then used to evaluate the straightness error, and the result is compared with the feeler-gauge method used in practice. The comparison shows that coordinate measurement reflects the true straightness of the entire weld edge in much greater detail, saves a large amount of data-processing time, and yields more accurate and reliable data; good laser weld seams were obtained in actual laser welding. The method also gives an intuitive picture of the linear features of the sheet edge and provides direction and position guidance for adjusting the shearing-machine blades.
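As a rough illustration of the least-squares ideal-line evaluation described in this abstract, the sketch below fits a straight line to scanned edge points and reports the straightness error as the width of the residual band. The scan data here are synthetic placeholders for the CMM coordinates, and Python/NumPy is used since the original code is not available.

```python
import numpy as np

# Hypothetical scanned edge points (x along the weld edge, y = lateral deviation),
# standing in for coordinates exported from the CMM scan.
x = np.linspace(0.0, 500.0, 200)                 # mm along the edge
rng = np.random.default_rng(0)
y = 0.002 * x + 0.05 * np.sin(x / 40.0) + rng.normal(0, 0.01, x.size)

# Least-squares fit of the ideal straight line y = a*x + b.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Straightness error: width of the band, parallel to the least-squares line,
# that encloses all measured points.
residuals = y - (a * x + b)
straightness = residuals.max() - residuals.min()
print(f"slope={a:.5f}, straightness error={straightness:.4f} mm")
```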
Abstract:
A kinematic analysis of a four-degree-of-freedom parallel mechanism is carried out, and on this basis an error model is established using the perturbation method, clarifying the influence of each error source on the end-effector pose. After a brief introduction to calibration techniques for parallel mechanisms, two methods for calibrating the mechanism parameters in the error model are described, together with the calibration apparatus, algorithm, and procedure. Finally, the calibration method based on the inverse kinematic solution is simulated in Matlab and the simulation results are analyzed.
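The calibration step described here reduces, in essence, to least-squares identification of parameter errors from pose residuals. The sketch below shows that generic identification loop with a finite-difference Jacobian; the two-link forward model and all numbers are stand-ins, not the paper's parallel-mechanism kinematics.

```python
import numpy as np

def forward_pose(params, q):
    """Hypothetical forward model: end-effector position as a function of
    mechanism parameters and joint inputs (a stand-in for the real
    parallel-mechanism kinematics)."""
    L1, L2 = params
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def identify_params(p_nominal, joint_sets, measured_poses, iters=10):
    """Gauss-Newton style identification: linearize the pose error with a
    finite-difference Jacobian and solve for parameter corrections by least squares."""
    p = np.array(p_nominal, dtype=float)
    for _ in range(iters):
        J_rows, res = [], []
        for q, meas in zip(joint_sets, measured_poses):
            f0 = forward_pose(p, q)
            res.append(meas - f0)
            row = np.zeros((f0.size, p.size))
            for k in range(p.size):
                dp = np.zeros_like(p); dp[k] = 1e-6
                row[:, k] = (forward_pose(p + dp, q) - f0) / 1e-6
            J_rows.append(row)
        J, r = np.vstack(J_rows), np.concatenate(res)
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        p += delta
    return p

# Synthetic experiment: true parameters differ slightly from the nominal ones.
rng = np.random.default_rng(1)
p_true, p_nom = [0.305, 0.198], [0.30, 0.20]
qs = [rng.uniform(-1, 1, 2) for _ in range(20)]
meas = [forward_pose(p_true, q) + rng.normal(0, 1e-5, 2) for q in qs]
print(identify_params(p_nom, qs, meas))
```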
Abstract:
Vision-based inspection is a new measurement technology that has emerged with the rapid development of computer vision and optoelectronics. When inspecting a target, the image is used as the medium for detecting and carrying information, and useful signals are extracted from it; the technology is grounded in modern optics and integrates optoelectronics, computer image processing, information processing, and computer vision. Modern automated laser welding combines lasers, computers, robots, numerical control, and precision machine tools; it has become a key technology in industrial automation, with advantages that conventional processes cannot match. To overcome the influence of the many uncertainties in robotic welding on weld quality and to raise the intelligence and reliability of robotic operation, a welding robot system must not only track spatial weld seams in real time but also adjust welding parameters online and control seam quality in real time, i.e., make the robotic welding process autonomous and intelligent. This work is part of the Chinese Academy of Sciences Knowledge Innovation Program project "Fully Automatic Laser Tailored-Blank Welding Production Line" and explores the realization of a stereo vision inspection system and its application in laser tailored-blank welding. Several key technologies are studied from both theoretical and practical perspectives, including the design of the vision inspection system, its mathematical model, quantization error, camera calibration, extraction of the structured-light stripe centerline, pre-weld feature detection, weld-pool edge extraction, post-weld defect image matching, three-dimensional reconstruction, and visual localization of surface holes. The main results are as follows:
1. A multifunctional laser vision inspection device is proposed that can be used for pre-weld seam tracking, post-weld inspection, and monitoring of the laser weld pool. Formulas are derived for the coordinates of the inspected point and for the quantization error, both with and without an installation tilt of the image plane. The quantization errors caused by analog-to-digital conversion, installation tilt, and installation height are analyzed with respect to row, column, and tilt angle; simulations are carried out for each case, the error distributions are analyzed, and singular detection points and the limitations of the mathematical model are identified. This work lays the foundation for high-precision extraction of three-dimensional weld-seam information.
2. Camera calibration is studied. In line with engineering practice, the camera is calibrated using Zhang's method and the Matlab calibration toolbox. Because the inspection camera has a small field of view, the calibration capture range is difficult to adjust, and lens distortion occurs mainly at the edge of the field, a calibration method based on a calibration target is proposed that stays within the required accuracy; experiments show that its calibration and measurement accuracy meet engineering needs.
3. Existing stripe-center extraction algorithms are reviewed, and two methods for computing the laser-stripe center coordinates are proposed: multiple-Gaussian-fit averaging based on an OTSU threshold and centroid averaging based on an OTSU threshold. These methods strongly suppress stripe noise, speckle, and diffuse reflection from the workpiece surface, and are therefore highly robust; experiments show higher extraction accuracy than traditional methods. To accommodate the angle changes of the laser stripe after modulation by the workpiece surface, as well as stripe tilt introduced by hardware installation, an adaptive direction-template method is proposed that solves centerline extraction at special tilt angles. Three simulation experiments verify the feasibility of the method.
4. A set of online, real-time image-processing algorithms for pre-weld inspection is proposed that measures seam width, pre-weld mismatch, and seam center position. Engineering experiments provide detection results for each index and verify the correctness of the algorithms.
5. A weld-pool edge-detection algorithm for laser tailored-blank welding based on mathematical morphology is proposed; experiments on real images show that the extracted edges reach single-pixel accuracy.
6. Real-time image-processing algorithms for post-weld surface inspection are studied. A set of online algorithms is proposed that matches and identifies seven types of surface weld defects: seam width, mismatch, concavity, convexity, undercut, weld inclination, and excess height; the surface topography of the whole seam is reconstructed in three dimensions. Experiments on equal-thickness and unequal-thickness plate welds verify the soundness and robustness of the algorithms. A fast mismatch and region-of-interest detection algorithm based on the Radon transform is proposed. Surface-hole detection is also discussed, focusing on noise and surface reflection, and the effect of erosion and dilation on hole localization and size measurement is studied.
7. The working field of view for weld-seam visual inspection with an ordinary 6R robot, i.e., the weld-seam visual inspection space, is studied, and an algorithm for generating this space is proposed. Based on this field of view, the initial-position planning problem for visual seam inspection is explored and a planning algorithm is proposed; simulation results confirm its correctness.
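Of the techniques listed above, the OTSU-threshold centroid method for stripe-center extraction (item 3) is the simplest to illustrate. The sketch below implements Otsu's threshold and a column-wise intensity centroid on a synthetic stripe image; it is a minimal stand-in for the averaged and Gaussian-fit variants described in the thesis.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: pick the threshold that maximizes
    the between-class variance of the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def stripe_centers(img):
    """Column-wise intensity centroid of the pixels above the Otsu threshold,
    giving a sub-pixel stripe-center row index for each image column."""
    t = otsu_threshold(img)
    rows = np.arange(img.shape[0], dtype=float)
    centers = np.full(img.shape[1], np.nan)
    for c in range(img.shape[1]):
        col = img[:, c].astype(float)
        mask = col > t
        if mask.any():
            centers[c] = np.sum(rows[mask] * col[mask]) / np.sum(col[mask])
    return centers

# Synthetic laser stripe for a quick check.
img = np.zeros((120, 160), dtype=np.uint8)
for c in range(160):
    r = 60 + int(10 * np.sin(c / 25.0))
    img[r - 2:r + 3, c] = np.array([60, 180, 255, 180, 60], dtype=np.uint8)
print(stripe_centers(img)[:5])
```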
Abstract:
Cartesian robots are widely used industrial robots in modern production lines, and their kinematic and dynamic characteristics affect the performance of the whole line. Virtual prototyping has been applied successfully to the simulation of complex mechanical systems; it improves simulation accuracy, shortens the product design cycle, and is of real engineering value. With virtual prototyping, engineers can simulate the motion of a mechanical system, discover potential design problems during the design stage, and revise the design quickly, reducing dependence on physical prototypes; this saves cost, shortens the development cycle, improves product performance, and strengthens competitiveness. In connection with the large project "Laser Tailored-Blank Welding Production Line" of the Modern Equipment Research and Design Center, Shenyang Institute of Automation, Chinese Academy of Sciences, this thesis analyzes the imported loading/unloading robot used in the line and designs a loading/unloading Cartesian robot that satisfies the line's industrial requirements, so that a high-performance loading/unloading robot can be produced domestically. Taking the designed robot as the research object, virtual prototyping based on the finite element method and multibody system dynamics is used to study the kinematics and dynamics of the mechanism through simulation. Three-dimensional solid modeling and the D-H method are studied, and collaborative virtual prototyping with the CAD/CAE packages SOLIDWORKS, ADAMS, and ANSYS is explored, covering modeling, assembly, data sharing, static finite element analysis, and rigid-flexible coupled multibody dynamics simulation; the results provide guidance for practical engineering. The main work and results are as follows:
1. Based on the requirements of the industrial production line, the overall mechanical design of the high-performance loading/unloading Cartesian robot is determined and each linear motion unit is designed.
2. A 3D solid model of the robot is built in SOLIDWORKS; its kinematic equations are established with the D-H method, and the kinematics are simulated in MATLAB.
3. Based on the finite element method, a static analysis of the Y-axis beam, the core component of the robot, is performed in ANSYS, and the beam structure is then optimized.
4. Based on multibody system dynamics, ANSYS is used to generate the modal neutral file (MNF) required for rigid-flexible coupled multibody simulation; ANSYS and ADAMS together are used to build a rigid-flexible coupled multibody model of the robot, and a dynamic analysis is carried out.
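Item 2 above relies on standard D-H forward kinematics. The sketch below chains per-joint D-H transforms; the example D-H table is hypothetical (prismatic travel placed in the d parameters) and is not taken from the thesis.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; dh_rows is a list of (theta, d, a, alpha)."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Illustrative 3-axis prismatic (PPP) arrangement: the joint travels are placed
# in the d parameters; the strokes and angles below are made-up placeholders.
x_travel, y_travel, z_travel = 0.8, 0.5, 0.3   # metres, hypothetical strokes
dh = [
    (0.0,        x_travel, 0.0, -np.pi / 2),
    (-np.pi / 2, y_travel, 0.0, -np.pi / 2),
    (0.0,        z_travel, 0.0,  0.0),
]
T = forward_kinematics(dh)
print("end-effector position:", T[:3, 3])
```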
Abstract:
This paper introduces the principle of applying self-organizing neural networks to fault diagnosis and, to address the implementation of such networks, proposes a method that realizes a self-organizing neural network by calling a MATLAB application from LabVIEW. A bearing fault diagnosis example demonstrates the effectiveness of the method.
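A self-organizing map of the kind referred to here can be sketched in a few lines. The version below uses Python/NumPy rather than a LabVIEW-MATLAB bridge, and the "bearing" feature vectors are synthetic two-cluster data, so it only illustrates how healthy and faulty samples map to different regions of the grid.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: for each sample, find the best-matching
    unit and pull nearby units toward the sample, with the learning rate and
    neighborhood radius decaying over the epochs."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights, coords

# Hypothetical two-cluster feature set standing in for bearing vibration features
# (e.g. band energies); healthy and faulty samples should map to separate cells.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 0.3, size=(100, 4))
faulty = rng.normal(2.0, 0.3, size=(100, 4))
w, coords = train_som(np.vstack([healthy, faulty]))
bmu_h = np.argmin(np.linalg.norm(w - healthy.mean(0), axis=1))
bmu_f = np.argmin(np.linalg.norm(w - faulty.mean(0), axis=1))
print("healthy BMU cell:", coords[bmu_h], "faulty BMU cell:", coords[bmu_f])
```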
Abstract:
Defective packages occur during cigarette packaging. Based on an analysis of the defects, an improved fast image-matching algorithm for cigarette packaging quality inspection is designed. The algorithm's performance is improved by optimizing the computation of the correlation coefficient, introducing an adaptive genetic algorithm, and restricting matching to regions of interest; the algorithm is implemented in MATLAB. Simulation results show that the image-matching algorithm is fast and accurate and meets the needs of cigarette packaging quality inspection.
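The core of such matching is the normalized correlation coefficient evaluated over a region of interest. The sketch below shows that part only, with an exhaustive ROI search standing in for the adaptive genetic-algorithm search used in the paper; image and template are synthetic.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_in_roi(image, template, roi):
    """Exhaustive NCC search restricted to a region of interest.
    roi = (row0, row1, col0, col1) bounds for the template's top-left corner."""
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    for r in range(roi[0], roi[1]):
        for c in range(roi[2], roi[3]):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic check: embed the template at a known offset and search a small ROI.
rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(120, 160))
template = rng.uniform(0, 255, size=(20, 20))
image[43:63, 71:91] = template
print(match_in_roi(image, template, roi=(35, 55, 60, 85)))
```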
Abstract:
Defective packages occur during cigarette packaging. Based on an analysis of the image features of defective packs, fast algorithms for inspecting the front and side images of cigarette packs are designed, with particular attention to fast image registration. The sequential similarity detection algorithm, template matching, and an energy-based recognition method are introduced into the inspection process, the algorithm flow is described in detail, and the algorithms are implemented in MATLAB. Simulation results in MATLAB show that the inspection algorithm is sound and feasible and can meet the needs of online inspection of cigarette packs.
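A minimal version of the sequential similarity detection idea mentioned above: absolute differences are accumulated in a fixed random pixel order, and a candidate position is abandoned as soon as the running error exceeds a threshold. The data and threshold below are made up for the demonstration.

```python
import numpy as np

def ssda_match(image, template, threshold):
    """Sequential similarity detection: accumulate absolute differences in a
    random pixel order and abandon a candidate position once the running error
    exceeds the threshold; the position that survives the most pixels wins."""
    th, tw = template.shape
    order = np.random.default_rng(0).permutation(th * tw)
    flat_t = template.ravel()
    best_pos, best_count = None, -1
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw].ravel()
            err, count = 0.0, 0
            for idx in order:
                err += abs(float(window[idx]) - float(flat_t[idx]))
                count += 1
                if err > threshold:
                    break
            if count > best_count:
                best_count, best_pos = count, (r, c)
    return best_pos, best_count

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(40, 40)).astype(float)
template = image[12:20, 25:33].copy()
print(ssda_match(image, template, threshold=200.0))
```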
Abstract:
Automatic vehicle classification plays a key role in electronic toll collection; it determines the reliability and level of intelligence of the toll system and is important for improving highway management and vehicle throughput. This thesis analyzes and compares existing automatic vehicle classification methods and, on that basis, explores vehicle classification with radar microwaves. Radar microwave classification is combined with the on-board electronic tag to provide a second, independent classification, effectively preventing fraud and controlling toll losses. Simulated radar microwave signals containing vehicle-type feature information are generated in MATLAB. The wavelet transform is used to remove noise; because vehicle size is related to the energy distribution across the decomposition levels of the wavelet transform, this energy distribution is extracted as the feature vector for classification. A BP neural network classifier is designed; the energy-distribution features are classified by the network to obtain the vehicle type. When training the classifier, the samples are clustered with an improved fuzzy C-means algorithm, which avoids excessively small membership degrees to the class centers when the sample set is not ideal; the membership degrees are used as the network training outputs, making the network more fault-tolerant and better matched to real classification. Three networks are trained separately and their outputs are combined in a final decision, improving classification quality. The thesis first reviews existing vehicle classification methods and discusses their drawbacks, then proposes the radar microwave method, describes in detail the algorithms used to process the echo signal, compares the candidate algorithms and selects suitable ones, and finally presents the hardware simulation platform and the software implementation.
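The wavelet energy-distribution feature described here can be reproduced with a standard wavelet package. The sketch below (Python/PyWavelets rather than MATLAB; synthetic "echo" signals and an arbitrary wavelet/level choice) computes the relative energy per decomposition level that would then be fed to the BP classifier; the classifier itself is not shown.

```python
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Decompose the signal with a discrete wavelet transform and return the
    relative energy of each decomposition level as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Hypothetical echo signals standing in for the simulated radar returns:
# a "large vehicle" return with more low-frequency content versus a "small vehicle" one.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
large = np.sin(2 * np.pi * 15 * t) + 0.3 * rng.normal(size=t.size)
small = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.normal(size=t.size)
print("large-vehicle features:", np.round(wavelet_energy_features(large), 3))
print("small-vehicle features:", np.round(wavelet_energy_features(small), 3))
```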
Abstract:
The conventional microtremor survey is based on single-point exploration: collecting field data, estimating the phase velocity, inverting the dispersion curve and obtaining the S-wave velocity structure. For large-scale exploration, and when building a two-dimensional velocity section, the inversion is time-consuming and laborious, and its precision depends on subjective interpretation, so the results differ from analyst to analyst. In fact, we do not need the absolute S-wave velocity values but only the relative variation of velocity. For these reasons, this paper proposes calculating the apparent S-wave velocity (Vx) to replace the S-wave velocity inversion and to obtain the relative variation of the S-wave velocity. This method reduces the analyst's influence, shortens the data-processing time and improves work efficiency. The apparent S-wave velocity is a quantity derived from the surface-wave properties that clearly reflects collapse columns, mined-out areas and other anomalous geological bodies. In this paper, Matlab is used to build a three-dimensional data volume of the apparent S-wave velocity, from which any apparent S-wave velocity section can be extracted. An application case shows that the method is reliable and effective: collapse columns, mined-out areas and other anomalous geological bodies are clearly visible in the apparent S-wave velocity sections, and the contours of the apparent S-wave velocity broadly delineate the interface shapes of the main target layers.
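The workflow of building a 3-D apparent S-wave velocity volume and cutting sections from it can be sketched as below. The conversion from the dispersion curve to a (depth, Vx) profile uses placeholder empirical factors (depth of roughly half the wavelength, Vx of roughly 1.1 times the phase velocity), which are common rules of thumb and not necessarily the relations used by the authors; the survey data are synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

def dispersion_to_vx_profile(freqs, phase_vel, depth_factor=0.5, vel_factor=1.1):
    """Convert one station's dispersion curve (frequency, Rayleigh phase velocity)
    into an apparent S-wave velocity profile. The depth and velocity conversion
    factors are placeholder empirical values, not those used in the paper."""
    wavelength = phase_vel / freqs
    return depth_factor * wavelength, vel_factor * phase_vel

# Hypothetical survey: stations on an (x, y) grid, each with a dispersion curve.
rng = np.random.default_rng(0)
freqs = np.linspace(2.0, 20.0, 30)
points, values = [], []
for xi in range(10):
    for yi in range(8):
        vr = 200.0 + 15.0 * xi + 40.0 * np.exp(-freqs / 8.0) + rng.normal(0, 3, freqs.size)
        depth, vx = dispersion_to_vx_profile(freqs, vr)
        for d, v in zip(depth, vx):
            points.append((xi * 10.0, yi * 10.0, d))
            values.append(v)

# Grid the scattered (x, y, depth, Vx) samples into a regular 3-D volume,
# then pull out a vertical section at constant y for plotting/interpretation.
xg, yg, zg = np.meshgrid(np.arange(0, 100, 5.0), np.arange(0, 80, 5.0),
                         np.arange(5, 60, 2.5), indexing="ij")
volume = griddata(np.array(points), np.array(values), (xg, yg, zg), method="linear")
section = volume[:, 4, :]          # vertical Vx section along x at y = 20 m
print(section.shape)
```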
Abstract:
Methods for fusing two computer vision methods are discussed, and several example algorithms are presented to illustrate the variational approach to fusing algorithms. The example algorithms seek to determine planet topography given two images taken from two different locations under two different lighting conditions. Each algorithm employs a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. The algorithms are run on four synthetic test image sets of varying difficulty.
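As a rough indication of how a single cost function can couple the two cues, a generic variational functional of the following form is often used in this setting; the specific terms and weights are illustrative, not the authors' exact formulation:

```latex
E(z) \;=\; \iint_{\Omega}
      \big[\, I_1(x,y) - R\!\left(z_x, z_y\right) \big]^2
      \;+\; \lambda \,\big[\, I_1(x,y) - I_2\!\left(x + d(z;x,y),\, y\right) \big]^2
      \;+\; \mu \,\lVert \nabla z \rVert^{2} \; dx\,dy
```

Here z is the surface height, R the reflectance map evaluated at the surface gradients (the shape-from-shading term), d the disparity implied by z (the stereo term), and lambda, mu weighting constants; minimising over the single unknown surface z is what couples the two methods.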
Abstract:
The thesis initially gives an overview of the wave industry and the current state of some of the leading technologies, as well as the energy storage systems that are inherently part of the power take-off mechanism. The benefits of electrical energy storage systems for wave energy converters are then outlined, as well as the key parameters required from them. The options for storage systems are investigated, and the reasons for examining supercapacitors and lithium-ion batteries in more detail are shown. The analysis then focusses on a particular type of offshore wave energy converter: the backward bent duct buoy employing a Wells turbine. Variable-speed strategies from the research literature that make use of the energy stored in the turbine inertia are examined for this system, and based on this analysis an appropriate scheme is selected. A supercapacitor power-smoothing approach is presented in conjunction with the variable-speed strategy. As long component lifetime is a requirement for offshore wave energy converters, a computer-controlled test rig has been built to validate supercapacitor lifetimes against the manufacturer's specifications. The test rig is also utilised to determine the effect of temperature on supercapacitors and to determine application lifetime. Cycle testing is carried out on individual supercapacitors at room temperature, and also at rated temperature utilising a thermal chamber and equipment programmed through the general purpose interface bus by Matlab. Application testing is carried out using time-compressed, scaled-power profiles from the model to allow a comparison of lifetime degradation. Further applications of supercapacitors in offshore wave energy converters are then explored. These include start-up of the non-self-starting Wells turbine, and low-voltage ride-through examined to the limits specified in the Irish grid code for wind turbines. These applications are investigated with a more complete model of the system that includes a detailed back-to-back converter coupling a permanent magnet synchronous generator to the grid. Supercapacitors have been utilised in combination with battery systems in many applications to aid with peak power requirements and have been shown to improve the performance of these energy storage systems. The design, implementation, and construction of coupling a 5 kWh lithium-ion battery to a microgrid are described. The high-voltage battery had a continuous power rating of 10 kW and was designed for the future EV market with a controller area network interface. This build gives general insight into some of the engineering, planning, safety, and cost requirements of implementing a high-power energy storage system near or on an offshore device for interface to a microgrid or grid.
Abstract:
The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0,∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c e^{Ax} b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is a minimal realization of the EPT function, unique only up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero; this class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process whose increments are stochastically independent 2-EPT random variables. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 for a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation, and the non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes, and an asset's log-returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European options with specific times to maturity; formulae for discretely monitored Lookback options and 2-period Bermudan options are also provided. Certain Greeks, including Delta and Gamma, of these options are computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option-pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
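A minimal numerical illustration of the c e^{Ax} b representation (not the authors' MATLAB scripts): evaluating an EPT function from a minimal triple via the matrix exponential. The particular (A, b, c) below is chosen so that f(x) = 2(e^{-x} - e^{-2x}), a simple non-negative EPT density on [0, ∞).

```python
import numpy as np
from scipy.linalg import expm

def ept_value(c, A, b, x):
    """Evaluate an EPT function f(x) = c exp(Ax) b at a single point x >= 0."""
    return (c @ expm(A * x) @ b).item()

# Illustrative minimal triple (A, b, c). This particular choice realizes
# f(x) = 2(exp(-x) - exp(-2x)), a simple non-negative EPT density on [0, inf).
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[2.0, 0.0]])

xs = np.linspace(0.0, 5.0, 6)
print([round(ept_value(c, A, b, x), 4) for x in xs])
```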
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in optical incremental (square-wave) encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using these data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of the shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases in the presence of greater non-repetitive random noise and/or errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. When these algorithms are implemented experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically around 80%).
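The pseudoinverse-based learning stage can be illustrated with a simplified model in which, at nominally constant speed, each measured transition interval equals the nominal slit spacing plus the difference of adjacent slit errors. The sketch below builds that circular difference system, solves it with a pseudoinverse, and compares velocity ripple before and after correction on a synthetic encoder; it is a stand-in for the paper's methods, not a reproduction of them.

```python
import numpy as np

def learn_slit_errors(intervals, n_slits):
    """Estimate per-slit position errors from transition-interval measurements
    taken at nominally constant speed. `intervals` has shape (revs, n_slits);
    entry (k, j) is the time from transition j to transition j+1 in revolution k."""
    mean_dt = intervals.mean(axis=0)
    omega = 2.0 * np.pi / mean_dt.sum()        # reference speed from whole revolutions
    nominal = 2.0 * np.pi / n_slits
    rhs = omega * mean_dt - nominal            # approximates e[j+1] - e[j] (circular)
    # Circular forward-difference matrix D with (D @ e)[j] = e[j+1] - e[j];
    # D is rank-deficient by one, so the pseudoinverse returns the zero-mean solution.
    D = np.roll(np.eye(n_slits), 1, axis=1) - np.eye(n_slits)
    return np.linalg.pinv(D) @ rhs

# Synthetic check: simulate an encoder with slit placement errors, learn them,
# and compare the velocity ripple before and after using adjusted slit spacings.
rng = np.random.default_rng(0)
n, revs, omega_true = 32, 50, 20.0             # slits per rev, revolutions, rad/s
e_true = 0.02 * (2 * np.pi / n) * rng.standard_normal(n)
e_true -= e_true.mean()
edges = 2 * np.pi * np.arange(n + 1) / n + np.r_[e_true, e_true[0]]
dt_one_rev = np.diff(edges) / omega_true       # noise-free intervals for one revolution
intervals = dt_one_rev + rng.normal(0, 1e-7, size=(revs, n))

e_hat = learn_slit_errors(intervals, n)
raw_speed = (2 * np.pi / n) / dt_one_rev                       # nominal spacing
adj_spacing = 2 * np.pi / n + np.diff(np.r_[e_hat, e_hat[0]])  # learned spacing
corrected_speed = adj_spacing / dt_one_rev
print("speed ripple  raw: %.3f%%   corrected: %.4f%%"
      % (100 * raw_speed.std() / omega_true, 100 * corrected_speed.std() / omega_true))
```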
Abstract:
We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.
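A stripped-down, one-dimensional illustration of a Metropolis-Hastings step driven by a normal-mixture proposal (here a fixed independence proposal on a toy target, not the paper's sequential filtering/smoothing construction, and without the surrounding Gibbs sampler):

```python
import numpy as np
from scipy.stats import norm

def log_target(x):
    """Toy non-Gaussian target: a two-component normal mixture."""
    return np.log(0.6 * norm.pdf(x, -1.5, 0.5) + 0.4 * norm.pdf(x, 2.0, 0.8))

def mixture_logpdf(x, weights, means, sds):
    return np.log(np.sum(weights * norm.pdf(x, means, sds)))

def mh_independence_sampler(n_iter, weights, means, sds, seed=0):
    """Metropolis-Hastings with a fixed normal-mixture independence proposal:
    draws come from the mixture, and the acceptance ratio corrects for the
    mismatch between the proposal and the target."""
    rng = np.random.default_rng(seed)
    weights, means, sds = map(np.asarray, (weights, means, sds))
    x = 0.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        comp = rng.choice(len(weights), p=weights)
        prop = rng.normal(means[comp], sds[comp])
        log_alpha = (log_target(prop) + mixture_logpdf(x, weights, means, sds)
                     - log_target(x) - mixture_logpdf(prop, weights, means, sds))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples[i] = x
    return samples

# A deliberately imperfect mixture approximation of the target serves as the proposal.
samples = mh_independence_sampler(20000, [0.5, 0.5], [-1.0, 1.5], [1.0, 1.0])
print("posterior mean estimate:", samples.mean())
```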
Abstract:
This research project uses field measurements to investigate the cooling of a triple-junction, photovoltaic cell under natural convection when subjected to various amounts of insolation. The team built an experimental apparatus consisting of a mirror and Fresnel lens to concentrate light onto a triple-junction photovoltaic cell, mounted vertically on a copper heat sink. Measurements were taken year-round to provide a wide range of ambient conditions. A surface was then generated, in MATLAB, using Sparrow’s model for natural convection on a vertical plate under constant heat flux. This surface can be used to find the expected operating temperature of a cell at any location, given the ambient temperature and insolation. This research is an important contribution to the industry because it utilizes field data that represents how a cell would react under normal operation. It also extends the use of a well-known model from a one-sun environment to a multi-sun one.
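The surface-generation step can be approximated with the modified Grashof number for uniform heat flux and an assumed power-law correlation Nu_x = C (Gr*_x Pr)^n; the coefficient and exponent below are placeholders to be replaced by the values from Sparrow's model, and the air properties and example numbers are rough room-temperature guesses.

```python
import numpy as np

def cell_temperature(q_flux, t_amb, x_char,
                     C=0.6, n=0.2,
                     nu=1.6e-5, k_air=0.026, pr=0.71, g=9.81):
    """Estimate the temperature of a vertically mounted cell/heat-sink face cooled
    by natural convection under a constant heat flux q_flux (W/m^2). The correlation
    form Nu_x = C * (Gr*_x * Pr)**n is assumed; C and n are placeholders standing in
    for the uniform-heat-flux (Sparrow-type) correlation, and the air properties are
    rough room-temperature values."""
    t_film = t_amb + 10.0                       # first guess of film temperature (K)
    for _ in range(20):                         # fixed-point iteration on beta
        beta = 1.0 / t_film                     # ideal-gas expansion coefficient
        gr_star = g * beta * q_flux * x_char ** 4 / (k_air * nu ** 2)
        nu_x = C * (gr_star * pr) ** n
        h = nu_x * k_air / x_char               # local convection coefficient
        t_surf = t_amb + q_flux / h
        t_film = 0.5 * (t_amb + t_surf)
    return t_surf

# Example: a heat-sink face 10 cm tall dissipating 400 W/m^2 into 300 K ambient air
# (made-up numbers for illustration only).
print(round(cell_temperature(q_flux=400.0, t_amb=300.0, x_char=0.1), 1), "K")
```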