935 results for point-to-segment algorithm
Abstract:
This study compared the mechanisms of adaptation to stable and unstable dynamics from the perspective of changes in joint mechanics. Subjects were instructed to make point to point movements in force fields generated by a robotic manipulandum which interacted with the arm in either a stable or an unstable manner. After subjects adjusted to the initial disturbing effects of the force fields they were able to produce normal straight movements to the target. In the case of the stable interaction, subjects modified the joint torques in order to appropriately compensate for the force field. No change in joint torque or endpoint force was required or observed in the case of the unstable interaction. After adaptation, the endpoint stiffness of the arm was measured by applying displacements to the hand in eight different directions midway through the movements. This was compared to the stiffness measured similarly during movements in a null force field. After adaptation, the endpoint stiffness under both the stable and unstable dynamics was modified relative to the null field. Adaptation to unstable dynamics was achieved by selective modification of endpoint stiffness in the direction of the instability. To investigate whether the change in endpoint stiffness could be accounted for by change in joint torque or endpoint force, we estimated the change in stiffness on each trial based on the change in joint torque relative to the null field. For stable dynamics the change in endpoint stiffness was accurately predicted. However, for unstable dynamics the change in endpoint stiffness could not be reproduced. In fact, the predicted endpoint stiffness was similar to that in the null force field. Thus, the change in endpoint stiffness seen after adaptation to stable dynamics was directly related to changes in net joint torque necessary to compensate for the dynamics in contrast to adaptation to unstable dynamics, where a selective change in endpoint stiffness occurred without any modification of net joint torque.
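A minimal sketch of how an endpoint stiffness matrix can be estimated from perturbation data of this kind, assuming the quasi-static relation dF = -K dx between small hand displacements and the resulting restoring-force changes; the displacement magnitude, stiffness values and noise level below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): estimate a 2x2 endpoint
# stiffness matrix K from small hand displacements dx and the resulting
# restoring-force changes dF, assuming the quasi-static relation dF = -K dx.

rng = np.random.default_rng(0)

# Eight displacement directions (unit vectors), as in the perturbation protocol.
angles = np.arange(8) * np.pi / 4
dx = 0.008 * np.column_stack((np.cos(angles), np.sin(angles)))  # 8 mm pulses

# Simulated "measured" force changes for a known stiffness (illustration only).
K_true = np.array([[300.0, 50.0],
                   [50.0, 600.0]])
dF = -dx @ K_true.T + rng.normal(scale=0.2, size=dx.shape)

# Least-squares fit: solve dF ~ -dx K^T for K.
K_est, *_ = np.linalg.lstsq(-dx, dF, rcond=None)
K_est = K_est.T
print(np.round(K_est, 1))
```

With eight displacement directions the 2x2 matrix is over-determined, so the least-squares fit also averages out measurement noise.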
Abstract:
Purpose: Advocates and critics of target-setting in the workplace seem unable to reach beyond their own well-entrenched battle lines. While the advocates of goal-directed behaviour point to what they see as demonstrable advantages, the critics of target-setting highlight equally demonstrable disadvantages. Indeed, the academic literature on this topic is currently mired in controversy, with neither side seemingly capable of envisaging a better way forward. This paper seeks to break the current deadlock and move thinking forward in this important aspect of performance measurement and management by outlining a new, more fruitful approach, based on both theory and practical experience. Design/methodology/approach: The topic was approached in three phases: assembling and reading key academic and other literature on the subject of target-setting and goal-directed behaviour, with a view to understanding, in depth, the arguments advanced by the advocates and critics of target-setting; comparing these published arguments with one's own experiential findings, in order to bring the essence of disagreement into much sharper focus; and then bringing to bear the academic and practical experience to identify the essential elements of a new, more fruitful approach offering all the benefits of goal-directed behaviour with none of the typical disadvantages of target-setting. Findings: The research led to three key findings: the advocates of goal-directed behaviour and critics of target-setting each make valid points, as seen from their own current perspectives; the likelihood of these two communities, left to themselves, ever reaching a new synthesis, seems vanishingly small (with leading thinkers in the goal-directed behaviour community already acknowledging this); and, between the three authors, it was discovered that their unusual combination of academic study and practical experience enabled them to see things differently. Hence, they would like to share their new thinking more widely. Research limitations/implications: The authors fully accept that their paper is informed by extensive practical experience and, as yet, there have been no opportunities to test their findings, conclusions and recommendations through rigorous academic research. However, they hope that the paper will move thinking forward in this arena, thereby informing future academic research. Practical implications: The authors hope that the practical implications of the paper will be significant, as it outlines a novel way for organisations to capture the benefits of goal-directed behaviour with none of the disadvantages typically associated with target-setting. Social implications: Given that increased efficiency and effectiveness in the management of organisations would be good for society, the authors think the paper has interesting social implications. Originality/value: Leading thinkers in the field of goal-directed behaviour, such as Locke and Latham, and leading critics of target-setting, such as Ordóñez et al. continue to argue with one another - much like, at the turn of the nineteenth century, proponents of the "wave theory of light" and proponents of the "particle theory of light" were similarly at loggerheads. Just as this furious scientific debate was ultimately resolved by Taylor's experiment, showing that light could behave both as a particle and wave at the same time, the authors believe that the paper demonstrates that goal-directed behaviour and target-setting can successfully co-exist. 
© Emerald Group Publishing Limited.
Abstract:
Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements in unstable environments with varied directions. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society.
Abstract:
Visual recognition problems often involve classification of myriads of pixels, across scales, to locate objects of interest in an image or to segment images according to object classes. The requirement for high speed and accuracy makes the problems very challenging and has motivated studies on efficient classification algorithms. A novel multi-classifier boosting algorithm is proposed to tackle the multimodal problems by simultaneously clustering samples and boosting classifiers in Section 2. The method is extended into an online version for object tracking in Section 3. Section 4 presents a tree-structured classifier, called Super tree, to further speed up the classification time of a standard boosting classifier. The proposed methods are demonstrated for object detection, tracking and segmentation tasks. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
1-D engine simulation models are widely used for the analysis and verification of air-path design concepts and prediction of the resulting engine transient response. The latter often requires closed-loop control over the model to ensure operation within physical limits and tracking of reference signals. For this purpose, a particular implementation of Model Predictive Control (MPC) based on a corresponding Mean Value Engine Model (MVEM) is reported here. The MVEM is linearised on-line at each operating point to allow for the formulation of quadratic programming (QP) problems, which are solved as part of the proposed MPC algorithm. The MPC output is used to control a 1-D engine model. The closed-loop performance of such a system is benchmarked against the solution of a related optimal control problem (OCP). As an example, this study is focused on the transient response of a light-duty car Diesel engine. For the cases examined, the proposed controller implementation gives a more systematic procedure than other ad hoc approaches that require considerable tuning effort. © 2012 IFAC.
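As a rough illustration of the condensed-QP step that such an MPC scheme involves, the sketch below assumes a hypothetical linearised model x_{k+1} = A x_k + B u_k (standing in for the on-line linearisation of the MVEM), a quadratic tracking cost and box constraints on the actuator, and solves the resulting bounded least-squares problem with SciPy; all matrices, weights and limits are made up.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical sketch of the condensed-QP step of a linear MPC: the model is
# linearised to x_{k+1} = A x_k + B u_k, and the finite-horizon tracking
# problem with actuator bounds is solved as a bounded least-squares (QP)
# problem. A, B, weights, limits and references are illustrative only.

A = np.array([[0.95, 0.10], [0.0, 0.90]])   # linearised state matrix
B = np.array([[0.0], [0.05]])               # linearised input matrix
N = 20                                      # prediction horizon
x0 = np.array([0.0, 0.0])                   # current (deviation) state
r = np.array([1.0, 0.0])                    # reference state
q, ru = 1.0, 0.01                           # state / input weights
u_min, u_max = -1.0, 1.0                    # actuator limits

nx, nu = B.shape
# Prediction matrices: X = Phi x0 + Gamma U
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

# Condensed least-squares form of the QP: min ||M U - d||^2 subject to bounds.
M = np.vstack([np.sqrt(q) * Gamma, np.sqrt(ru) * np.eye(N * nu)])
d = np.concatenate([np.sqrt(q) * (np.tile(r, N) - Phi @ x0), np.zeros(N * nu)])
sol = lsq_linear(M, d, bounds=(u_min, u_max))

u_now = sol.x[:nu]        # only the first input is applied (receding horizon)
print("first control move:", u_now)
```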
Abstract:
In this paper, the codes of the Pattern Informatics (PI) method proposed by Rundle et al. have been implemented according to their published algorithm, and a retrospective forecast with the PI method has been tested for North China (28.0°-42.0°N, 108.0°-125.0°E) and for Southwest China (22.0°-28.3°N, 98.0°-106.0°E). The results show that the hit rates differ greatly between regions. In Southwest China, 32 earthquakes with M(L)5.0 or larger occurred during the predicted time period 2000-2007, and 26 of the 32 occurred in or near the hot spots. In North China, only 12 earthquakes with M(L)5.0 or larger occurred during the same period, and only 3 of the 12 occurred in or near the hot spots. From these results, we hold that if the PI method is to be applied to regions of all kinds, the parameters associated with time points and time windows should be chosen carefully to obtain a higher hit rate. We also found that aftershocks in a strong earthquake sequence clearly affect the PI results. Copyright (c) 2009 John Wiley & Sons, Ltd.
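A hedged sketch of how a hit rate of this kind could be computed from a gridded set of PI hot spots and an earthquake catalogue; the cell size, the "near a hot spot" tolerance and the toy data are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch: count how many catalogue events fall in or near PI
# "hot spot" cells. Grid spacing, tolerance and data are illustrative only.

def hit_rate(quakes, hotspots, cell=0.1, tol=1):
    """quakes: (lon, lat) pairs; hotspots: set of (i, j) hot cells;
    an event is a hit if its cell is within `tol` cells of a hot spot."""
    hits = 0
    for lon, lat in quakes:
        i, j = int(lon // cell), int(lat // cell)
        if any((i + di, j + dj) in hotspots
               for di in range(-tol, tol + 1)
               for dj in range(-tol, tol + 1)):
            hits += 1
    return hits / len(quakes)

# Toy example with made-up numbers (not the paper's catalogue):
hotspots = {(1031, 242), (1030, 243)}
quakes = [(103.12, 24.25), (100.00, 26.00), (103.05, 24.31)]
print(hit_rate(quakes, hotspots))   # fraction of events in/near hot spots
```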
Abstract:
Objective: To solve the two key difficulties in B-spline surface reconstruction, namely automatic generation of the rectangular topological mesh and parameterization, a B-spline surface reconstruction algorithm based on inverse parameterization is proposed. Methods: A base surface is first constructed and sampled according to the parameters (u,v); the data are filtered and thinned along the surface normal direction to obtain the reduced point corresponding to each (u,v); a B-spline surface is then fitted to the set of reduced points obtained by sampling. The method offers a new approach to B-spline surface reconstruction. Results: The new algorithm departs from the conventional procedure for B-spline surface reconstruction from dense scattered point clouds by performing parameterization as the inverse of forward parameterization, thereby solving the problems of automatic rectangular-mesh generation and parameterization. Experimental analysis shows that the new algorithm not only completes data filtering and thinning during parameterization, but is also advantageous in running time and iteration efficiency. Conclusions: The new algorithm avoids the iterative computation of normals and makes automatic generation of the rectangular topological mesh straightforward; it has been verified in application on a self-developed integrated intelligent measurement, modelling and machining system.
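The final fitting step described above, fitting a B-spline surface to the reduced point set sampled on the base surface, can be sketched with SciPy's bivariate spline routines; the base-surface construction and normal-direction filtering specific to the paper are not reproduced, and the data below are synthetic.

```python
import numpy as np
from scipy import interpolate

# Hypothetical sketch of only the last step described in the abstract:
# fitting a tensor-product B-spline surface to an already reduced point set.
# The base-surface sampling and normal-direction thinning are not reproduced;
# the data here are synthetic.

u, v = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = np.sin(2 * np.pi * u) * np.cos(2 * np.pi * v)      # synthetic "reduced" points

# Cubic B-spline surface fit with a small smoothing factor.
tck = interpolate.bisplrep(u.ravel(), v.ravel(), z.ravel(), kx=3, ky=3, s=0.01)

# Evaluate the fitted surface on a finer grid.
uu = np.linspace(0, 1, 50)
vv = np.linspace(0, 1, 50)
zz = interpolate.bisplev(uu, vv, tck)
print(zz.shape)   # (50, 50)
```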
Abstract:
The images are first segmented into regions using the fuzzy C-means clustering algorithm in a feature space formed from multiple features, and a multiscale wavelet decomposition is then applied to the regions. Next, a fuzzy region similarity is constructed using the Cauchy function, and weighting factors are built from the fuzzy similarity and the regional information content to obtain the wavelet coefficients of the fused image. Finally, the fused image is obtained by the inverse wavelet transform. Five criteria (root-mean-square error, peak signal-to-noise ratio, entropy, cross entropy and mutual information) are used to evaluate the performance of the fusion algorithm. Experimental results show that the proposed method has good fusion characteristics.
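A rough sketch of the wavelet-domain fusion step using PyWavelets; the fuzzy C-means segmentation and Cauchy-function similarity from the paper are not reproduced, and a simple energy-based weight stands in for the region-based weighting factor.

```python
import numpy as np
import pywt

# Hypothetical sketch of wavelet-domain fusion: decompose two registered
# images, combine corresponding detail coefficients with weights, and invert.
# The paper's fuzzy C-means segmentation and Cauchy-function similarity are
# not reproduced; a simple energy-based weight stands in for them.

def fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # average the approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        bands = []
        for a, b in zip((ha, va, da), (hb, vb, db)):
            wa = a**2 / (a**2 + b**2 + 1e-12)        # stand-in weighting factor
            bands.append(wa * a + (1.0 - wa) * b)
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(fuse(a, b).shape)
```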
Abstract:
Six-axis force sensors based on the Stewart platform are compact, stiff and wide-ranging, and have broad application prospects in fields such as industrial robotics and space-station docking. A good calibration method is the basis for correct use of the sensor. Because a Stewart-platform-based six-axis force sensor is a complex nonlinear system, conventional linear calibration methods inevitably introduce large calibration errors and degrade its performance. In essence, calibration is the determination of the mapping function from the space of measured values to the space of theoretical values. Function approximation theory shows that when function values are given only on a known point set, the unknown function can be approximated by simpler functions such as polynomials or piecewise polynomials. Based on this idea, this paper divides the whole measurement space into several contiguous measurement subspaces and performs a linear calibration in each subspace, thereby improving the calibration accuracy of the whole measurement system. Experimental analysis shows that the calibration method is effective.
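A minimal sketch of the piecewise-linear idea: partition the measurement space into subregions and fit a separate 6x6 calibration matrix in each by least squares; the partition rule (the sign of one measured channel) and the synthetic data are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of piecewise linear calibration: the measurement space
# is split into subregions and a separate linear map C (theory ~ C @ measured)
# is fitted in each one by least squares. The partition rule (here: sign of
# the measured Fz channel) and the synthetic data are illustrative only.

rng = np.random.default_rng(1)

def region_of(m):
    return 0 if m[2] < 0.0 else 1             # split on one measured channel

# Synthetic "measured" (6-channel) and "true" loads with a mild nonlinearity.
M = rng.normal(size=(500, 6))
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))
T = M @ C_true.T + 0.02 * np.tanh(M)          # weak nonlinear distortion

# Fit one calibration matrix per subregion.
calib = {}
for r in (0, 1):
    idx = [i for i in range(len(M)) if region_of(M[i]) == r]
    C_r, *_ = np.linalg.lstsq(M[idx], T[idx], rcond=None)
    calib[r] = C_r.T                          # so that t ~ calib[r] @ m

def apply_calibration(m):
    return calib[region_of(m)] @ m

print(apply_calibration(M[0]))
```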
Abstract:
By seismic tomography, interesting results have been achieved since the 1980s not only in research on the Earth at large scales but also in the exploration of resources and engineering projects at small scales. Compared with traditional inversion methods, seismic tomography can offer more detailed information about the subsurface and has been attracting increasing attention from geophysicists. Since inversion is based on forward modeling, we have studied and improved methods to calculate seismic traveltimes and raypaths in isotropic and anisotropic media, and applied the improved forward methods to traveltime tomography. There are three main kinds of methods to calculate the seismic traveltime field and its raypath distribution: ray-tracing theory, the finite-difference solution of the eikonal equation, and the minimum traveltime tree algorithm. For ray tracing, five methods are introduced in the paper, including analytic ray tracing, ray shooting, ray bending, grid ray tracing and three-point rectangular-grid ray perturbation. The finite-difference solution of the eikonal equation is very efficient for calculating seismic first breaks, but is awkward for calculating reflection traveltimes. We have put forward an idea to calculate the traveltimes of reflected waves by combining the eikonal-equation method with another method, in order to improve its capability of dealing with reflected waves. The minimum traveltime tree algorithm has been studied with emphasis. Three improved algorithms are put forward on the basis of the basic minimum traveltime tree algorithm. The first is the raypath-tracing-backward minimum traveltime algorithm, in which not only wavelets from the current source but also wavelets from upper source points are calculated; it clearly improves the speed of calculating traveltimes and raypaths in layered or blocked homogeneous media while keeping good accuracy. The second is the raypath key-point minimum traveltime algorithm, in which traveltimes and raypaths are calculated in terms of the key points of raypaths (the pivotal points that determine the raypaths). The raypath key-point method is developed from the first improved algorithm and has better applicability; for example, it remains efficient even for inhomogeneous media. The third improved algorithm, the double-grid minimum traveltime tree algorithm, is based on the raypath key-point scheme: the model is divided with two kinds of grids so that unnecessary calculation can be left out. Strong undulation of curved interfaces often results in there being no reflection points on some parts of an interface where there should be. An effective scheme in which curved interfaces are divided into segments that are treated separately is presented to solve this problem. In addition, approximating interfaces with discrete grids leads to large errors in the calculation of traveltimes and raypaths. Noting this, we have devised a new method that removes the negative effect of the mesh and improves accuracy by correcting the traveltimes with a small amount of additional calculation, and obtained better results.
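A minimal sketch of the basic minimum-traveltime-tree idea (a Dijkstra-style shortest-path computation of first-arrival times over grid nodes); the improved variants described above are not reproduced, and the grid, velocities and neighbour stencil are illustrative assumptions.

```python
import heapq
import numpy as np

# Hypothetical sketch of the basic minimum traveltime tree idea: first-arrival
# traveltimes are shortest-path times through a graph of grid nodes, with edge
# times given by distance times the average slowness of the two nodes.

def first_arrivals(slowness, src, h=1.0):
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        t0, (i, j) = heapq.heappop(heap)
        if t0 > t[i, j]:
            continue                      # stale heap entry
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                dist = h * np.hypot(di, dj)
                tt = t0 + dist * 0.5 * (slowness[i, j] + slowness[ni, nj])
                if tt < t[ni, nj]:
                    t[ni, nj] = tt
                    heapq.heappush(heap, (tt, (ni, nj)))
    return t

model = np.full((50, 50), 1.0 / 2000.0)   # uniform 2000 m/s medium (slowness)
model[25:, :] = 1.0 / 3500.0              # faster lower half-space
times = first_arrivals(model, src=(0, 25), h=10.0)
print(times[-1, -1])
```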
Abstract:
Describing the space-time properties of geological phenomena visually is one of the most important parts of geological research. Such visual images are usually helpful for analysing geological phenomena and for discovering the regularities behind them. This report mainly studies three application problems of scientific visualization in geology. (1) Visualizing geological bodies. A new geometric modelling technique with trimmed surface patches has been developed to visualize geological bodies. Constructional surfaces are represented as trimmed surfaces, and a constructional solid is represented by upper and lower surfaces composed of trimmed surface patches taken from constructional surfaces. The technique can completely and unambiguously represent the structure of a geological body. It has been applied to visualization of the coal deposit in Huolinhe, the aquifer thermal energy storage in Tianjin and the meteorite impact structure in Cangshan, among others. (2) Visualizing geological space fields. Efficient visualization methods are discussed. The Marching Cubes algorithm has been improved and is used to extract iso-surfaces from 3D data sets, iso-lines from 2D data sets and iso-points from 1D data sets. The improved method has been used to visualize the distribution and evolution of abnormal pressures in the Zhungaer Basin. (3) Visualizing pore space. A novel way is proposed to define the distance from any point to a convex set. On this basis, a convex-set-skeleton-based implicit surface modelling technique is developed and used to construct a simplified pore-space model. A Buoyancy Percolation numerical simulation platform has been developed to simulate the migration of oil in porous media saturated with water.
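For the pore-space part, one standard way to compute the distance from a point to a convex set given as the convex hull of sample points is to solve a small quadratic program over convex-combination weights, as sketched below; this is a generic construction, not the paper's own distance definition.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch: distance from a point q to the convex hull of sample
# points P, by minimising ||P^T w - q|| over convex-combination weights w.
# This is a generic construction, not the paper's novel distance definition.

def dist_to_convex_hull(q, P):
    n = len(P)
    w0 = np.full(n, 1.0 / n)
    objective = lambda w: np.sum((P.T @ w - q) ** 2)
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    closest = P.T @ res.x
    return np.linalg.norm(closest - q), closest

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # a triangle
d, p = dist_to_convex_hull(np.array([1.0, 1.0]), P)
print(round(d, 4), p)     # ~0.7071, closest point on the hypotenuse
```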
Abstract:
Durbin, J., Urquhart, C. & Yeoman, A. (2003). Evaluation of resources to support production of high quality health information for patients and the public. Final report for NHS Research Outputs Programme. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Department of Health
Abstract:
IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, pp. 1338-1343, 2003.
Abstract:
C.M. Onyango, J.A. Marchant and R. Zwiggelaar, 'Modelling uncertainty in agricultural image analysis', Computers and Electronics in Agriculture 17 (3), 295-305 (1997)