944 results for Hamming ball
Abstract:
This paper proposes the concept of a mechatronic actuator that converts the rotary motion of a motor into linear motion and, without external force or torque sensors, estimates the motor's output torque and the traction force at the actuator's end from the micro-strain generated in the motion-conversion pair. High-precision, high-sensitivity strain gauges with temperature- and noise-compensation circuitry are used to measure the micro-strain, yielding stable and reliable measurements. Bonding the strain gauges directly to the actuator's main structural surface minimizes size and achieves true mechatronic integration rather than a mechanical-electronic assembly. Simulation experiments show that, compared with the conventional current-based estimation method, this approach reduces errors caused by noise and fluctuation in the current signal and yields more accurate information; compared with adding force or torque sensors, it effectively reduces the overall size and removes unnecessary mechanical and electronic interfaces, thereby increasing system robustness.
Abstract:
This paper proposes a bit-weighted coding strategy for bidirectional associative memory (BAM) neural networks and gives a recursive algorithm for computing the weights. It modifies Kosko's BAM, which matches patterns by Hamming distance, to match patterns by weighted Hamming distance, so that the network retains good associative ability even for the so-called "ill-conditioned" class of sample pattern sets that do not satisfy the continuity condition. Computer simulations of binary image pattern storage and recall show that the method has excellent performance and practical value.
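To make the matching rule concrete, here is a minimal Python sketch of weighted Hamming distance recall: stored patterns are compared to a probe by summing per-bit weights at disagreeing positions, and the closest pattern is returned. This illustrates only the distance measure; the paper's bit-weight coding strategy and recursive weight computation are not specified here, so the weights below are purely illustrative.

```python
import numpy as np

def weighted_hamming(a, b, w):
    """Weighted Hamming distance: sum of weights at positions where a and b differ."""
    a, b, w = np.asarray(a), np.asarray(b), np.asarray(w)
    return float(np.sum(w * (a != b)))

def recall(probe, stored, w):
    """Return the stored pattern with the smallest weighted Hamming distance to the probe."""
    dists = [weighted_hamming(probe, s, w) for s in stored]
    return stored[int(np.argmin(dists))]

stored = [np.array([1, 0, 1, 1]), np.array([0, 0, 1, 0])]
w = np.array([2.0, 1.0, 1.0, 0.5])   # per-bit weights (illustrative)
probe = np.array([1, 0, 1, 0])
# probe differs from stored[0] only in the last (low-weight) bit,
# so recall(probe, stored, w) returns stored[0]
```

With uniform weights this reduces to Kosko's plain Hamming-distance matching; non-uniform weights let important bit positions dominate the match.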
Abstract:
The parallel machine tool based on the general Stewart mechanism is a new generation of intelligent metal-cutting machine. However, its forward and inverse kinematic position solutions are strongly nonlinear and difficult to obtain. To meet accuracy requirements, the prototype studied here adopts ball-screw transmission, which introduces joint-motion coupling and makes the forward and inverse kinematics even more complex. Using the principle of kinematic equivalence, equivalent serial mechanisms for the whole machine and for each limb are introduced, and an iterative algorithm for the forward and inverse kinematics is established with the equivalent generalized coordinates as intermediate variables. Simulation and control experiments show that the algorithm converges quickly and is convenient for practical use.
Abstract:
This paper describes the application of an improved Hamming network as a coarse classifier in a practical printed Chinese character recognition system. The structure of the neural network classifier, which takes 3755 printed Chinese characters as its multi-pattern classification objects, and its corresponding algorithm are given. The method was implemented in software simulation on a microcomputer, with satisfactory results.
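For readers unfamiliar with the Hamming network used above, here is a minimal sketch of the classical network's matching layer (not the paper's improved variant, whose details are not given here): for bipolar (+1/-1) pattern vectors, each template's matching score (number of agreeing bits, i.e. n minus the Hamming distance) is computed as a dot product, and the top-k templates serve as coarse-classification candidates.

```python
import numpy as np

def hamming_net_scores(probe, templates):
    """Lower layer of a classical Hamming net: for bipolar (+1/-1) vectors,
    templates @ probe = (#matches - #mismatches), so the number of matching
    bits is (templates @ probe + n) / 2, i.e. n minus the Hamming distance."""
    n = templates.shape[1]
    return (templates @ probe + n) / 2.0

def coarse_classify(probe, templates, k=3):
    """Return indices of the k templates with the most matching bits."""
    scores = hamming_net_scores(probe, templates)
    return list(np.argsort(-scores)[:k])

templates = np.array([[1, 1, -1, -1],
                      [1, -1, 1, -1],
                      [-1, -1, 1, 1]])
probe = np.array([1, 1, -1, 1])
# probe agrees with template 0 in 3 of 4 bits, so index 0 ranks first
```

In a character recognition system, the templates would be the 3755 character prototypes and the fine classifier would only examine the k candidates returned here.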
Abstract:
The real Earth is far from an ideal elastic body. The motion of structures or fluids and scattering by thin layers inevitably affect seismic wave propagation, manifested mainly as non-geometrical attenuation of energy. Today, most theoretical research and applications assume that the media studied are fully elastic. Ignoring viscoelasticity can, in some circumstances, distort amplitude and phase, which in turn degrades the traveltime and waveform information used in imaging and inversion. To investigate the response of seismic wave propagation and improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it with efficient numerical methods and imaging techniques. For numerical modeling, the most widely used methods, such as finite-difference, finite-element, and pseudospectral algorithms, have difficulty improving accuracy and efficiency simultaneously. To partially overcome this difficulty, this dissertation devises a matrix differentiator method and an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, as well as a staggered-grid optimal Shannon singular-kernel convolutional differentiator based on function distribution theory, which are then used to study seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the tension between accuracy and efficiency well and are nearly twice as accurate as finite differences of the same operator length. They efficiently reduce dispersion and provide high-precision waveform data.
Building on frequency-domain wavefield modeling, we discuss how to solve the linear equations directly and point out that, compared with time-domain methods, frequency-domain methods handle the multi-source problem more conveniently and incorporate medium attenuation much more easily. We also demonstrate the equivalence of the time- and frequency-domain methods with numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of the observed waveform differences. In frequency-domain waveform inversion, experiments were conducted with transmission, crosshole, and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the capacity of frequency-domain sequential inversion to resist noise and cope with the non-uniqueness of nonlinear optimization. In the crosshole experiments, we identify the main sources of inversion error and determine how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies were chosen with an optimal frequency-selection strategy and used in sequential and simultaneous inversions to verify the importance of low-frequency data to the inverted results and the noise resistance of simultaneous inversion. Finally, I draw conclusions about the work in this dissertation, discuss its existing and potential problems in detail, and point out directions and theory worth deepening, which may provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
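As background for the differentiators discussed above, here is a minimal illustration of the basic staggered-grid Fourier pseudospectral derivative that the optimal convolutional differentiators approximate (a sketch of the standard operator, not the dissertation's optimized method): differentiation in the wavenumber domain by ik, combined with a half-cell phase shift exp(ik dx/2) so the derivative is evaluated on the staggered grid.

```python
import numpy as np

def staggered_fourier_deriv(f, dx):
    """First derivative evaluated on a grid shifted by dx/2 (staggered grid),
    via the Fourier pseudospectral method: multiply the spectrum by
    ik * exp(ik*dx/2) and transform back."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.exp(1j * k * dx / 2.0) * np.fft.fft(f)))

n, L = 64, 2.0 * np.pi
dx = L / n
x = np.arange(n) * dx
df = staggered_fourier_deriv(np.sin(x), dx)
# d/dx sin evaluated at the staggered points x + dx/2 is cos(x + dx/2);
# for this band-limited input the error is at machine-precision level
err = np.max(np.abs(df - np.cos(x + dx / 2.0)))
```

A same-length finite-difference stencil applied to the same input incurs visible dispersion error; the spectral operator is exact up to the Nyquist wavenumber, which is the accuracy benchmark the convolutional differentiators are designed against.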
Abstract:
The Sinian-Cambrian transition was a major turning point in Earth history, the passage from the Cryptozoic to the Phanerozoic, during which the sea, land, and atmosphere changed in markedly different ways; it is a transition interval of special significance. The marine sedimentary sequences widely developed in the Yangtze region of China effectively record the important geological events of the Sinian-Cambrian transition and thus provide a unique setting for studying the interactions among the atmosphere, biosphere, lithosphere, and hydrosphere during this period. In Precambrian-Cambrian research, the lack of index fossils severely limits the otherwise effective methods of biostratigraphy in subdividing and correlating the upper Precambrian; stable carbon isotope analysis of sedimentary organic kerogen and the associated carbonates has therefore become an extremely important method for global correlation and subdivision. Although some researchers have carried out geochemical studies of the Yangtze platform for many years, studies using the carbon isotope compositions of carbonates and coexisting organic matter to examine the varied depositional environments across the broad Yangtze platform are still lacking, as are analyses of the relationship between biological evolution and environmental change during this period. This study takes the Yangtze region of China as its study area and uses the carbon isotope compositions of sedimentary carbonates and coexisting organic matter to analyze the varied depositional environments of the Yangtze platform (platform facies, basin facies, and the transitional zone). A preliminary geochemical model is established to interpret the sedimentary geochemical record of the Sinian-Cambrian transition and to explore the intrinsic and extrinsic links among the regional carbon cycle, environmental change, and geological events on the Yangtze platform:
(1) Nantuo stage: the organic carbon isotope composition (averaging about -35.0‰ at the Weng'an section) shows a strong negative anomaly. The Earth is described as a Snowball or a partially ice-covered Slushball; the water column was stagnant with weak hydrodynamics, primary productivity was low, and the material source was mainly deep-sourced. Life was poorly developed, consisting mainly of bacteria and lower eukaryotes. Gas exchange between air and seawater took place through cracks in the ice, promoting carbonate dissolution, and organic carbon cycling proceeded mainly through anaerobic processes such as bacterial sulfate reduction.
(2) Doushantuo and Dengying stages: from the late Nantuo to the early Doushantuo, the carbonate carbon isotope composition of seawater remained briefly negative (δ13Ccarb-avg of -2.8‰ at Weng'an; -3.5‰ and -8.6‰ at Songtao sections 1 and 2), while the organic carbon isotope composition shifted positively overall (δ13Corg-avg of -26.3‰ at Weng'an, reaching -26.7‰ at the Nanming section). This was the time of the globally distributed "cap" carbonates: volcanic degassing near the surface released about 350 times the modern level of CO2, causing rapid warming, melting of the ice, intensified continental weathering, and sea-level rise; the "snowball" turned into a "greenhouse", and large amounts of atmospheric CO2 were rapidly converted to calcium carbonate and deposited in seawater. The world may have experienced an unusually high marine sedimentation rate. The subsequent marked rise in δ13Ccarb values in the Doushantuo Formation suggests strengthened biological activity and a clear increase in organic carbon burial; the increased burial of organic carbon and reduced sulfur led to 34S enrichment of the upper seawater and high sulfur isotope compositions. Hydrothermal activity and upwelling contributed to the formation of the Weng'an phosphorite and the flourishing of the Weng'an biota. In the post-Nantuo Doushantuo and Dengying stages, the high δ13C values arose mainly from a rapid increase in the productivity of photosynthetic marine plankton, higher marine sedimentation rates, an anoxic layer in the deep ocean water column, hydrothermal activity, upwelling, and stratified seawater, whereas the short-lived negative isotope excursions and changes in productivity were probably caused by regional events. A pronounced negative carbon isotope excursion appears near the Precambrian/Cambrian boundary, reflecting a short-term overturn of the carbon cycle consistent with the end-Sinian biotic extinction and environmental change.
(3) Niutitang stage: this study finds that in the black rock series at the base of the Niutitang/Guojiaba Formation, the isotope compositions of organic carbon, inorganic carbon, organic sulfur, and pyrite sulfur are relatively low, while organic carbon and pyrite contents are relatively high. δ13Corg-avg and δ13Ccarb-avg are -33.9±0.7‰ and -2.5±0.4‰, respectively; TOC > 0.5; pyrite content averages 0.96%; the average pyrite (δ34SCRS) and organic sulfur (δ34SOBS) isotope compositions are 0.3±7.5‰ and 3.4±7.1‰, respectively. Because environmental change in the early Niutitang was frequent and unstable, the Yangtze region was in a special interval: the carbon and sulfur isotope compositions continued the negative excursion of the upper Sinian, and transgression, reducing conditions, anoxic events, rift-related volcanism, gas-fluid venting, and hydrothermal activity produced relatively deep seawater, increased organic carbon burial, and polymetallic mineralization. In the middle-late Niutitang, the carbon isotope composition stabilized and became heavier, and organic carbon and pyrite contents decreased: the average carbon isotope compositions of carbonate and organic matter are 0.31±1.0‰ and -31.4±1.3‰ (Shatan), showing a stable positive excursion; average TOC is 0.8%; pyrite in samples from the middle-upper Guojiaba Formation at the Shatan section averages 0.5%; δ34SCRS-avg and δ34SOBS-avg are 17.8±2.0‰ and 16.9±1.8‰. In the middle Niutitang, falling CO2 levels in the atmosphere and hydrosphere and a stabilizing environment promoted the Cambrian biotic flourishing, possibly linked to increased Cambrian bioproductivity and microbial activity. The Niutitang environment can be analyzed as follows: with global warming and rapidly rising sea level, upwelling was active; because of stratified seawater, organic matter production was high near and above the oxic zone, making the earliest Cambrian a key interval for the proliferation of plants and the explosion of shelly animals. The carbon isotope composition changed from unstable negative excursions during the Sinian-Cambrian transition to stable positive excursions, consistent with changes elsewhere in the world. The organic-carbon- and pyrite-rich black shales of the Lower Cambrian point to anoxic conditions in the Early Cambrian.
(4) Kaili stage: regular variations in organic and inorganic carbon isotope compositions across the Early-Middle Cambrian transition appear in the Taijiang section of the Yangtze platform. Changes in organic matter burial are linked to biotic change from the Lower to the Middle Cambrian. The patterns of variation in carbonate and organic carbon isotope compositions reflect the changeable depositional environments and the fluctuating carbon cycle of the Sinian-Cambrian transition, interrelated with the changing paleoenvironmental background, environmental conditions, and biological evolution. The carbonate carbon isotope composition preserves the original isotopic signal of seawater; submarine hydrothermal activity and upwelling may be important factors affecting it. However, different regions in the same interval show both similarities and large differences, so regional and local events require further study.
Abstract:
Reasoning about motion is an important part of our commonsense knowledge, involving fluent spatial reasoning. This work studies the qualitative and geometric knowledge required to reason in a world that consists of balls moving through space constrained by collisions with surfaces, including dissipative forces and multiple moving objects. An analog geometry representation serves the program as a diagram, allowing many spatial questions to be answered by numeric calculation. It also provides the foundation for the construction and use of place vocabulary, the symbolic descriptions of space required to do qualitative reasoning about motion in the domain. The actual motion of a ball is described as a network consisting of descriptions of qualitatively distinct types of motion. Implementing the elements of these networks in a constraint language allows the same elements to be used for both analysis and simulation of motion. A qualitative description of the actual motion is also used to check the consistency of assumptions about motion. A process of qualitative simulation is used to describe the kinds of motion possible from some state. The ambiguity inherent in such a description can be reduced by assumptions about physical properties of the ball or assumptions about its motion. Each assumption directly rules out some kinds of motion, but other knowledge is required to determine the indirect consequences of making these assumptions. Some of this knowledge is domain dependent and relies heavily on spatial descriptions.
Abstract:
What trajectory does a soccer player follow from taking the first steps with the ball to reaching the level of performance required to compete in a professional league? How does this happen in basketball or handball? Many factors undoubtedly influence this process. Among them, the influence of "deliberate practice" on athlete development has received close attention in recent years. However, several authors and studies argue that deliberate practice is not the only influence: "deliberate play", whether in the same sport or in other sports, is also very important, and the two types of practice are compatible, forming a continuum over time. This article aims to present the state of the art of this debate in the field of team sports, analyzing whether athletes in team sports specialize in a single sport from the beginning or practice several sports before finally devoting themselves exclusively to one. The results suggest that there is no single path in athlete development, and that social and cultural factors are what really condition the process.
Abstract:
ROSSI: Emergence of communication in Robots through Sensorimotor and Social Interaction, T. Ziemke, A. Borghi, F. Anelli, C. Gianelli, F. Binkovski, G. Buccino, V. Gallese, M. Huelse, M. Lee, R. Nicoletti, D. Parisi, L. Riggio, A. Tessari, E. Sahin, International Conference on Cognitive Systems (CogSys 2008), University of Karlsruhe, Karlsruhe, Germany, 2008 Sponsorship: EU-FP7
Abstract:
Burnley, M., Doust, J.H., Ball, D. and Jones, A.M. (2002) Effects of prior heavy exercise on VO2 kinetics during heavy exercise are related to changes in muscle activity. Journal of Applied Physiology 93, 167-174. RAE2008
Abstract:
Current evidence increasingly suggests that very short, supra-maximal bouts of exercise can have significant health and performance benefits. Most research in the area, however, uses laboratory-based protocols, which can lack ecological validity. The purpose of this study was to examine the effects of a high-intensity sprint-training programme on hockey-related performance measures. 14 semi-professional hockey players either completed a 4-week high-intensity training (HIT) intervention, consisting of six HIT sessions that progressively increased in volume (n=7), or followed their normal training programme (Con; n=7). Straight-line sprint speed with and without a hockey stick and ball, and slalom sprint speed with and without a hockey stick and ball, were used as performance indicators. Maximal sprint speed over 22.9 m was also assessed. Upon completion of the four-week intervention, straight-line sprint speed improved significantly in the HIT group (~3%), with no change in performance for the Con group. Slalom sprint speed, both with and without a hockey ball, did not differ significantly after the training programme in either group. Maximal sprint speed improved significantly (12.1%) in the HIT group, with no significant performance change in the Con group. The findings of this study indicate that a short period of HIT can significantly improve hockey-related performance measures and could be beneficial to athletes and coaches in field settings.
Abstract:
The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
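As a point of reference for the comparison above, here is a minimal Python sketch of the greedy clique heuristic in its common form (the paper's exact implementation is not specified here, and the toy graph is illustrative): repeatedly add the candidate vertex with the most neighbours among the remaining candidates, then restrict the candidates to that vertex's neighbourhood.

```python
def greedy_clique(adj):
    """Greedy clique heuristic: grow a clique by repeatedly adding the
    candidate vertex with the most neighbours among the remaining candidates.
    `adj` maps each vertex to its set of neighbours (no self-loops)."""
    clique = []
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.append(v)
        # only vertices adjacent to every clique member remain candidates
        candidates = candidates & adj[v]
    return clique

# toy graph: vertices 0-3 form a 4-clique, vertex 4 hangs off vertex 0
adj = {
    0: {1, 2, 3, 4},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2},
    4: {0},
}
# greedy_clique(adj) recovers the 4-clique {0, 1, 2, 3}
```

The Ramsey algorithm differs in that it recursively explores both the neighbourhood and the non-neighbourhood of a pivot vertex, which is what gives it the more even performance across distributions reported above.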
Abstract:
Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class, and limited training data. In addition, the computational complexity of the recognition process is a practical concern. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification process. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where an explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of the foreground state, they can also be incorporated in a tracking-by-detection framework to recover foreground state in image sequences.
To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles, and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground-state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge Version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view-angle estimation on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground-state hypotheses, with virtually no loss in classification accuracy.
Abstract:
A common design of an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme in which the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground-state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view-angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute-force approach.
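To make the filter step concrete, here is a minimal Python sketch (names, weights, and the toy bit vectors are illustrative, not the thesis's trained classifiers): each foreground-state hypothesis is summarized by the bit vector of weak-classifier outputs it would produce, the probe image's bit vector is compared to all of them by weighted Hamming distance, and only the k closest hypotheses are passed to the expensive refine step.

```python
import numpy as np

def filter_candidates(probe_bits, hypothesis_bits, weights, k):
    """Filter step of filter-and-refine: rank hypotheses by weighted Hamming
    distance between the probe's weak-classifier bit vector and each
    hypothesis's bit vector, keeping the k closest for refinement."""
    diff = hypothesis_bits != probe_bits        # broadcasts over hypotheses
    dists = (diff * weights).sum(axis=1)
    return list(np.argsort(dists)[:k])

# toy example: 4 hypotheses, 5 weak-classifier bits each
hyp = np.array([[1, 0, 1, 1, 0],
                [0, 0, 1, 1, 0],
                [1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1]])
w = np.ones(5)                 # uniform weights -> plain Hamming distance
probe = np.array([1, 0, 1, 1, 1])
# hypothesis 0 differs from the probe in only one bit, so it ranks first
```

Because the bit vectors are binary, the distance computation is essentially free compared with evaluating every OVA classifier, which is the source of the reported speedup.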
Abstract:
The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed) or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine, in a distributed and scalable manner, the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information for the remaining (remote) clients outside these neighborhoods is captured implicitly by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through iterative reoptimization of the location and number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves performance, under various synthetic and real Internet topologies, comparable to that of optimal, centralized approaches requiring full topology and demand information.
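As a rough illustration of one in-ball reoptimization step described above, here is a toy Python sketch (an assumption-laden simplification: it ignores the mapping of remote clients to the ball's edge and uses unweighted hop counts): gather the radius-r ball around a facility via BFS, then move the facility to the ball's 1-median, the vertex minimizing total hop distance to the other ball members.

```python
from collections import deque

def ball(adj, center, r):
    """Hop distances from `center` to all vertices within radius r (BFS)."""
    dist = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        if dist[u] < r:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return dist

def reoptimize_in_ball(adj, facility, r):
    """One local step: relocate the facility to the 1-median of its radius-r
    ball (remote demand outside the ball is ignored in this toy version)."""
    members = set(ball(adj, facility, r))
    def cost(c):
        # any two ball members are within 2r hops of each other via the center
        d = ball(adj, c, 2 * r)
        return sum(d.get(v, 2 * r + 1) for v in members)
    return min(members, key=cost)

# path graph 0-1-2-3-4; facility starts at vertex 0, ball radius 2
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
# the ball around 0 is {0, 1, 2}; its 1-median is vertex 1,
# so one reoptimization step moves the facility toward the demand
```

In the paper's scheme this local move, iterated across facilities (and combined with opening/closing facilities in the UFL case), is what lets small-radius balls approach the quality of the centralized solution.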