682 results for timed automata
Abstract:
For a class of underactuated AUVs that use ranging sonar as the collision-avoidance sensor, a three-dimensional real-time collision-avoidance method combining the horizontal and vertical planes is proposed. Given the particular characteristics of ranging sonar and underactuated AUVs, a hybrid real-time collision-avoidance architecture is first proposed at two levels, motion planning and path planning, with a collision-avoidance automaton based on event-feedback monitoring designed for the former and a local path-planning algorithm based on immune genetic optimization for the latter. Hardware-in-the-loop simulation experiments over a variety of typical obstacle scenarios show that the proposed method achieves a safe and stable three-dimensional collision-avoidance process for the AUV.
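As an illustration of the event-feedback collision-avoidance automaton described in this abstract, here is a minimal sketch; the states, events, and transitions are hypothetical placeholders, not taken from the paper:

```python
# Hypothetical event-feedback avoidance automaton: sonar events switch the
# AUV between cruising, horizontal avoidance, and vertical avoidance modes.
TRANSITIONS = {
    ("CRUISE", "obstacle_detected"):       "HORIZONTAL_AVOID",
    ("HORIZONTAL_AVOID", "still_blocked"): "VERTICAL_AVOID",
    ("HORIZONTAL_AVOID", "path_clear"):    "CRUISE",
    ("VERTICAL_AVOID", "path_clear"):      "CRUISE",
}

class AvoidanceAutomaton:
    def __init__(self):
        self.state = "CRUISE"

    def on_event(self, event):
        """Event feedback from the ranging sonar drives mode switches;
        unknown (state, event) pairs leave the mode unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```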
Abstract:
This work studies dynamic task allocation in multiple unmanned underwater vehicle (UUV) systems. Building on an analysis of the system architecture and task-allocation mechanism, task allocation is modeled with timed (time-delay) Petri net theory and a new allocation strategy is proposed: the group coordination layer uses centralized allocation, with a master UUV dispatching tasks to each slave UUV in real time, while the supervision and planning layer uses fitness-based distributed allocation that fully exploits the intelligence of each heterogeneous UUV. Simulation results verify the soundness of the model and the effectiveness of the task-allocation strategy.
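A minimal sketch of the fitness-based distributed allocation idea follows; the fitness function is a placeholder, and the master/slave messaging described in the abstract is not shown:

```python
# Sketch of fitness-based task allocation among heterogeneous UUVs: each UUV
# scores a task with its own fitness function and the task goes to the best
# scorer. `fitness` is a placeholder, not the paper's actual measure.
def allocate(task, uuvs, fitness):
    """Assign `task` to the UUV whose fitness for it is highest."""
    return max(uuvs, key=lambda u: fitness(u, task))

# Example usage with a toy fitness: prefer close, lightly loaded vehicles.
if __name__ == "__main__":
    uuvs = [{"id": 1, "dist": 40.0, "load": 2},
            {"id": 2, "dist": 55.0, "load": 0}]
    task = {"pos": (10, 20)}
    best = allocate(task, uuvs, lambda u, t: -u["dist"] - 10.0 * u["load"])
    print(best["id"])  # -> 2
```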
Abstract:
This paper models flow-shop and job-shop flexible manufacturing systems (FMS) with timed event graphs, a subclass of Petri nets, and analyzes them theoretically. It gives conditions for deciding whether a schedule is feasible, characterizes the relationship between the number of pallets configured in the system and system productivity, and quantitatively describes the main performance measures, such as production cycle time, part sojourn time, and work-in-process inventory. These results provide a theoretical basis for the design and operation of such systems.
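For reference, the classical timed-event-graph result behind the pallet-count/productivity relationship mentioned above (a standard result about timed event graphs, not a formula quoted from the paper) ties the minimum cycle time to the critical circuit:

```latex
% Minimum cycle time C of a strongly connected timed event graph, where
% \Gamma is the set of elementary circuits, T(\gamma) is the total delay
% on circuit \gamma, and N(\gamma) is its token count (here, pallets):
\[
  C = \max_{\gamma \in \Gamma} \frac{T(\gamma)}{N(\gamma)},
  \qquad \text{throughput} = \frac{1}{C}.
\]
% Adding pallets (tokens) to the critical circuit lowers C until another
% circuit becomes critical, which is the pallet/productivity trade-off.
```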
Abstract:
This paper analyzes characteristics of timed Petri nets, such as firing rules, token availability times, and inhibitor arcs, together with the features of random failures in manufacturing processes. Several modeling methods corresponding to different ways of associating time are proposed, and single-machine manufacturing process models are built that account for different failure-detection modes and different job-handling policies. On this basis, timed colored Petri net models of complex manufacturing processes can be constructed with modular and hierarchical methods and converted into simulation models to further analyze how random machine failures affect manufacturing process performance.
Abstract:
Solar ultraviolet (UV) radiation at wavelengths less than 400 nm is an important source of energy for aeronomic processes throughout the solar system. Solar UV photons are absorbed in planetary atmospheres, as well as throughout the heliosphere, via photodissociation of molecules, photoionization of molecules and atoms, and photoexcitation, including resonance scattering. In this paper, the solar irradiance data measured by TIMED SEE, as well as solar proxies such as F10.7 and Mg II, thermosphere neutral densities from CHAMP measurements, and topside ionospheric plasma densities from DMSP, are used to analyze solar irradiance effects on the variability of the thermosphere and the ionosphere. First, thermosphere densities near 410 km altitude are analyzed for solar irradiance variability effects during the period 2002-2004. Correlations between the densities and the solar irradiances for different spectral lines and wavelength ranges reveal significantly different characteristics. The density correlates remarkably well with all the selected solar irradiances except the lower chromospheric O I (130.4 nm) emission. Among the chosen solar proxies, the Mg II core-to-wing ratio index, EUV (30-120 nm) and F10.7 show the highest correlations with the density for short-term (< ~27 days) variations. For both long- (> ~27 days) and short-term variations, linear correlation coefficients exhibit a decreasing trend from low latitudes towards high latitudes. The density variability can be effectively modeled (capturing 71% of the variance) using multiple solar irradiance indices, including F10.7, SEUV (the EUV 30-120 nm index), and SFUV (the FUV 120-193 nm index), in which a lag time of 1 day was used for both F10.7 and SEUV, and 5 days for SFUV. In our regression formulation SEUV has the largest contribution to the density variation (40%), with F10.7 having the next largest contribution (32%) and SFUV accounting for the rest (28%). Furthermore, a pronounced period of about 27.2 days (the mean period of the Sun's rotation) is present in both the density and solar irradiance data of 2003 and 2004, and a pronounced period of about 54.4 days (double the solar rotation period) is also revealed in 2004. However, soft X-ray and FUV irradiances did not present a pronounced 54.4-day period in 2004, in spite of their high correlation with the densities. The Ap index also shows 54-day periodicities in 2004, and magnetic activity, together with solar irradiance, significantly affects the 54-day variation in density. In addition, NRLMSISE00, DTM-2000 and JB2006 model predictions are compared with density measurements from CHAMP to assess their accuracy, and the results show that these models underestimate the response of the thermosphere to variations induced by solar rotation. Next, the equatorial topside ionospheric plasma densities Ni are analyzed for solar irradiance variability effects during the period 2002-2005. Linear correlations between Ni and the solar irradiances for different wavelength ranges reveal significantly different characteristics. XUV (0-35 nm) and EUV (115-130 nm) show higher correlations with Ni for the long-term variations, whereas EUV (35-115 nm) shows higher correlation for the short-term variations. Moreover, partial correlation analysis shows that the long-term variations of Ni are affected by both XUV (0-35 nm) and EUV (35-115 nm), with XUV (0-35 nm) playing the more important role; the short-term variations of Ni are mostly affected by EUV (35-115 nm).
Furthermore, a pronounced period of about 27 days is present in both Ni and solar irradiance data of 2003 and 2004, and a pronounced period of about 54 days is also revealed in 2004. Finally, prompted by previous studies that have suggested solar EUV radiation as a means of driving the semiannual variation, we investigate the intra-annual variation in thermosphere neutral density near 400 km during 2002-2005. The intra-annual variation, commonly referred to as the “semiannual variation”, is characterized by significant latitude structure, hemispheric asymmetries, and inter-annual variability. The magnitude of the maximum yearly difference, from the yearly minimum to the yearly maximum, varies by as much as 60% from year to year, and the phases of the minima and maxima also change by 20-40 days from year to year. Each annual harmonic of the intra-annual variation, namely, annual, semiannual, ter-annual and quatra-annual, exhibits a decreasing trend from 2002 through 2005 that is correlated with the decline in solar activity. In addition, some variations in these harmonics are correlated with geomagnetic activity, as represented by the daily mean value of Kp. Recent empirical models of the thermosphere are found to be deficient in capturing most of the latitude dependencies discovered in our data. In addition, the solar flux and geomagnetic activity proxies that we have employed do not capture some latitude and inter-annual variations detected in our data. It is possible that these variations are partly due to other effects, such as seasonal-latitudinal variations in turbopause altitude (and hence O/N2 composition) and ionosphere coupling processes that remain to be discovered in the context of influencing the intra-annual variations depicted here. Our results provide a new dataset to challenge and validate thermosphere-ionosphere general circulation models that seek to delineate the thermosphere intra-annual variation and to understand the various competing mechanisms that may contribute to its existence and variability. We furthermore suggest that the term “intra-annual” variation be adopted to describe the variability in thermosphere and ionosphere parameters that is well-captured through a superposition of annual, semiannual, ter-annual, and quatra-annual harmonic terms, and that “semiannual” be used strictly in reference to a pure 6-monthly sinusoidal variation. Moreover, we propose the term “intra-seasonal” to refer to those shorter-term variations that arise as residuals from the above Fourier representation.
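The multi-index regression summarized above can be written schematically. The lags and the relative contributions come from the abstract; the additive linear form and the coefficients a_i are illustrative assumptions, since the exact functional form is not stated:

```latex
% Schematic density regression: 1-day lags for F10.7 and S_EUV, a 5-day
% lag for S_FUV (from the abstract); the linear form is an assumption.
\[
  \rho(t) \approx a_0
    + a_1\, F_{10.7}(t - 1\,\mathrm{day})
    + a_2\, S_{\mathrm{EUV}}(t - 1\,\mathrm{day})
    + a_3\, S_{\mathrm{FUV}}(t - 5\,\mathrm{days})
\]
% This captures about 71% of the variance, with relative contributions of
% roughly 40% (S_EUV), 32% (F10.7), and 28% (S_FUV).
```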
Abstract:
A study is made of the recognition and transformation of figures by iterative arrays of finite state automata. A figure is a finite rectangular two-dimensional array of symbols. The iterative arrays considered are also finite, rectangular, and two-dimensional. The automata comprising any given array are called cells and are assumed to be isomorphic and to operate synchronously with the state of a cell at time t+1 being a function of the states of it and its four nearest neighbors at time t. At time t=0 each cell is placed in one of a fixed number of initial states. The pattern of initial states thus introduced represents the figure to be processed. The resulting sequence of array states represents a computation based on the input figure. If one waits for a specially designated cell to indicate acceptance or rejection of the figure, the array is said to be working on a recognition problem. If one waits for the array to come to a stable configuration representing an output figure, the array is said to be working on a transformation problem.
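The synchronous update rule described above is easy to state in code. Below is a minimal sketch; the transition function f and the boundary treatment are placeholders, not details taken from the paper:

```python
# One synchronous step of a finite, rectangular 2-D iterative array.
# Each cell's next state depends on its own state and the states of its
# four nearest neighbors at time t; cells outside the array see a fixed
# boundary state. The local rule `f` is a placeholder.
BOUNDARY = 0

def step(grid, f):
    """Apply f(center, north, south, west, east) to every cell at once."""
    rows, cols = len(grid), len(grid[0])

    def state(i, j):
        return grid[i][j] if 0 <= i < rows and 0 <= j < cols else BOUNDARY

    return [[f(grid[i][j],
               state(i - 1, j), state(i + 1, j),
               state(i, j - 1), state(i, j + 1))
             for j in range(cols)]
            for i in range(rows)]

def transform(grid, f, max_steps):
    """Run until the array reaches a stable configuration (a 'transformation'
    result in the paper's sense) or the step budget is exhausted."""
    for _ in range(max_steps):
        nxt = step(grid, f)
        if nxt == grid:
            return nxt
        grid = nxt
    return grid
```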
Abstract:
Predictability -- the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements -- is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems -- possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing -- cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems -- not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the Cleopatra programming language. Cleopatra features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. Cleopatra is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of Cleopatra has been in use as a specification and simulation language for embedded time-critical robotic processes.
Abstract:
Efficient storage of types within a compiler is necessary to avoid large blowups in space during compilation. Recursive types in particular are important to consider, as naive representations of recursive types may be arbitrarily larger than necessary through unfolding. Hash-consing has been used to efficiently store non-recursive types. Deterministic finite automata techniques have been used to efficiently perform various operations on recursive types. We present a new system for storing recursive types combining hash-consing and deterministic finite automata techniques. The space requirements are linear in the number of distinct types. Both update and lookup operations take polynomial time and linear space and type equality can be checked in constant time once both types are in the system.
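A minimal sketch of the hash-consing half of such a system is shown below: structurally equal type terms are interned to a single node, so equality after interning is a constant-time identity check. The DFA-based folding of recursive types described in the abstract is not shown, and all names here are illustrative:

```python
# Hash-consing of type terms: a global table maps a structural key
# (constructor, interned argument nodes) to one canonical node, so space
# is linear in the number of distinct types and equality is `is`.
_table = {}

def intern(constructor, *args):
    """Return the unique node for constructor(*args); args must already
    be interned nodes."""
    key = (constructor, args)
    node = _table.get(key)
    if node is None:
        node = key          # the key tuple doubles as the canonical node
        _table[key] = node
    return node

int_t  = intern("int")
pair_a = intern("pair", int_t, int_t)
pair_b = intern("pair", intern("int"), intern("int"))
assert pair_a is pair_b     # constant-time equality via identity
```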
Abstract:
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We consider an Ambient Calculus where ambients transport and exchange programs rather than just inert data. We propose different senses in which such a calculus can be said to be polymorphically typed, and accordingly design a polymorphic type system for it. Our type system assigns types to embedded programs and what we call behaviors to processes; a denotational semantics of behaviors is then proposed, here called trace semantics, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our polymorphically typed calculus. Based on techniques borrowed from finite automata theory, type-checking of fully type-annotated processes is shown to be decidable; the time complexity of our decision procedure is exponential (a worst-case bound that is arguably not encountered in practice). Our polymorphically typed calculus is a conservative extension of the typed Ambient Calculus originally proposed by Cardelli and Gordon.
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a timed muscle activation pattern much different from those of similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
Much sensory-motor behavior develops through imitation, as during the learning of handwriting by children. Such complex sequential acts are broken down into distinct motor control synergies, or muscle groups, whose activities overlap in time to generate continuous, curved movements that obey an inverse relation between curvature and speed. The Adaptive Vector Integration to Endpoint (AVITEWRITE) model of Grossberg and Paine (2000) proposed how such complex movements may be learned through attentive imitation. The model suggests how frontal, parietal, and motor cortical mechanisms, such as difference vector encoding, under volitional control from the basal ganglia, interact with adaptively-timed, predictive cerebellar learning during movement imitation and predictive performance. Key psychophysical and neural data about learning to make curved movements were simulated, including a decrease in writing time as learning progresses; generation of unimodal, bell-shaped velocity profiles for each movement synergy; size scaling with isochrony, and speed scaling with preservation of the letter shape and the shapes of the velocity profiles; an inverse relation between curvature and tangential velocity; and a Two-Thirds Power Law relation between angular velocity and curvature. However, the model learned from letter trajectories of only one subject, and only qualitative kinematic comparisons were made with previously published human data. The present work describes a quantitative test of AVITEWRITE through direct comparison of a corpus of human handwriting data with the model's performance when it learns by tracing human trajectories. The results show that model performance was variable across subjects, with an average correlation between the model and human data of 89 ± 10%. The present data from simulations using the AVITEWRITE model highlight some of its strengths while focusing attention on areas, such as novel shape learning in children, where all models of handwriting and learning of other complex sensory-motor skills would benefit from further research.
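For reference, the Two-Thirds Power Law mentioned above (a standard statement of the law, not a formula quoted from this abstract) relates angular velocity A(t) to curvature C(t):

```latex
% Two-Thirds Power Law: angular velocity scales as curvature to the 2/3
% power (K is a movement-specific gain). Since A(t) = v(t) C(t), this is
% equivalent to the inverse curvature-speed relation noted above:
\[
  A(t) = K\, C(t)^{2/3}
  \quad\Longleftrightarrow\quad
  v(t) = K\, C(t)^{-1/3}
\]
```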
Abstract:
Before choosing, it helps to know both the expected value signaled by a predictive cue and the associated uncertainty that the reward will be forthcoming. Recently, Fiorillo et al. (2003) found that the dopamine (DA) neurons of the SNc exhibit sustained responses related to the uncertainty that a cue will be followed by reward, in addition to phasic responses related to reward prediction errors (RPEs). This suggests that cue-dependent anticipations of the timing, magnitude, and uncertainty of rewards are learned and reflected in components of the DA signals broadcast by SNc neurons. What is the minimal local circuit model that can explain such multifaceted reward-related learning? A new computational model shows how learned uncertainty responses emerge robustly on single trials along with phasic RPE responses, such that both types of DA responses exhibit the empirically observed dependence on conditional probability, expected value of reward, and time since onset of the reward-predicting cue. The model includes three major pathways for computing: immediate expected values of cues, timed predictions of reward magnitudes (and RPEs), and the uncertainty associated with these predictions. The first two model pathways refine those previously modeled by Brown et al. (1999). A third, newly modeled, pathway is formed by medium spiny projection neurons (MSPNs) of the matrix compartment of the striatum, whose axons co-release GABA and a neuropeptide, substance P, both at synapses with GABAergic neurons in the SNr and with the dendrites (in SNr) of DA neurons whose somas are in ventral SNc. Co-release enables efficient computation of sustained DA uncertainty responses that are a non-monotonic function of the conditional probability that a reward will follow the cue. The new model's incorporation of a striatal microcircuit allowed it to reveal that variability in striatal cholinergic transmission can explain observed differences, between monkeys, in the amplitude of the non-monotonic uncertainty function. Involvement of matrix MSPNs and striatal cholinergic transmission implies a relation between uncertainty in the cue-reward contingency and the action-selection functions of the basal ganglia. The model synthesizes anatomical, electrophysiological and behavioral data regarding the midbrain DA system in a novel way, by relating the ability to compute uncertainty, in parallel with other aspects of reward contingencies, to the unique distribution of SP inputs in ventral SN.
Abstract:
Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and to medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.