920 results for Q-switched
Abstract:
To optimize the operating characteristics of a prism resonator under Q-switched operation, Jones-matrix optics is used to analyze the resonator's properties. For prisms of different materials, the variable output-coupling ratio, the extinction ratio of the Pockels cell, and the tuning of the output-coupling ratio are compared. The results show that the Pockels cell achieves a higher extinction ratio at the quarter-wave voltage than at the half-wave voltage, making it better suited to Q-switched operation.
Abstract:
A laser-diode (LD) pumped multi-wavelength continuous-wave laser and a passively Q-switched solid-state laser are reported. The gain medium is a novel Yb^3+-doped crystal, Yb^3+:Lu2SiO5 (Yb:LSO). At an absorbed pump power of 2.57 W, the maximum continuous-wave output power was 490 mW, with a slope efficiency of 22.2%, an optical-to-optical conversion efficiency of 14.2%, a lasing threshold of 299 mW, and an output wavelength of 1084 nm. In multi-wavelength operation, the tuning range was 1034-1085 nm. With an InGaAs saturable-absorber mirror used to obtain Q-switched output, the slope efficiency was 3.0% at a lasing wavelength of 1058 nm, with pulse repetition rates of 25-39 kHz.
Abstract:
We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
Abstract:
This paper is devoted to the investigation of nonnegative solutions, and of the stability and asymptotic properties of the solutions, of fractional differential dynamic linear time-varying systems involving delayed dynamics. The dynamic systems are described based on q-calculus and Caputo fractional derivatives of any order.
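The abstract names two calculi without reproducing their definitions; for reference, the standard forms are the Jackson q-derivative and the Caputo fractional derivative of order α (with n the smallest integer satisfying n - 1 < α ≤ n):

```latex
D_q f(t) = \frac{f(qt) - f(t)}{(q-1)\,t}, \qquad t \neq 0,\; q \neq 1,
```

```latex
{}^{C}\!D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{\,n-\alpha-1} f^{(n)}(s)\, ds, \qquad n-1 < \alpha \le n .
```

The Caputo form is often preferred over the Riemann-Liouville derivative in dynamic-systems work because it admits initial conditions stated in terms of ordinary derivatives f(0), f'(0), ….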
Abstract:
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out by the agents concurrently. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs which lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the global optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
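The round-robin idea above can be sketched in a few lines: only one agent acts and updates per time step, so each agent sees a stationary environment during its turn. The following is a minimal tabular sketch on a hypothetical toy task (two agents walking to a shared goal cell), not the authors' D-RR-QL implementation; it omits the state-action vetoes and the message-passing coordination step.

```python
# Minimal round-robin Q-learning sketch (NOT the authors' D-RR-QL):
# two independent tabular learners take strictly alternating turns on a
# hypothetical 1-D toy task where both agents must reach the goal cell.
import random

N_CELLS = 5          # positions 0..4; cell 4 is the shared goal
ACTIONS = [-1, +1]   # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(pos, action):
    """Apply one move; reward 1.0 whenever the agent occupies the goal cell."""
    new = min(max(pos + action, 0), N_CELLS - 1)
    return new, (1.0 if new == N_CELLS - 1 else 0.0)

def greedy(qvals, rng):
    """Argmax over action values with random tie-breaking."""
    best = max(qvals)
    return rng.choice([i for i, q in enumerate(qvals) if q == best])

def train(episodes=500, horizon=30, seed=0):
    rng = random.Random(seed)
    # One independent Q-table per agent: Q[agent][state][action_index]
    Q = [[[0.0, 0.0] for _ in range(N_CELLS)] for _ in range(2)]
    for _ in range(episodes):
        pos = [0, 0]                 # both agents start at cell 0
        for t in range(horizon):
            agent = t % 2            # round-robin: exactly one agent acts per step
            s = pos[agent]
            a = rng.randrange(2) if rng.random() < EPS else greedy(Q[agent][s], rng)
            s2, r = step(s, ACTIONS[a])
            pos[agent] = s2
            # Standard one-step Q-learning update for the acting agent only
            Q[agent][s][a] += ALPHA * (r + GAMMA * max(Q[agent][s2]) - Q[agent][s][a])
            if all(p == N_CELLS - 1 for p in pos):
                break                # episode ends once both agents reach the goal
    return Q

Q = train()
```

Because the turns never overlap, each agent's update sees a fixed world during its step; this is the stationarity argument the abstract makes, here reduced to its simplest form.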
Abstract:
Estimates of the Q/B ratio and parameters of equations to 'predict' Q/B values for 116 fish stocks in the Gulf of Salamanca, Colombia are presented. A compilation of these estimates available for Caribbean Sea fishes (264 stocks) is also provided for comparison purposes. General trends in the value of Q/B resulting from differences in the equation and parameter values used are briefly discussed.