860 results for Matriz Input-Output
Abstract:
This paper uses dissipativity theory to provide the system-theoretic description of a basic oscillation mechanism. Elementary input-output tools are then used to prove the existence and stability of limit cycles in these "oscillators". The main benefit of the proposed approach is that it is well suited for the analysis and design of interconnections, thus providing a valuable mathematical tool for the study of networks of coupled oscillators.
Abstract:
This article presents a framework that describes formally the underlying unsteady and conjugate heat transfer processes that are undergone in thermodynamic systems, along with results from its application to the characterization of thermodynamic losses due to irreversible heat transfer during reciprocating compression and expansion processes in a gas spring. Specifically, a heat transfer model is proposed that solves the one-dimensional unsteady heat conduction equation in the solid simultaneously with the first law in the gas phase, with an imposed heat transfer coefficient taken from suitable experiments in gas springs. Even at low volumetric compression ratios (of 2.5), notable effects of unsteady heat transfer to the solid walls are revealed, with thermally induced thermodynamic cycle (work) losses of up to 14% (relative to the work input/output in equivalent adiabatic and reversible compression/expansion processes) at intermediate Péclet numbers (i.e., normalized frequencies) when unfavorable solid and gas materials are selected, and closer to 10-12% for more common material choices. The contribution of the solid toward these values, through the conjugate variations attributed to the thickness of the cylinder wall, is about 8 and 2 percentage points, respectively, showing a maximum at intermediate thicknesses. At higher compression ratios (of 6) a 19% worst-case loss is reported for common materials. These results strongly suggest that in designing high-efficiency reciprocating machines the full conjugate and unsteady problem must be considered and that the role of the solid in determining performance cannot, in general, be neglected. © 2014 Richard Mathie, Christos N. Markides, and Alexander J. White. Published with License by Taylor & Francis.
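The conduction side of the model described above, the one-dimensional unsteady heat conduction equation in the solid wall, can be sketched with a simple explicit finite-difference scheme; a minimal illustration with made-up material properties and boundary conditions, not the paper's coupled gas-solid solver:

```python
import numpy as np

# Explicit (FTCS) finite-difference solver for the 1-D unsteady heat
# conduction equation dT/dt = alpha * d2T/dx2 in a solid wall, with an
# imposed gas-side temperature on the left face and an insulated right
# face. All property values are illustrative, not from the paper.

def solve_wall(alpha=1e-5, L=0.01, nx=51, t_end=5.0,
               T_init=300.0, T_gas=350.0):
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha      # FTCS stability requires dt <= dx^2 / (2 alpha)
    T = np.full(nx, T_init)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        T[0] = T_gas              # imposed gas-side wall temperature
        T[-1] = T[-2]             # insulated outer face (zero gradient)
    return T

wall = solve_wall()
# the profile decays monotonically from the gas-side face into the wall
```

In the paper this conduction problem is solved together with the first law in the gas phase; the sketch above only shows the wall side with a fixed gas-side temperature.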
Abstract:
Linear dimensionality reduction methods are a cornerstone of analyzing high-dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight into some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a black-box, objective-agnostic numerical technology. © 2015 John P. Cunningham and Zoubin Ghahramani.
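The survey's framing, each linear dimensionality reduction method as an optimization program that returns a low-dimensional projection, can be illustrated with the best-known member of the family; a minimal PCA sketch (the data and names are illustrative, not from the paper):

```python
import numpy as np

# PCA as an optimization over projections: find an orthonormal basis M
# (d x k) maximizing the variance of the projected data, solved in closed
# form by the top-k eigenvectors of the sample covariance.

def pca_projection(X, k):
    """X: (n, d) data matrix; returns a (d, k) orthonormal projection basis."""
    Xc = X - X.mean(axis=0)               # center the data
    C = Xc.T @ Xc / (len(X) - 1)          # sample covariance, (d, d)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]        # top-k eigenvectors

rng = np.random.default_rng(0)
# synthetic data whose first coordinate carries most of the variance
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
M = pca_projection(X, 1)
# M is orthonormal (M.T @ M is the k x k identity) and aligns with the
# high-variance direction
```

Other methods in the survey swap in different objectives over the same manifold of orthonormal matrices; this closed-form eigenvector solution is the special case the survey uses as a baseline.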
Abstract:
For a four-port microracetrack channel drop filter, unexpected transmission characteristics due to strong dispersive coupling are demonstrated by the light tunneling between the input-output waveguides and the resonator, where a large dropping transmission at off-resonance wavelengths is observed by finite-difference time-domain simulation. This causes a severe decline in the extinction ratio and finesse. An appropriate decrease of the coupling strength is found to suppress the dispersive coupling and greatly increase the extinction ratio and finesse; such a decreased coupling strength can be realized with an asymmetrical coupling waveguide structure. In addition, the profile of the coupling dispersion in the transmission spectra can be predicted by a coupled-mode theory analysis of an equivalent system consisting of two coupled straight waveguides. The effects of the structure parameters on the transmission spectra obtained by this method agree well with the numerical results, which is useful for avoiding the strong dispersive-coupling region in the filter design. © 2007 Optical Society of America.
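The equivalent system of two coupled straight waveguides mentioned above can be described by standard coupled-mode theory for a phase-matched directional coupler, in which the cross-coupled power oscillates as sin²(κz); a minimal numerical sketch (the coupling coefficient below is illustrative, not a value from the paper):

```python
import numpy as np

# Coupled-mode theory for two identical (phase-matched) parallel waveguides:
# with all power launched into guide 1, the power in guide 2 after a
# coupling length z is P2(z) = sin^2(kappa * z), and P1(z) = cos^2(kappa * z),
# where kappa is the coupling coefficient.

def coupler_powers(kappa, z):
    p2 = np.sin(kappa * z) ** 2   # cross-coupled power
    p1 = np.cos(kappa * z) ** 2   # power remaining in the input guide
    return p1, p2

kappa = 2.0e3                      # coupling coefficient, 1/m (illustrative)
z = np.linspace(0.0, np.pi / kappa, 101)
p1, p2 = coupler_powers(kappa, z)
# full power transfer occurs at the coupling length Lc = pi / (2 * kappa)
```

Power is conserved (P1 + P2 = 1) in this lossless two-mode picture; the paper's analysis adds the wavelength dependence of κ to predict the dispersive-coupling profile.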
Abstract:
A design for an IO block array in a tile-based FPGA is presented. Corresponding with the characteristics of the FPGA, each IO cell is composed of a signal path, local routing pool and configurable input/output buffers. Shared programmable registers in the signal path can be configured for the function of JTAG, without specific boundary scan registers/latches, saving layout area. The local routing pool increases the flexibility of routing and the routability of the whole FPGA. An auxiliary power supply is adopted to increase the performance of the IO buffers at different configured IO standards. The organization of the IO block array is described in an architecture description file, from which the array layout can be accomplished through use of an automated layout assembly tool. This design strategy facilitates the design of FPGAs with different capacities or architectures in an FPGA family series. The bond-out schemes of the same FPGA chip in different packages are also considered. The layout is based on SMIC 0.13 μm logic 1P8M salicide 1.2/2.5 V CMOS technology. Our performance is comparable with commercial SRAM-based FPGAs which use a similar process.
Abstract:
An improved 2×2 silicon-on-insulator Mach-Zehnder thermo-optical switch is designed and fabricated, based on strongly guided multimode interference couplers and single-mode phase-shifting arms. The multimode interference couplers and input/output waveguides are deeply etched to improve coupler performance and coupler-waveguide coupling efficiency, while shallow etching is used in the phase-shifting arms to guarantee the single-mode property. The strongly guided coupler presents an attractive uniformity of about 0.03 dB and a low propagation loss of -0.6 dB. The 2×2 switch shows an insertion loss as low as -6.8 dB, including a fiber-waveguide coupling loss of -4.3 dB, and its response time is measured to be as short as 6.8 μs; both are much better than our previous results.
Abstract:
A novel design of a 100 GHz-spaced 16-channel arrayed-waveguide grating (AWG) based on a silica-on-silicon chip is reported. The design is achieved by adding a Y-branch to the AWG and arranging the input/output channels in a neat row, so that the whole configuration can be aligned and packaged using only one fiber array. This configuration decreases the device's size, enlarges the minimum radius of curvature, saves time on polishing and alignment, and reduces the chip's fabrication cost.
Abstract:
Single photon Sagnac interferometry as a probe to macroscopic quantum mechanics is considered at the theoretical level. For a freely moving macroscopic quantum mirror susceptible to radiation pressure force inside a Sagnac interferometer, a careful analysis of the input-output relation reveals that the particle spectrum readout at the bright and dark ports encode information concerning the noncommutativity of position and momentum of the macroscopic mirror. A feasible experimental scheme to probe the commutation relation of a macroscopic quantum mirror is outlined to explore the possible frontier between classical and quantum regimes. In the Appendix, the case of Michelson interferometry as a feasible probe is also sketched.
Abstract:
The research model of the system is presented, and three key issues that must be considered in system control and design are identified: stability, transparency, and time-delay handling. Four main stability-analysis methods are described: Lyapunov stability, input-output stability, passivity-based stability, and event-based stability, and the strengths and limitations of these methods are summarized. Several major control strategies are then presented, and the advantages and disadvantages of existing control methods are pointed out. Finally, the main directions for further research are proposed.
Abstract:
This paper studies leader-follower formation control for multiple UUVs (unmanned underwater vehicles). A kinematic model of the system is established in the UUV body-fixed coordinate frame; this model improves on the kinematic model in Cartesian coordinates and avoids the singularities that arise in polar coordinates. Input-output feedback linearization of this model yields a stable formation controller. In addition, to narrow the tuning range of the control parameters in the formation control law, an auxiliary algorithm is proposed, on the basis of which the effective range of the parameters is analyzed. The formation control law is validated on a multi-UUV digital simulation platform, confirming the effectiveness of the improved kinematic model and the control law.
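The abstract does not give the model equations; as an illustration of the input-output feedback linearization step, here is a minimal sketch on a planar unicycle-type kinematic model with a look-ahead output point (a common textbook simplification, not the authors' body-frame UUV model):

```python
import numpy as np

# Input-output feedback linearization on a planar unicycle-type kinematic
# model. The output is a point at distance d ahead of the vehicle; its
# velocity is linear in the inputs (v, w) through an invertible decoupling
# matrix B(theta), so a proportional law drives it to a reference point.

def simulate(p_ref, d=0.5, k=1.0, dt=0.01, steps=1000):
    x, y, th = 0.0, 0.0, 0.0                            # initial pose
    for _ in range(steps):
        p = np.array([x + d * np.cos(th), y + d * np.sin(th)])
        B = np.array([[np.cos(th), -d * np.sin(th)],
                      [np.sin(th),  d * np.cos(th)]])   # det(B) = d != 0
        v, w = np.linalg.solve(B, k * (p_ref - p))      # linearizing control
        x += v * np.cos(th) * dt                        # Euler integration
        y += v * np.sin(th) * dt
        th += w * dt
    return np.array([x + d * np.cos(th), y + d * np.sin(th)])

p_final = simulate(np.array([2.0, 1.0]))
# the linearized output dynamics are p_dot = k * (p_ref - p), so the
# output point converges exponentially to the reference
```

The look-ahead offset d plays the role of avoiding the singular direct control of position; the paper applies the same linearization idea to its improved body-frame model.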
Abstract:
This paper addresses the trajectory tracking control problem of a multi-link flexible manipulator, discussing dynamic modeling, control-system structure design, and a robust adaptive control algorithm. An approximate dynamic equation of the flexible manipulator is obtained using the assumed-modes method. Through an analysis of the manipulator's dynamic characteristics, an equivalent dynamic model is established, on the basis of which a robust adaptive control algorithm is proposed, and simulation results are given.
Abstract:
Viewing the city as a complex system with demanding requirements, this thesis advances a new concept: an urban sustainable-development strategy is a close harmonization and fusion of the urban economy, the geo-environment, and technology and capital, whose optimum region lies in their mutual overlap. This quantitatively demarcates the optimum value region for urban sustainable development and establishes a theoretical foundation for describing and analyzing sustainable-development strategy. A series of cause-effect models, an analysis-situs (topological) model, and a flux model for the urban system, together with a recognition mode, are established using the System Dynamics approach, which can distinguish urban states by the polarity of their entropy flows. At the same time, the matter, energy, and information flows that arise in the course of urban development are analyzed on the basis of the input/output (I/O) relationships of the urban economy, and a new type of I/O relationship, a new resource-environment account in which both resource and environmental factors are considered, is established. Together these lay a theoretical foundation for resource and environmental economics and for the quantitative interaction between urban development and the geo-environment, and provide a new approach to analyzing the national economy and urban sustainable development. Based on an analysis of the connection between the resource-environmental structure of the geo-environment and urban economic development, the Geoenvironmental Carrying Capability (GeCC) is analyzed. Furthermore, a series of definitions and formulas for the Gross Carrying Capability (GCC), Structure Carrying Capability (SCC) and Impulse Carrying Capability (ICC) is derived, which can be applied to evaluate both the quality and the capacity of the geo-environment and, on that basis, to determine the scale and speed of urban development.
A demonstrative case study is carried out for Beihai city (Guangxi province, PRC), and the quantitative relationships between urban development and the geo-environment are studied through the I/O relationships of the urban economy, as follows:
· the relationships between urban economic development and land use, as well as the consumption of groundwater, metallic minerals, mineral energy sources, non-metallic minerals and other geological resources;
· the relationships between the urban economy and waste outputs such as the industrial "three wastes", dust, refuse and domestic wastewater, as well as the constraints that resource-environmental factors and technology and capital place on urban growth;
· optimization and control analysis of the interaction between the urban economy and its geo-environment, identifying the sensitive factors, and their ordering, among urban geo-environmental resources, wastes and economic sectors, which can be applied to determine an urban industrial structure, scale and growth rate matched to the geo-environment and to technology and capital;
· a suggested sustainable-development strategy for the city.
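The input/output (I/O) relationships of the urban economy invoked above follow the standard Leontief framework, in which gross output x satisfies x = Ax + d for a technical-coefficient matrix A and final-demand vector d; a minimal numerical sketch (the three sectors and all coefficients are illustrative, not Beihai data):

```python
import numpy as np

# Leontief input-output model: x = A x + d, hence x = (I - A)^{-1} d.
# A[i, j] is the input from sector i required per unit output of sector j;
# d is final demand. The sectors and coefficients below are illustrative.

A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.20],
              [0.05, 0.25, 0.10]])
d = np.array([100.0, 150.0, 80.0])      # final demand by sector

x = np.linalg.solve(np.eye(3) - A, d)   # gross output required of each sector
# each sector's gross output exceeds its final demand, the difference
# being intermediate consumption by the other sectors
```

Resource and environmental accounts of the kind the thesis proposes extend A and d with rows for resource inputs (water, minerals, energy) and waste outputs per unit of sectoral output.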
Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
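A minimal sketch of the kind of approximation scheme described above: learning a one-dimensional input-output mapping from examples with Gaussian radial basis functions and a regularization (ridge) penalty on the coefficients. The centers, width and penalty are illustrative choices, not the paper's GRBF construction:

```python
import numpy as np

# Regularized radial-basis-function approximation of a 1-D mapping:
#   f(x) ~= sum_i c_i * exp(-(x - t_i)^2 / (2 s^2))
# with coefficients c obtained from ridge-regularized least squares,
# a discrete stand-in for the regularization functional.

def rbf_fit(x, y, centers, s=0.15, lam=1e-8):
    G = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))
    return np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)

def rbf_eval(xq, centers, c, s=0.15):
    G = np.exp(-(xq[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))
    return G @ c

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)              # target input-output mapping
centers = np.linspace(0.0, 1.0, 12)    # fewer centers than examples
c = rbf_fit(x, y, centers)
yhat = rbf_eval(x, centers, c)
# the smooth basis reconstructs the mapping closely at the sample points
```

Using fewer centers than examples, rather than one per data point, is what distinguishes this approximation setting from the strict-interpolation use of radial basis functions mentioned in the abstract.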
Abstract:
Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
Abstract:
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:
- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.
The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.