861 results for: input output MathML LaTeX MathJax fMath JMathTeX OCR digital ink


Relevance: 100.00%

Abstract:

This paper addresses the trajectory control problem for multi-link flexible manipulators, discussing dynamic modeling, control system architecture design, and a robust adaptive control algorithm. An approximate dynamic model of the flexible manipulator is derived using the assumed modes method; through analysis of the manipulator's dynamic characteristics, an equivalent dynamic model is established, on the basis of which a robust adaptive control algorithm is proposed. Simulation results are presented.
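For orientation, the assumed modes method typically produces rigid-flexible coupled equations of motion of the following standard form (a generic textbook sketch; the notation is illustrative and not taken from the paper):

```latex
% Generic assumed-modes dynamics for a flexible manipulator,
% with q = [\theta; \delta] stacking joint angles and modal coordinates:
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + K q = B\,\tau
% M: inertia matrix; C: Coriolis/centrifugal terms;
% K: modal stiffness (zero block for the rigid coordinates);
% B: input map; \tau: joint torques.
```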

Relevance: 100.00%

Abstract:

Starting from the view of the city as an intricate system with high demands, this thesis advances a new concept: an urban sustainable development strategy is a close harmonization and amalgamation of the urban economy, the geo-environment and tech-capital, whose optimum field lies in their mutual matching region. This quantitatively demarcates the optimum value field of urban sustainable development and establishes an academic foundation for describing and analyzing such strategies. Using the System Dynamics approach, a series of cause-effect, analysis-situs and flux models of the urban system, together with a recognition mode, are established; these can distinguish urban states by the polarity of their entropy flows. At the same time, the matter, energy and information flows arising in the course of urban development are analyzed on the basis of the input/output (I/O) relationships of the urban economy, and a new type of I/O relationship, a resources-environment account in which both resource and environment factors are considered, is established. Together this lays a theoretical foundation for resource and environment economics and for the quantitative interaction between urban development and the geo-environment, and gives a new approach to analyzing the national economy and urban sustainable development. Building on an analysis of the connection between the resource-environmental structure of the geo-environment and urban economic development, the Geoenvironmental Carrying Capability (GeCC) is analyzed. Furthermore, a series of definitions and formulas for the Gross Carrying Capability (GCC), Structure Carrying Capability (SCC) and Impulse Carrying Capability (ICC) is obtained; these can be applied to evaluate both the quality and the capacity of the geo-environment and thereby to determine the scale and speed of urban development. A demonstrative study is carried out for Beihai city (Guangxi province, PRC), in which the numerical relationships between urban development and the geo-environment are studied through the I/O relationships of the urban economy, covering:

· the relationships between urban economic development and land use, as well as the consumption of underground water, metal minerals, mineral energy sources, non-metallic minerals and other geological resources;
· the relationships between the urban economy and waste outputs such as the industrial "three wastes", dust, rubbish and domestic wastewater, together with the restricting effect of resource-environmental factors and tech-capital on urban growth;
· optimization and control analysis of the interaction between the urban economy and its geo-environment, fixing the sensitive factors, and their ordering, among geoenvironmental resources, wastes and economic sectors, which can be used to determine an urban industrial structure, scale and growth rate matched to the geo-environment and tech-capital;
· a suggested sustainable development strategy for the city.
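For context, the I/O relationships invoked above are conventionally formalized by the Leontief quantity model (a standard textbook sketch, not notation from the thesis); extended resource-environment accounts of the kind proposed append resource-use and emission rows so that environmental flows scale with sectoral output:

```latex
% Leontief input-output model:
% x: gross output vector, A: technical coefficient matrix, y: final demand.
x = A x + y \quad\Longrightarrow\quad x = (I - A)^{-1} y
```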

Relevance: 100.00%

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
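The regularization framework referred to has a standard form worth recalling (a sketch in generic notation consistent with the RBF literature, omitting the null-space term of the stabilizer):

```latex
% Tikhonov-style functional over examples (x_i, y_i):
H[f] = \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^2 + \lambda\,\|P f\|^2
% For a radial stabilizer P, the minimizer is a radial basis expansion:
f(x) = \sum_{i=1}^{N} c_i\, G\bigl(\|x - x_i\|\bigr)
% GRBF networks use n movable centers t_j with n \le N:
f(x) = \sum_{j=1}^{n} c_j\, G\bigl(\|x - t_j\|\bigr)
```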

Relevance: 100.00%

Abstract:

Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
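One common way to formalize the two extensions, hedged here since the note's exact functionals are not reproduced, is to reweight the data term for unreliable examples and to add a penalty that repels the solution from forbidden values:

```latex
% Unreliable examples: weight each datum by a reliability factor \beta_i.
H[f] = \sum_{i} \beta_i \bigl(y_i - f(x_i)\bigr)^2 + \lambda\,\|P f\|^2
% Negative examples: penalize f for entering a forbidden region F,
% with \Phi positive on F and zero elsewhere.
H'[f] = H[f] + \mu \sum_{k} \Phi\bigl(f(x_k)\bigr)
```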

Relevance: 100.00%

Abstract:

I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:

- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.

The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
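As a concrete illustration of the kind of module the theory posits, here is a minimal Gaussian radial-basis approximator in NumPy (an illustrative sketch of the general scheme, not the authors' HyperBF implementation; the choice of centers, width and ridge term are assumptions):

```python
import numpy as np

def fit_rbf(X, y, centers, sigma=1.0, ridge=1e-6):
    """Fit output weights of a Gaussian radial basis network.

    X: (N, d) example inputs; y: (N,) targets;
    centers: (n, d) basis centers (n <= N), e.g. a subset of X.
    """
    # Pairwise squared distances between inputs and centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma**2))          # (N, n) design matrix
    # Regularized least squares for the combination coefficients.
    c = np.linalg.solve(G.T @ G + ridge * np.eye(len(centers)), G.T @ y)
    return c

def predict_rbf(Xq, centers, c, sigma=1.0):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ c

# Usage: approximate a 2-D mapping from 200 noisy examples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(200)
centers = X[:40]                  # prototypes drawn from the data
c = fit_rbf(X, y, centers, sigma=0.5)
print(predict_rbf(X[:5], centers, c, sigma=0.5), y[:5])
```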

Relevance: 100.00%

Abstract:

M. H. Lee and S. M. Garrett, "Qualitative modelling of unknown interface behaviour," International Journal of Human Computer Studies, vol. 53, no. 4, pp. 493-515, 2000.

Relevance: 100.00%

Abstract:

In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability for the nondelayed subsystem. By using this Lyapunov function, a controller is deduced. Such a controller utilizes an input-output description of the original system, a circumstance that facilitates practical applications of the proposed approach.
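For a delayed system of the usual form, a Lyapunov-Krasovskii candidate illustrates the kind of construction involved (a generic sketch; the paper's candidate may differ):

```latex
% System: \dot{x}(t) = A_0 x(t) + A_1 x(t - h) + B u(t).
V(x_t) = x(t)^{\top} P\, x(t) + \int_{t-h}^{t} x(s)^{\top} S\, x(s)\, ds,
\qquad P = P^{\top} \succ 0,\quad S = S^{\top} \succ 0
% Requiring \dot{V} \le 0 along trajectories yields conditions on (P, S)
% without assuming stability of the delay-free subsystem.
```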

Relevance: 100.00%

Abstract:

A nonparametric probability estimation procedure using the fuzzy ARTMAP neural network is described here. Because the procedure makes no a priori assumptions about underlying probability distributions, it yields accurate estimates on a wide variety of prediction tasks. Fuzzy ARTMAP is used to perform probability estimation in two different modes. In a 'slow-learning' mode, input-output associations change slowly, with the strength of each association computing a conditional probability estimate. In 'max-nodes' mode, a fixed number of categories are coded during an initial fast-learning interval, and the weights are then tuned by slow learning. Simulations illustrate system performance on tasks in which varying numbers of clusters in the set of input vectors are mapped to a given class.
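The 'slow-learning' idea, association strengths converging to conditional probabilities, can be shown in miniature (a toy sketch of the principle only, not fuzzy ARTMAP itself; category assignments are taken as given):

```python
import numpy as np

# Toy slow learning: w[j, k] tracks P(class k | category j).
# With a small learning rate, each weight converges to the
# empirical conditional probability of its class given the category.
rng = np.random.default_rng(1)
n_categories, n_classes, rate = 3, 2, 0.01
w = np.full((n_categories, n_classes), 1.0 / n_classes)

true_p = np.array([[0.9, 0.1], [0.3, 0.7], [0.5, 0.5]])
for _ in range(20000):
    j = rng.integers(n_categories)             # active category node
    k = rng.choice(n_classes, p=true_p[j])     # observed class label
    target = np.eye(n_classes)[k]
    w[j] += rate * (target - w[j])             # slow associative update

print(np.round(w, 2))   # approaches true_p row by row
```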

Relevance: 100.00%

Abstract:

Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant. Redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify clusters of training data and thus reduce both the training data and the training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping. The demonstration shows a six-fold decrease in training time with little or no loss of accuracy on the evaluation data.
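The reduction step can be illustrated generically: cluster the input/output pairs and keep one representative per cluster (a sketch using k-means as a stand-in, whereas the paper uses ART2-A and the Generalized Equality Classifier):

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_training_set(X, y, n_clusters):
    """Keep one representative input/output pair per cluster."""
    Z = np.hstack([X, y.reshape(len(y), -1)])   # cluster joint (input, output)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z)
    keep = []
    for j in range(n_clusters):
        members = np.flatnonzero(km.labels_ == j)
        # Representative: the member closest to the cluster centroid.
        d = np.linalg.norm(Z[members] - km.cluster_centers_[j], axis=1)
        keep.append(members[d.argmin()])
    return X[keep], y[keep]

# Usage: 1000 redundant pairs reduced to 50 representatives.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = np.tanh(X @ rng.normal(size=4))
X_small, y_small = reduce_training_set(X, y, n_clusters=50)
print(X_small.shape, y_small.shape)    # (50, 4) (50,)
```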

Relevance: 100.00%

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay, yet most electronic design automation techniques optimise only one of these parameters, either power or delay: industry-standard design flows integrate systematic methods for optimising area or timing, while power optimisation often relies on heuristics characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation.

The first question is: how can a design flow be built that incorporates both academic and industry-standard design flows for power optimisation? To address it, we take a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for the optimisation of digital circuits.

The second question is: can a systematic approach be applied to power optimisation of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and to which optimisation algorithms can then be applied. In particular we address the implications of systematic power optimisation methodologies, including the potential degradation of other, often conflicting, parameters such as area or delay.

Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to balance delay and power: each power optimisation step is constrained not only by the decrease in power but also by the increase in delay, and, similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits whose two conflicting objectives are power and delay.

The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. Switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in EDA, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We use SA to decide probabilistically whether to move from one optimised solution to another, so that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints; a good approximation to the globally optimal energy trade-off is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph; we use it to search the AIG network for a specific node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach thus combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist.

A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate the methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC: a reduction of 23% in power and 15% in delay is achieved with minimal overhead, compared to the best known ABC results. The approach has also been applied to a number of processors with combinational and sequential components, with significant savings.
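The constrained annealing move can be sketched abstractly (a generic SA loop under a delay budget; the cost, delay and neighbour functions are placeholders, not ABC's API):

```python
import math
import random

def anneal_power(x0, power, delay, neighbour, delay_budget,
                 t0=1.0, cooling=0.999, steps=20000, seed=0):
    """Minimise power(x) subject to delay(x) <= delay_budget."""
    rng = random.Random(seed)
    x = best = x0
    best_p = power(x0)
    t = t0
    for _ in range(steps):
        cand = neighbour(x, rng)
        t *= cooling
        if delay(cand) > delay_budget:      # reject constraint violations
            continue
        dp = power(cand) - power(x)
        # Accept improvements always; worsenings with Boltzmann probability.
        if dp <= 0 or rng.random() < math.exp(-dp / t):
            x = cand
            if power(x) < best_p:
                best, best_p = x, power(x)
    return best, best_p

# Toy usage: bits select library cells; clearing a bit saves power (P)
# but adds delay (D), so the two objectives pull in opposite directions.
P = [3, 1, 4, 1, 5, 9, 2, 6]
D = [2, 7, 1, 8, 2, 8, 1, 8]
power_f = lambda x: sum(p * b for p, b in zip(P, x))
delay_f = lambda x: sum(d * (1 - b) for d, b in zip(D, x))

def flip(x, rng):
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

print(anneal_power((1,) * 8, power_f, delay_f, flip, delay_budget=12))
```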

Relevance: 100.00%

Abstract:

We consider the problem of verifying software implementations of linear time-invariant controllers. Commonly, different implementations use different representations of the controller's state, for example due to optimizations in a third-party code generator. To accommodate this variation, we exploit the input-output controller specification captured by the controller's transfer function and show how to automatically verify the correctness of C code controller implementations using a Frama-C/Why3/Z3 toolchain. The scalability of the approach is evaluated using randomly generated controller specifications of realistic size.
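The fact exploited here, that state representations differ while input-output behaviour does not, is the standard similarity invariance of the transfer function (a textbook sketch, not the paper's notation):

```latex
% Two realizations related by an invertible T, \hat{x} = T x:
(\hat{A}, \hat{B}, \hat{C}, \hat{D}) = (T A T^{-1},\; T B,\; C T^{-1},\; D)
% share the same transfer function, hence the same I/O behaviour:
\hat{C}(zI - \hat{A})^{-1}\hat{B} + \hat{D} = C(zI - A)^{-1}B + D
```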

Relevance: 100.00%

Abstract:

This paper describes modeling technology and its use in providing data governing the assembly and subsequent reliability of electronic chip components on printed circuit boards (PCBs). Products such as mobile phones, camcorders and intelligent displays are changing at a tremendous rate, with newer technologies being applied to satisfy the demand for smaller products with increased functionality. At ever-decreasing dimensions and ever-increasing numbers of input/output connections, the design of these components, in terms of the dimensions and materials used, plays a key role in determining the reliability of the final assembly. Multiphysics modeling techniques are adopted to predict a range of interacting physics-based phenomena associated with the manufacturing process, for example heat transfer, solidification, Marangoni fluid flow, void movement, and thermal stress. The modeling techniques used are based on finite volume methods that are conservative and take advantage of being able to represent the physical domain with an unstructured mesh. These techniques are also used to provide data on thermally induced fatigue, which is then mapped into product lifetime predictions.

Relevance: 100.00%

Abstract:

A novel circuit design technique is presented which improves gain accuracy and linearity in differential amplifiers. The technique employs negative impedance compensation, and results demonstrate a significant performance improvement: higher precision, lower sensitivity, and a wide dynamic range. A theoretical underpinning is given, together with results for a demonstrator differential input/output amplifier with a gain of 12 dB. The simulation results show that, with the novel method, both gain accuracy and linearity can be improved greatly; in particular, the linearity improvement in intermodulation distortion (IMD) exceeds 23 dB at the required gain.

Relevance: 100.00%

Abstract:

eScience is an umbrella concept covering internet technologies such as web service orchestration, which involves the manipulation and processing of high volumes of data using simple and efficient methodologies. The concept is normally associated with bioinformatics, but nothing prevents an identical approach being used for geoinformatics and OGC (Open Geospatial Consortium) web services like WPS (Web Processing Service). In this paper we present an extended WPS implementation based on the PyWPS framework, using an automatically generated WSDL (Web Service Description Language) XML document that replicates the WPS input/output document structure used during an Execute request to a server. Services are accessed using a modified SOAP (Simple Object Access Protocol) interface provided by PyWPS that uses service and input/output identifiers as element names. The WSDL XML document is dynamically generated by applying an XSLT (Extensible Stylesheet Language Transformation) to the getCapabilities XML document generated by PyWPS. The availability of the SOAP interface and WSDL description makes WPS instances accessible to workflow development software like Taverna, enabling users to build complex workflows in which web services are represented by interconnected graphical elements. Taverna transforms the visual representation of the workflow into a SCUFL (Simple Conceptual Unified Flow Language) XML document that can be run internally or sent to a Taverna orchestration server. SCUFL uses a dataflow-centric orchestration model, as opposed to the more commonly used orchestration language BPEL (Business Process Execution Language), which is process-centric.
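The WSDL-generation step described, an XSLT applied to the getCapabilities document, looks roughly as follows with lxml (a sketch; the file names and stylesheet are assumptions, not part of PyWPS):

```python
from lxml import etree

# Assumed inputs: a WPS getCapabilities response and an XSLT
# stylesheet mapping process identifiers to WSDL operations.
capabilities = etree.parse("getCapabilities.xml")
stylesheet = etree.parse("capabilities_to_wsdl.xslt")

transform = etree.XSLT(stylesheet)
wsdl = transform(capabilities)      # WSDL document as an XML tree

with open("service.wsdl", "wb") as f:
    f.write(etree.tostring(wsdl, pretty_print=True,
                           xml_declaration=True, encoding="UTF-8"))
```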

Relevance: 100.00%

Abstract:

This paper examines the relative efficiency of UK credit unions. Radial and non-radial measures of input cost efficiency, plus associated scale efficiency measures, are computed for a selection of input-output specifications. Both measures highlight that UK credit unions have considerable scope for efficiency gains. It is mooted that the documented high levels of inefficiency may reflect the fact that credit unions, based on clearly defined and non-overlapping common bonds, are not in competition with each other for market share. Credit unions also suffer from a considerable degree of scale inefficiency, with the majority of scale-inefficient credit unions subject to decreasing returns to scale. The latter aspect highlights that the UK Government's goal of larger credit unions must be accompanied by greater regulatory freedom if inefficiency is to be avoided. One advantage of computing non-radial measures is that an insight into potential over- or under-expenditure on specific inputs can be obtained by comparing the non-radial measure of efficiency with the associated radial measure. Two interesting findings emerge: first, UK credit unions over-spend on dividend payments; second, they under-spend on labour costs.
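The radial measure referred to is the standard input-oriented DEA efficiency score (a textbook envelopment formulation; the thesis's exact input-output specification is not reproduced here):

```latex
% Input-oriented radial efficiency of unit o with inputs x_o, outputs y_o,
% against the input matrix X and output matrix Y of all units:
\theta^{*} = \min_{\theta,\, \lambda} \; \theta
\quad \text{s.t.} \quad
X\lambda \le \theta\, x_o, \qquad
Y\lambda \ge y_o, \qquad
\lambda \ge 0
% \theta^{*} = 1: efficient; \theta^{*} < 1: all inputs could shrink radially.
% Non-radial variants let each input contract by its own factor \theta_i,
% which is what reveals input-specific over- or under-expenditure.
```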