834 results for Input-output analysis (IOA)
Abstract:
We consider a large scale network of interconnected heterogeneous dynamical components. Scalable stability conditions are derived that involve the input/output properties of individual subsystems and the interconnection matrix. The analysis is based on the Davis-Wielandt shell, a higher dimensional version of the numerical range with important convexity properties. This can be used to allow heterogeneity in the agent dynamics while relaxing normality and symmetry assumptions on the interconnection matrix. The results include small gain and passivity approaches as special cases, with the three dimensional shell shown to be inherently connected with corresponding graph separation arguments. © 2012 Society for Industrial and Applied Mathematics.
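The Davis-Wielandt shell referred to here is the set DW(A) = {(x*Ax, ||Ax||^2) : ||x|| = 1}, i.e. the numerical range lifted by one real dimension into three-dimensional space. As a rough illustration of that object only (not of the paper's scalable stability test), a sampled approximation can be computed as follows; the example matrix and sample count are arbitrary choices.

```python
import numpy as np

def davis_wielandt_shell(A, n_samples=20000, rng=None):
    """Sample points of the Davis-Wielandt shell
    DW(A) = {(x* A x, ||A x||^2) : ||x|| = 1}, returned as
    (Re x*Ax, Im x*Ax, ||Ax||^2) triples."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Random complex unit vectors x.
    X = rng.normal(size=(n_samples, n)) + 1j * rng.normal(size=(n_samples, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    AX = X @ A.T                                        # each row is A x
    quad = np.einsum('ij,ij->i', X.conj(), AX)          # x* A x
    norm2 = np.einsum('ij,ij->i', AX.conj(), AX).real   # ||A x||^2
    return np.column_stack([quad.real, quad.imag, norm2])

# Example: a small non-normal matrix (an arbitrary illustration, not an
# interconnection matrix from the paper).
A = np.array([[0.0, 1.0], [-0.5, -0.2]])
pts = davis_wielandt_shell(A)
print(pts[:3])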
Abstract:
This paper uses dissipativity theory to provide the system-theoretic description of a basic oscillation mechanism. Elementary input-output tools are then used to prove the existence and stability of limit cycles in these "oscillators". The main benefit of the proposed approach is that it is well suited for the analysis and design of interconnections, thus providing a valuable mathematical tool for the study of networks of coupled oscillators.
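A canonical instance of the oscillation mechanism studied in this setting is a passive system in feedback with a static nonlinearity whose slope is negative near the origin, of which the Van der Pol oscillator is the standard textbook example. The sketch below is an illustration under that assumption, not the paper's own construction; the damping parameter mu and initial condition are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, z, mu=1.0):
    """Van der Pol oscillator  x'' - mu*(1 - x^2)*x' + x = 0,
    written as a first-order system in z = (x, x')."""
    x, v = z
    return [v, mu * (1.0 - x**2) * v - x]

# Trajectories from small initial conditions converge to a stable limit cycle.
sol = solve_ivp(van_der_pol, (0.0, 40.0), [0.1, 0.0], max_step=0.01)
print("final state:", sol.y[:, -1])
```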
Abstract:
© 2015 John P. Cunningham and Zoubin Ghahramani. Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
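As a concrete instance of the "optimization program over a matrix manifold" view, PCA can be written as maximizing tr(MᵀCM) over matrices M with orthonormal columns (the Stiefel manifold), whose optimum is spanned by the top eigenvectors of the sample covariance C. A minimal sketch of that reading (not the authors' generic manifold solver):

```python
import numpy as np

def pca_as_manifold_program(X, r):
    """PCA viewed as  max_{M^T M = I_r} tr(M^T C M),
    solved in closed form by the top-r eigenvectors of the covariance C."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    M = eigvecs[:, ::-1][:, :r]                # top-r eigenvectors, orthonormal
    objective = np.trace(M.T @ C @ M)          # value of the manifold program
    return M, objective

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
M, val = pca_as_manifold_program(X, r=2)
print(M.shape, round(val, 3))
```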
Abstract:
For a four-port microracetrack channel drop filter, unexpected transmission characteristics due to strong dispersive coupling in the light tunneling between the input-output waveguides and the resonator are demonstrated: a large drop-port transmission at off-resonance wavelengths is observed in finite-difference time-domain simulation, causing a severe decline of the extinction ratio and finesse. An appropriate decrease of the coupling strength is found to suppress the dispersive coupling and greatly increase the extinction ratio and finesse; such a decrease can be realized with an asymmetrical coupling waveguide structure. In addition, the profile of the coupling dispersion in the transmission spectra can be predicted from a coupled-mode theory analysis of an equivalent system consisting of two coupled straight waveguides. The effects of the structural parameters on the transmission spectra obtained by this method agree well with the numerical results, which is useful for avoiding the strong dispersive-coupling region in filter design. © 2007 Optical Society of America.
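For reference, the standard (frequency-independent) temporal coupled-mode theory of a two-bus add-drop resonator gives a Lorentzian drop-port response; the sketch below evaluates it for purely illustrative coupling and loss rates. The paper's point is precisely that strongly dispersive, wavelength-dependent coupling distorts this ideal shape.

```python
import numpy as np

def add_drop_response(delta_omega, tau_0, tau_e1, tau_e2):
    """Temporal coupled-mode theory of a two-bus (add-drop) resonator:
    returns (through, drop) power transmissions at detuning delta_omega.
    1/tau_0 is the intrinsic loss rate; 1/tau_e1, 1/tau_e2 are the
    decay rates into the input and drop waveguides."""
    inv_tau = 1.0 / tau_0 + 1.0 / tau_e1 + 1.0 / tau_e2
    denom = delta_omega**2 + inv_tau**2
    through = (delta_omega**2 + (inv_tau - 2.0 / tau_e1) ** 2) / denom
    drop = (4.0 / (tau_e1 * tau_e2)) / denom
    return through, drop

# Illustrative, arbitrary rates (s^-1); detuning swept around resonance.
dw = np.linspace(-5e10, 5e10, 5)
print(add_drop_response(dw, tau_0=1e-9, tau_e1=2e-10, tau_e2=2e-10))
```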
Abstract:
Single-photon Sagnac interferometry as a probe of macroscopic quantum mechanics is considered at the theoretical level. For a freely moving macroscopic quantum mirror susceptible to the radiation-pressure force inside a Sagnac interferometer, a careful analysis of the input-output relation reveals that the particle-spectrum readout at the bright and dark ports encodes information about the noncommutativity of the position and momentum of the macroscopic mirror. A feasible experimental scheme to probe the commutation relation of a macroscopic quantum mirror is outlined, in order to explore the possible frontier between the classical and quantum regimes. In the Appendix, the case of Michelson interferometry as a feasible probe is also sketched.
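For orientation, the object being probed is the canonical commutator of the mirror's centre-of-mass variables, which enters the optical input-output relation through the phase a moving mirror imprints on the reflected field. A schematic, linearized form is shown below (a textbook simplification with k the optical wavenumber, not the paper's full Sagnac relations):

```latex
[\hat{x},\hat{p}] = i\hbar ,
\qquad
\hat{a}_{\mathrm{out}} = e^{\,2ik\hat{x}}\,\hat{a}_{\mathrm{in}}
\;\approx\; \hat{a}_{\mathrm{in}} + 2ik\,\hat{x}\,\hat{a}_{\mathrm{in}} .
```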
Abstract:
The research model of the system is presented, and three key issues that must be considered in system control and design are identified: stability, transparency, and time-delay handling. Four main stability-analysis methods are then described: Lyapunov stability, input-output stability, passivity-based stability, and event-based stability, and the strengths and limitations of these methods are summarized. Several main control strategies are then presented, and the advantages and disadvantages of existing control methods are pointed out. Finally, the main directions for further research are proposed.
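Among the passivity-based approaches to the time-delay issue mentioned above, the scattering (wave-variable) transformation is the classical tool: it encodes velocity and force into wave variables so that a constant-delay channel remains passive. Below is a minimal sketch of the transformation itself; the surrounding control loop and the wave impedance b are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def to_wave(velocity, force, b=1.0):
    """Scattering/wave-variable transformation: u is the wave sent into the
    channel, v the returning wave; b is the wave impedance."""
    u = (b * velocity + force) / np.sqrt(2.0 * b)
    v = (b * velocity - force) / np.sqrt(2.0 * b)
    return u, v

def from_wave(u, v, b=1.0):
    """Inverse transformation, recovering (velocity, force)."""
    velocity = (u + v) / np.sqrt(2.0 * b)
    force = np.sqrt(b / 2.0) * (u - v)
    return velocity, force

# Power balance check: (u^2 - v^2)/2 equals velocity * force, which is what
# makes a delayed wave-variable channel passive.
vel, f = 0.3, 2.0
u, v = to_wave(vel, f)
print(np.isclose((u**2 - v**2) / 2.0, vel * f))
```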
Abstract:
Viewing the city as an intricate and demanding system, this thesis advances a new concept: an urban sustainable development strategy consists in a high degree of harmony and amalgamation among the urban economy, the geo-environment and techno-capital, and its optimum region lies where the three mutually match. This quantitatively demarcates the optimum value region of urban sustainable development and establishes an academic foundation for describing and analysing sustainable development strategy. A series of cause-effect models, an analysis-situs (topological) model and a flux model for the urban system, together with a corresponding recognition mode, are established with the System Dynamics approach; these can distinguish urban states by the polarity of their entropy flows. At the same time, the matter, energy and information flows that arise in the course of urban development are analysed on the basis of the input/output (I/O) relationships of the urban economy, and a new type of I/O relationship, a resources-environment account in which both resource and environmental factors are considered, is established. Together these lay a theoretical foundation for resource and environmental economics and for the quantitative interaction between urban development and the geo-environment, and provide a new approach for analysing the national economy and urban sustainable development. Based on an analysis of the connection between the resource-environmental structure of the geo-environment and urban economic development, the Geoenvironmental Carrying Capability (GeCC) is analysed; furthermore, a series of definitions and formulas for the Gross Carrying Capability (GCC), the Structure Carrying Capability (SCC) and the Impulse Carrying Capability (ICC) is derived, which can be applied to evaluate both the quality and the capacity of the geo-environment and thereby to determine the scale and speed of urban development. A demonstrative study is carried out for Beihai city (Guangxi province, PRC), and the numerical relationships between urban development and its geo-environment are studied through the I/O relationships of the urban economy, as follows:
· the relationships between urban economic development and land use, as well as the consumption of groundwater, metallic minerals, mineral energy sources, non-metallic minerals and other geological resources;
· the relationships between the urban economy and waste outputs such as the industrial "three wastes", dust, refuse and domestic wastewater, together with the restricting impact of resource-environmental factors and techno-capital on urban growth;
· optimisation and control analysis of the interaction between the urban economy and its geo-environment, fixing the sensitive factors, and their ordering, among urban geoenvironmental resources, wastes and economic sectors, which can be applied to determine the urban industrial structure, scale and growth rate that match the geo-environment and techno-capital;
· a suggested sustainable development strategy for the city.
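The I/O relationships invoked above are, at bottom, Leontief-type accounts: given a technical-coefficient matrix A and a final-demand vector d, gross outputs solve x = Ax + d. A minimal numerical sketch follows; the sectors and coefficients are invented for illustration and are not data from the Beihai study.

```python
import numpy as np

# Hypothetical 3-sector technical coefficients (column j = inputs required
# per unit of sector j's output); NOT data from the Beihai case study.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.02]])
d = np.array([100.0, 50.0, 80.0])   # final demand by sector

# Leontief solution x = (I - A)^-1 d: gross output needed to satisfy d.
x = np.linalg.solve(np.eye(3) - A, d)
print(np.round(x, 2))
```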
Abstract:
M. H. Lee and S. M. Garrett, Qualitative modelling of unknown interface behaviour, International Journal of Human Computer Studies, Vol. 53, No. 4, pp. 493-515, 2000
Abstract:
The signal integrity of high-speed interconnected digital systems, when assessed through the simulation of physical (transistor-level) models, is computationally expensive (e.g., in CPU run time and memory storage) and requires access to physical details of the device's internal structure. This scenario increases the interest in the alternative of behavioural modelling, which describes the operating characteristics of the device from the observation of its electrical input/output (I/O) signals. The I/O interfaces of memory chips, which contribute most to the computational load, perform complex functions and therefore include a large number of pins. In particular, output buffers unavoidably distort the signals because of their dynamic and nonlinear behaviour, and so constitute the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, previously neglected nonlinear dynamic effects of the output buffer are studied and modelled efficiently so as to reduce the complexity of parametric black-box modelling, thereby improving on the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation features of black-box models, the analysis of the electrical signals observed at the I/O, and properties of the buffer's physical structure under practical operating conditions. This approach leads to a physically inspired behavioural-model construction process that overcomes the problems of previous approaches, optimising the resources used at the different stages of model generation (i.e., characterisation, formulation, extraction and implementation) to simulate the nonlinear dynamic behaviour of the buffer. Consequently, the most significant contribution of this thesis is the development of a new two-port analogue behavioural model suitable for overclocking simulation, which is of particular interest for the most recent uses of memory I/O interfaces at high data rates. The effectiveness and accuracy of the behavioural models developed and implemented are qualitatively and quantitatively assessed by comparing the numerical results of the extraction of their functions and of transient simulation with the corresponding state-of-the-art reference model, IBIS.
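A common structure for such parametric black-box buffer models, and the usual starting point for improving on static IBIS tables, is the two-piece representation of the output current as a weighted sum of pull-up and pull-down submodels. The sketch below shows only that skeleton, with purely illustrative static characteristics and switching weights rather than anything extracted from a real device.

```python
import numpy as np

def output_current(v_out, w_high, i_high, i_low):
    """Two-piece behavioural output-buffer skeleton:
    i(t) = w_H(t) * i_H(v) + w_L(t) * i_L(v),
    where i_H, i_L are pull-up/pull-down submodels and w_H + w_L = 1
    are switching weights driven by the input bit stream."""
    return w_high * i_high(v_out) + (1.0 - w_high) * i_low(v_out)

# Illustrative static (memoryless) submodels; a real model would include the
# nonlinear dynamic terms that this thesis sets out to capture.
i_high = lambda v: 0.02 * (1.2 - v)   # sourcing current toward VDD = 1.2 V
i_low = lambda v: -0.02 * v           # sinking current toward ground

t = np.linspace(0.0, 4e-9, 5)
w = (np.sin(2 * np.pi * t / 2e-9) + 1.0) / 2.0   # toy switching waveform
print(output_current(0.6, w, i_high, i_low))
```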
Abstract:
Doctoral thesis, Medicine (Neurology), Universidade de Lisboa, Faculdade de Medicina, 2015
Abstract:
The advantages of a DSL and the benefits its use potentially brings imply that informed decisions on the design of a domain-specific language are of paramount importance. We believe that such decisions should be informed by the analysis of data empirically collected from systems, highlighting salient features that should then form the basis of a DSL. To support this theory, we describe an empirical study of a large OSS system called Barcode, written in C, from which we collected two well-known slice-based metrics. We analyzed multiple versions of the system and sliced its functions in three separate ways (i.e., on input, output and global variables). The purpose of the study was to try to identify sensitivities and traits in those metrics that might inform features of a potential slice-based DSL. Results indicated that cohesion was adversely affected by the use of global variables and that an appreciation of the role of function inputs and outputs can be gained through slicing. The study is motivated primarily by the problems with current tools and interfaces, experienced directly by the authors when extracting slicing data, and by the need to promote the benefits that analysis of slice data, and slicing in general, can offer.
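Slice-based metrics of this kind are computed from the sets of statements in each output (or input) slice of a function. The study does not name its two metrics here; two commonly used measures, Coverage and Tightness, are sketched below over slices represented as line-number sets, with an invented function purely for illustration.

```python
def coverage(slices, function_lines):
    """Mean proportion of the function's lines contained in each slice."""
    return sum(len(s) / len(function_lines) for s in slices) / len(slices)

def tightness(slices, function_lines):
    """Proportion of the function's lines common to every slice."""
    common = set(function_lines).intersection(*slices)
    return len(common) / len(function_lines)

# Invented example: a 10-line function sliced on two output variables.
function_lines = set(range(1, 11))
slices = [{1, 2, 3, 4, 7, 10}, {1, 2, 5, 6, 8, 10}]
print(coverage(slices, function_lines), tightness(slices, function_lines))
```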
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at mean values of random variables, aimed at studying the space variability. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN a Monte Carlo simulation procedure is implemented and the variability of the structural response based on global sensitivity analysis (GSA) is studied. The GSA is based on the first order Sobol indices and relative sensitivities. An appropriate GSA algorithm aiming to obtain Sobol indices is proposed. The most important sources of uncertainty are identified.
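The first-order Sobol index of input X_i is S_i = Var(E[Y | X_i]) / Var(Y); a standard way to estimate it by Monte Carlo is the pick-freeze (Saltelli-type) scheme sketched below on a toy model. The model function and sample size are arbitrary stand-ins for the ANN surrogate used in the paper.

```python
import numpy as np

def first_order_sobol(model, n_vars, n_samples=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for independent U(0,1) inputs."""
    rng = np.random.default_rng(rng)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # column i from B, all other columns from A
        # Saltelli-type estimator of Var(E[Y|X_i]).
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# Toy additive model: exact indices are proportional to the squared weights.
toy = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]
print(np.round(first_order_sobol(toy, 3, rng=0), 3))
```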
Abstract:
An approach for the analysis of uncertainty propagation in reliability-based design optimization of composite laminate structures is presented. Using the Uniform Design Method (UDM), a set of design points is generated over a domain centered on the mean reference values of the random variables. A methodology based on the inverse optimal design of composite structures to achieve a specified reliability level is proposed, and the corresponding maximum load is obtained as a function of ply angle. Using the generated UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on an evolutionary learning process. A Monte Carlo simulation using the developed ANN is then performed to simulate the behavior of the critical Tsai number, the structural reliability index, and their relative sensitivities as functions of the ply angle of the laminates. The results are generated for uniformly distributed random variables on a domain centered on the mean values. The statistical analysis of the results enables the study of the variability of the reliability index and of its sensitivity relative to the ply angle. Numerical examples showing the utility of the approach for the robust design of angle-ply laminates are presented.
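Since the reliability quantities are propagated through an ANN surrogate rather than the full structural model, the Monte Carlo step reduces to sampling the random variables and evaluating the trained network. A rough sketch of that loop with scikit-learn follows; the training data, network size and response are placeholders (and gradient-based training stands in for the paper's evolutionary learning), not the laminate model itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder "UDM design points": inputs X and a toy structural response y.
X_design = rng.uniform(-1.0, 1.0, size=(60, 3))
y_design = 1.5 - 0.8 * X_design[:, 0] ** 2 + 0.3 * X_design[:, 1] * X_design[:, 2]

# Surrogate trained on the design points (stand-in for the evolutionary ANN).
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(X_design, y_design)

# Monte Carlo through the surrogate: sample inputs, read off response statistics.
X_mc = rng.uniform(-1.0, 1.0, size=(20_000, 3))
y_mc = surrogate.predict(X_mc)
print(f"mean={y_mc.mean():.3f}  std={y_mc.std():.3f}  P(y<0.5)={np.mean(y_mc < 0.5):.4f}")
```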
Abstract:
The aim of this paper is to demonstrate that, even if Marx's solution to the transformation problem can be modified, his basic conclusions remain valid. The alternative solution presented here is based on the constraint of a common general profit rate in both spaces and on a money wage level that is determined simultaneously with prices.
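In input-output terms, the constraints described above correspond to a price-of-production system of the standard linear form below (a textbook formulation given only for orientation, with A the input-coefficient matrix, ℓ the direct labour coefficients, r the uniform profit rate and w the money wage; it is not a reproduction of the paper's own equations):

```latex
p = (1+r)\,\bigl(pA + w\,\ell\bigr)
```

A normalization condition (for instance, equating total price to total value) then closes the system, so that p and w are indeed determined simultaneously.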
Abstract:
The aim of this paper is to demonstrate that, even if Marx's solution to the transformation problem can be modified, his basic conclusions remain valid.