854 results for dynamic modeling and simulation
Abstract:
Modeling work in neuroscience can be classified using two different criteria. The first is model complexity, ranging from simplified conceptual models that are amenable to mathematical analysis to detailed models whose properties can be understood only through simulation. The second is the direction of the workflow, which can run from microscopic to macroscopic scales (bottom-up) or from behavioral target functions to the properties of components (top-down). We review the interaction of theory and simulation using examples of top-down and bottom-up studies and point to current developments in the fields of computational and theoretical neuroscience.
Abstract:
A comparison study was carried out between a wireless sensor node with a flip-chip-mounted bare die and a reference board with a BGA-packaged transceiver chip. The main focus is the return loss (S-parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modeling, including the different interconnect technologies, substrate properties, and passive components, was performed to simulate the system in Ansoft Designer software. Statistical methods, such as standard deviation and regression, were applied to the RF performance analysis to determine the impact of the different parameters on the return loss. An extreme-value search, building on this analysis, provides the parameter values that minimize the return loss. Measurements agree well with the analysis and simulation and showed a large improvement in return loss, from -5 dB to -25 dB, for the target wireless sensor node.
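The dependence of return loss on impedance mismatch follows directly from the reflection coefficient at the antenna connector. A minimal sketch (not from the paper; the 50 Ω reference and the example load impedances are illustrative assumptions):

```python
# Return loss from impedance mismatch at the antenna connector.
# S11 = (Z_load - Z0) / (Z_load + Z0); return loss in dB = 20*log10(|S11|).
import math

def return_loss_db(z_load: complex, z0: float = 50.0) -> float:
    """Return loss (dB) of a load impedance against a real reference Z0."""
    gamma = (z_load - z0) / (z_load + z0)  # complex reflection coefficient
    return 20.0 * math.log10(abs(gamma))

# A load close to 50 ohms reflects little power (strongly negative return
# loss); a mismatched load reflects far more.
print(return_loss_db(55 + 3j))   # well matched
print(return_loss_db(20 - 40j))  # badly matched
```

The -5 dB to -25 dB improvement reported above corresponds to moving from a strongly mismatched load toward one close to the reference impedance.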
Abstract:
A modeling strategy is presented to solve the governing equations of fluid flow, temperature (with solidification), and stress in an integrated manner. These equations are discretized using finite volume methods on unstructured grids, which provide the capability to represent complex domains. Both the cell-centered and vertex-based forms of the finite volume discretization procedure are explained, and the overall integrated solution procedure using these techniques with suitable solvers is detailed. Two industrial processes, based on the casting of metals, are used to demonstrate the capabilities of the resulting modeling framework. These manufacturing processes require a high degree of coupling between the governing physical equations to accurately predict potential defects. Comparisons between model predictions and experimental observations are given.
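As an illustration of the cell-centered finite volume idea used in such frameworks, the sketch below solves steady 1-D heat conduction on a uniform grid (a toy assumption; the paper's solver handles coupled physics on unstructured grids):

```python
# Cell-centered finite volume for steady 1-D conduction, d/dx(k dT/dx) = 0,
# on [0, 1] with fixed wall temperatures; uniform grid, constant k.
import numpy as np

def solve_conduction(n_cells: int, t_left: float, t_right: float) -> np.ndarray:
    dx = 1.0 / n_cells
    A = np.zeros((n_cells, n_cells))
    b = np.zeros(n_cells)
    for i in range(n_cells):
        # Diffusive fluxes across the west and east faces of cell i.
        if i > 0:
            A[i, i - 1] -= 1.0 / dx
            A[i, i] += 1.0 / dx
        else:  # wall at x = 0, half-cell distance from centroid to face
            A[i, i] += 2.0 / dx
            b[i] += 2.0 / dx * t_left
        if i < n_cells - 1:
            A[i, i + 1] -= 1.0 / dx
            A[i, i] += 1.0 / dx
        else:  # wall at x = 1
            A[i, i] += 2.0 / dx
            b[i] += 2.0 / dx * t_right
    return np.linalg.solve(A, b)

t = solve_conduction(10, 100.0, 0.0)
# Constant-k steady conduction gives a linear profile between the walls.
```

Each matrix row is a discrete flux balance over one control volume; the same bookkeeping generalizes to face loops over unstructured polyhedral cells.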
Abstract:
This paper investigates an isothermal fatigue test for solder joints developed at NPL. The test specimen is a lap joint between two copper arms. During the test, the displacement at the ends of the copper arms is controlled and the force is measured. The modeling results in the paper show that the displacement across the solder joint is not equal to the displacement applied at the end of the specimen, owing to deformation within the copper arms. A method is described to compensate for this difference. The strain distribution in the solder was determined by finite element analysis and compared to the distribution generated by a theoretical 'ideal' test, which produces an almost pure shear mode in the solder. Using a damage-based constitutive law, the shape of the crack generated in the specimen has been predicted for both the actual test and the ideal pure shear test. Results from the simulations are also compared with experimental data for SnAgCu solder.
Abstract:
This paper addresses the problem of optimally locating intermodal freight terminals in Serbia. To solve this problem and determine the effects of the resulting scenarios, two modeling approaches were combined. The first approach is based on multiple-assignment hub-network design, and the second is based on simulation. The multiple-assignment p-hub network location model was used to determine the optimal location of intermodal terminals. Simulation was used as a tool to estimate intermodal transport flow volumes, given the unreliability and unavailability of specific statistical data, and as a method for quantitatively analyzing the economic, time, and environmental effects of different scenarios of intermodal terminal development. The results presented here represent a summary, with some extension, of the research carried out in the IMOD-X project (Intermodal Solutions for Competitive Transport in Serbia).
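A multiple-assignment p-hub median model of the kind cited above routes each origin-destination flow through its cheapest open hub pair, with a discount on the inter-hub leg. A minimal brute-force sketch (the flow matrix, costs, and discount factor are toy assumptions, not the IMOD-X data):

```python
# Brute-force multiple-assignment p-hub median: choose p hubs minimizing
# total flow-weighted cost, where every O-D pair may use any open hub pair
# and inter-hub legs are discounted by alpha (economies of scale).
from itertools import combinations

def phub_cost(flows, cost, hubs, alpha):
    total = 0.0
    for i, row in enumerate(flows):
        for j, w in enumerate(row):
            if w == 0:
                continue
            # Route i -> hub k -> hub m -> j; "multiple assignment" means
            # each O-D pair independently picks its best hub pair.
            best = min(cost[i][k] + alpha * cost[k][m] + cost[m][j]
                       for k in hubs for m in hubs)
            total += w * best
    return total

def locate_hubs(flows, cost, p, alpha=0.75):
    n = len(cost)
    return min(combinations(range(n), p),
               key=lambda hubs: phub_cost(flows, cost, hubs, alpha))
```

Enumerating hub subsets is only viable for small networks; the exact model in the literature is usually written as a mixed-integer program and solved with a MIP solver.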
Abstract:
A combined experimental and computational study of CO2 absorption in 1-butyl-3-methylimidazolium hexafluorophosphate, 1-ethyl-3-methylimidazolium bis[trifluoromethylsulfonyl]imide, and 1-butyl-3-methylimidazolium bis[trifluoromethylsulfonyl]imide ionic liquids is reported. The results provide a detailed nanoscopic picture of the absorption phenomena as a function of pressure and temperature. Absorption isotherms were measured at 318 and 338 K for pressures up to 20 MPa on ultrapure samples using a state-of-the-art magnetic suspension densimeter, for which measurement procedures were developed. A remarkable swelling effect upon CO2 absorption was observed at pressures above 10 MPa, which was corrected using a method based on experimental volumetric data. The experimental data reported in this work are in good agreement with available literature isotherms. The Soave-Redlich-Kwong and Peng-Robinson equations of state, coupled with a two-parameter van der Waals mixing rule, successfully correlate the experimental high-pressure absorption data. Molecular dynamics results yield structural, energetic, and dynamic properties of the studied CO2 + ionic liquid fluid mixtures, showing the relevant role of the strength of anion-cation interactions in fluid volumetric properties and CO2 absorption. © 2012 Elsevier B.V.
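The Soave-Redlich-Kwong correlation mentioned above reduces, for a pure component, to solving a cubic in the compressibility factor. A minimal pure-CO2 sketch (critical constants from standard tables; the mixture version used in such studies adds combining rules for a and b via the van der Waals mixing rule):

```python
# Soave-Redlich-Kwong EOS for a pure fluid: solve the cubic
#   Z^3 - Z^2 + (A - B - B^2) Z - A B = 0,
# where A = a*alpha*P/(R*T)^2 and B = b*P/(R*T).
import numpy as np

R = 8.314462618  # J/(mol K)

def srk_z(T, P, Tc, Pc, omega):
    a = 0.42748 * R**2 * Tc**2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2  # Soave temperature term
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real.max()  # largest real root: vapor-like compressibility

# CO2 (Tc = 304.13 K, Pc = 7.377 MPa, omega = 0.224) at 318 K, 1 bar:
z = srk_z(318.0, 1.0e5, 304.13, 7.377e6, 0.224)
```

At 1 bar the fluid is nearly ideal (Z close to 1); at the 10-20 MPa conditions of the isotherms above, Z drops well below 1, which is what drives the measured swelling.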
Abstract:
The community-wide GPCR Dock assessment is conducted to evaluate the status of molecular modeling and ligand docking for human G protein-coupled receptors. The present round of the assessment was based on the recent structures of the dopamine D3 and CXCR4 chemokine receptors bound to small-molecule antagonists, and of CXCR4 bound to a synthetic cyclopeptide. Thirty-five groups submitted their receptor-ligand complex structure predictions prior to the release of the crystallographic coordinates. With closely related homology modeling templates, as for the dopamine D3 receptor, and with incorporation of biochemical and QSAR data, modern computational techniques predicted complex details with accuracy approaching that of experiment. In contrast, the CXCR4 complexes, which had less-characterized interactions and only distant homology to the known GPCR structures, remained very challenging. The assessment results provide guidance for the modeling and crystallographic communities in method development and target selection for further expansion of the structural coverage of the GPCR universe.
Abstract:
Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors. The opportunities and limitations of power capping on the new low-power processor and accelerator ecosystem are less understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves less than 5% estimation bias to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of the cases, while always meeting power constraints.
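A compiler-assisted power model of the kind described is typically a fitted analytic form, e.g. a dynamic CMOS switching term plus static leakage, evaluated per candidate voltage/frequency pair to stay under a cap. A hedged sketch (the operating points and coefficients are illustrative, not the paper's fitted model for the Zynq):

```python
# Pick the fastest operating point whose predicted power fits the cap.
# Dynamic power scales as C * V^2 * f (CMOS switching); 'activity' stands
# in for a compiler-derived characterisation of the workload.

OPERATING_POINTS = [  # (voltage V, frequency MHz) -- illustrative values
    (0.9, 300), (1.0, 600), (1.1, 800), (1.2, 1000),
]

def predicted_power(v, f_mhz, activity, c_eff=1.1e-9, p_static=0.12):
    """Estimated watts: switching term plus a static leakage term."""
    return c_eff * activity * v * v * f_mhz * 1e6 + p_static

def best_point_under_cap(power_cap, activity):
    feasible = [(v, f) for v, f in OPERATING_POINTS
                if predicted_power(v, f, activity) <= power_cap]
    # Highest feasible frequency gives the best performance under the cap.
    return max(feasible, key=lambda p: p[1]) if feasible else None
```

In a real capping scheme this selection runs per epoch, with the activity term updated from compiler analysis or counters, and workload split between the ARM cores and the FPGA accordingly.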
Abstract:
This thesis describes a framework, grounded in the multi-layer paradigm, for analyzing, modeling, designing, and optimizing communication systems. It explores a new perspective on the physical layer that arises from the relationships among information theory, estimation theory, probabilistic methods, communication theory, and coding. This framework leads to design methods for the next generation of high-throughput communication systems. In addition, the thesis explores several access-layer techniques, based on the delay-throughput trade-off, for the design of delay-tolerant wireless networks. Fundamental results on the interplay between information theory and estimation theory lead to the proposal of an alternative paradigm for the analysis, design, and optimization of communication systems. Building on studies of the relationship between mutual information and MMSE, the approach described in the thesis overcomes, in a novel way, the difficulties inherent in optimizing reliable information transmission rates in communication systems, and enables the exploration of optimal power allocation and optimal precoding structures for different channel models: wired, wireless, and optical. The thesis also addresses the problem of delay, in an attempt to answer questions raised by the enormous demand for high throughput in communication systems. This is done by proposing new models for systems with network coding at layers above the physical layer. In particular, it addresses the use of network coding for time-varying, delay-sensitive channels. This was demonstrated through a new model and adaptive scheme whose algorithms were applied to wireless systems with complex fading, of which satellite communication systems are an example.
The thesis further addresses the use of network coding in demanding handover scenarios. This is done by proposing new IEEE 802.11 (WiFi) MAC transmission models, which are compared with network coding and shown to enable seamless handover. Through analysis and simulation-supported proposals, the thesis thus argues that the design of communication systems should consider transmission and coding strategies that are not only close to channel capacity but also delay-tolerant, and that such strategies must be designed with the channel characteristics and the physical layer in view.
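The mutual information-MMSE relationship underpinning the physical-layer results is, in its best-known form, the Guo-Shamai-Verdú identity for the Gaussian channel (stated here as background, not as the thesis's own derivation):

```latex
% For Y = \sqrt{\mathsf{snr}}\, X + N with N \sim \mathcal{N}(0,1), mutual
% information and minimum mean-square error are linked by one derivative:
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\,
  I\bigl(X;\, \sqrt{\mathsf{snr}}\, X + N\bigr)
  = \tfrac{1}{2}\,\mathrm{mmse}(\mathsf{snr}),
\qquad
\mathrm{mmse}(\mathsf{snr})
  = \mathbb{E}\bigl[(X - \mathbb{E}[X \mid Y])^{2}\bigr].
```

Because the derivative of the rate is half the estimation error, optimal power allocation and precoding problems can be attacked through the (often more tractable) MMSE.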
Abstract:
The thesis covers various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical setup. In the present thesis we mainly study the estimation and prediction of a signal-plus-noise model, where the signal and noise are assumed to follow models with symmetric stable innovations. The thesis begins with motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, together with a survey of existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here both the signal and the noise are stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite, and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed, and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent for signal extraction in processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker type estimation based on the auto-covariation function; the methods are exemplified by simulation and by an application to sea surface temperature data.
We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. In the fifth chapter we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. Chapter six explores the application of these techniques in signal processing, in the context of frequency estimation for sinusoidal signals observed in a symmetric stable noisy environment. We introduce a parametric spectrum analysis and a frequency estimate based on the power transfer function, which is estimated via the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
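In the finite-variance analogue of the Yule-Walker estimation described above, the AR coefficients solve a Toeplitz linear system in the sample autocovariances; the thesis replaces these with auto-covariations to handle infinite variance. A classical-case sketch (illustrative only; not the stable-innovations estimator itself):

```python
# Yule-Walker estimation of AR(p) coefficients from sample autocovariances.
# Solves Gamma_p @ phi = gamma, with Gamma_p the Toeplitz autocovariance matrix.
import numpy as np

def yule_walker(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma(0..p).
    gamma = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    Gamma = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(Gamma, gamma[1:])

# Simulate an AR(1) with phi = 0.6 and recover the coefficient.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
phi_hat = yule_walker(x, 1)
```

Under stable innovations the autocovariance does not exist, which is precisely why the thesis substitutes the auto-covariation function and, for the overdetermined higher-order system, least squares with SVD regularization.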