969 results for Simulation experiments
Abstract:
Today, the trend within the electronics industry is towards rapid, advanced simulation methodologies used in association with synthesis toolsets. This paper presents an approach developed to support mixed-signal circuit design and analysis. The proposed methodology offers a novel approach to developing behavioural model descriptions of mixed-signal circuit topologies: by constructing a set of subsystems, it supports the automated mapping of MATLAB®/SIMULINK® models to structural VHDL-AMS descriptions. The tool developed, named MS2SV (MATLAB®/SIMULINK® to SystemVision™), reads a SIMULINK® model file and translates it into structural VHDL-AMS code. It also creates the file structure required to simulate the translated model in SystemVision™. To validate the methodology and the tool, the DAC08, AD7524 and AD5450 data converters were studied and initially modelled in MATLAB®/SIMULINK®. The VHDL-AMS code generated automatically by MS2SV was then simulated in SystemVision™. The simulation results show that the proposed approach, which is based on VHDL-AMS descriptions of the original model library elements, allows behavioural-level simulation of complex mixed-signal circuits.
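As a rough sketch of the kind of model-to-code mapping such a translator performs, the Python fragment below emits a VHDL-AMS direct entity instantiation for each block of a parsed SIMULINK® model. The block records, library mapping, and port ordering are invented for illustration; MS2SV's actual parser and code generator are not reproduced here.

```python
# Hypothetical mapping pass: each parsed SIMULINK block record becomes a
# VHDL-AMS direct entity instantiation. Names and the library table are
# invented for this sketch.
blocks = [
    {"name": "dac1", "type": "DAC08",
     "inputs": ["digital_in"], "outputs": ["analog_out"]},
]
lib_map = {"DAC08": "entity work.dac08(behavioural)"}

def emit_instantiation(block):
    # Positional port map: inputs first, then outputs (a sketch convention).
    ports = ", ".join(block["inputs"] + block["outputs"])
    return f'{block["name"]} : {lib_map[block["type"]]} port map ({ports});'

for b in blocks:
    print(emit_instantiation(b))
```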
Abstract:
Community ecology seeks to understand and predict the characteristics of communities that can develop under different environmental conditions, but most theory has been built on analytical models that are limited in the diversity of species traits that can be considered simultaneously. We address that limitation with an individual-based model to simulate assembly of fish communities characterized by life history and trophic interactions with multiple physiological tradeoffs as constraints on species performance. Simulation experiments were carried out to evaluate the distribution of 6 life history and 4 feeding traits along gradients of resource productivity and prey accessibility. These experiments revealed that traits differ greatly in importance for species sorting along the gradients. Body growth rate emerged as a key factor distinguishing community types and defining patterns of community stability and coexistence, followed by egg size and maximum body size. Dominance by fast-growing, relatively large, and fecund species occurred more frequently in cases where functional responses were saturated (i.e. high productivity and/or prey accessibility). Such dominance was associated with large biomass fluctuations and priority effects, which prevented richness from increasing with productivity and may have limited selection on secondary traits, such as spawning strategies and relative size at maturation. Our results illustrate that the distribution of species traits and the consequences for community dynamics are intimately linked and strictly dependent on how the benefits and costs of these traits are balanced across different conditions. © 2012 Elsevier B.V.
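The saturating functional responses mentioned here are commonly written in Holling type II form; purely as a point of reference (the paper's exact formulation may differ), saturation means intake approaches a ceiling set by handling time:

```latex
f(R) = \frac{aR}{1 + ahR} \;\longrightarrow\; \frac{1}{h} \quad (R \to \infty),
```

where R is prey density, a the attack rate (prey accessibility), and h the handling time; high productivity or accessibility pushes f(R) toward its ceiling 1/h.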
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Translucent wavelength-division multiplexing optical networks use sparse placement of regenerators to overcome the physical impairments and wavelength contention introduced by fully transparent networks, and achieve performance close to that of fully opaque networks at much lower cost. In previous studies, we addressed the placement of regenerators based on static schemes, allowing for only a limited number of regenerators at fixed locations. This paper furthers those studies by proposing a dynamic resource allocation and dynamic routing scheme to operate translucent networks. The scheme is realized by dynamically sharing regeneration resources, including transmitters, receivers, and electronic interfaces, between regeneration and access functions under a multidomain hierarchical translucent network model. An intradomain routing algorithm, which takes into consideration optical-layer constraints as well as dynamic allocation of regeneration resources, is developed to address the problem of translucent dynamic routing in a single routing domain. Network performance in terms of blocking probability, resource utilization, and running times under different resource allocation and routing schemes is measured through simulation experiments.
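To make the resource-sharing idea concrete, here is a toy first-fit check (not the paper's algorithm) that walks a candidate path and places a regeneration wherever the accumulated impairment would exceed a transparent-reach budget, consuming a shared transceiver at that node; all names and values are invented.

```python
# Toy first-fit regenerator allocation along a candidate path (illustrative
# only). Assumes each individual link fits within the transparent-reach budget.
def allocate_regenerators(path, impairment, free_transceivers, budget):
    """path: node ids in order; impairment[(u, v)]: impairment of link u-v;
    free_transceivers[n]: shared transceivers currently idle at node n."""
    regens, acc = [], 0.0
    for u, v in zip(path, path[1:]):
        acc += impairment[(u, v)]
        if acc > budget:
            if free_transceivers.get(u, 0) < 1:
                return None  # blocked: no regeneration resource at node u
            free_transceivers[u] -= 1
            regens.append(u)
            acc = impairment[(u, v)]  # signal is regenerated at u
    return regens

path = ["a", "b", "c", "d"]
imp = {("a", "b"): 0.6, ("b", "c"): 0.5, ("c", "d"): 0.4}
print(allocate_regenerators(path, imp, {"b": 1, "c": 0}, budget=1.0))  # ['b']
```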
Abstract:
An analytical model for Virtual Topology Reconfiguration (VTR) in optical networks is developed. It targets optical networks with a circuit-based data plane and an IP-like control plane. By identifying and analyzing the important factors that impact network performance due to VTR operations on both planes, we can compare the benefits and penalties of different VTR algorithms and policies. The best VTR scenario can then be chosen adaptively from a set of such algorithms and policies according to real-time network conditions. For this purpose, a cost model integrating all these factors is created to provide a comparison criterion independent of any specific VTR algorithm or policy. A case study based on simulation experiments illustrates the application of our models.
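The role of such a cost model can be illustrated with a deliberately simplified Python sketch: fold the benefit and penalty factors of each candidate VTR algorithm or policy into one scalar so that candidates can be ranked. The factor names and weights below are invented; the paper's model integrates its own set of factors.

```python
# Illustrative scalar cost integrating VTR benefit/penalty factors so that
# candidate algorithms/policies can be compared on one criterion.
def vtr_cost(disruption, control_overhead, post_congestion,
             weights=(1.0, 0.5, 2.0)):
    w_d, w_o, w_c = weights
    return w_d * disruption + w_o * control_overhead + w_c * post_congestion

# Hypothetical real-time measurements for two candidate VTR policies.
candidates = {"policy_A": (3.0, 10.0, 0.2), "policy_B": (1.0, 25.0, 0.3)}
best = min(candidates, key=lambda p: vtr_cost(*candidates[p]))
print(best)  # policy_A under these invented numbers
```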
Abstract:
Network virtualization is a promising technique for building the Internet of the future, since it enables the low-cost introduction of new features into network elements. An open issue in such virtualization is how to map virtual network elements efficiently onto those of the existing physical network, also called the substrate network. This mapping is an NP-hard problem, and existing solutions ignore various real network characteristics in order to solve it in a reasonable time frame. This paper introduces new algorithms for the problem based on 0–1 integer linear programming that take into account a set of network parameters ignored by previous proposals. Approximation algorithms proposed here allow the mapping of virtual networks onto large network substrates. Simulation experiments give evidence of the efficiency of the proposed algorithms.
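For orientation, the node-mapping core of a generic 0–1 integer linear program has the following shape (a textbook formulation; the paper's models add further parameters and the link-mapping constraints omitted here):

```latex
\min \sum_{i \in V^{v}} \sum_{j \in V^{s}} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j \in V^{s}} x_{ij} = 1 \;\; \forall i, \qquad
\sum_{i \in V^{v}} d_{i}\, x_{ij} \le C_{j} \;\; \forall j, \qquad
x_{ij} \in \{0, 1\},
```

where x_ij = 1 iff virtual node i is placed on substrate node j, d_i is its resource demand, C_j the substrate node capacity, and c_ij a placement cost.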
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a quasi-geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, at a reasonable computational cost for data assimilation compared, for example, with a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
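The setting is easy to reproduce. The Python sketch below integrates the Lorenz 1963 model, estimates the locally dominant unstable direction by breeding, and confines a hypothetical identity-observation analysis increment to that direction, the basic AUS idea. It is a minimal sketch, not the authors' code, and all parameter values are illustrative.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) convective model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step_rk4(state, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def bred_vector(state, n_cycles=50, n_steps=8, amplitude=1e-3, dt=0.01):
    """Breeding: run a control and a perturbed trajectory, rescaling the
    difference to a fixed amplitude after each cycle; the converged
    difference approximates the locally dominant unstable direction."""
    rng = np.random.default_rng(0)
    pert = rng.standard_normal(3)
    pert *= amplitude / np.linalg.norm(pert)
    control, perturbed = state.copy(), state + pert
    for _ in range(n_cycles):
        for _ in range(n_steps):
            control = step_rk4(control, dt)
            perturbed = step_rk4(perturbed, dt)
        diff = perturbed - control
        pert = diff * (amplitude / np.linalg.norm(diff))
        perturbed = control + pert
    return pert / np.linalg.norm(pert), control

# AUS-style update: confine the increment to the bred direction e
# (here with identity observation operator and unit gain along e).
e, x_f = bred_vector(np.array([1.0, 1.0, 20.0]))
obs = x_f + 0.5                      # hypothetical observation
x_a = x_f + e * np.dot(e, obs - x_f) # analysis confined to span(e)
print(x_a)
```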
Abstract:
This experimental thesis concerns the study of the long-term behaviour of ancient bronzes recently excavated from burial conditions. The scientific interest is to clarify the effect of soil parameters on the degradation mechanisms of ancient bronze alloys. The work considered bronzes recovered from archaeological sites in the region of Dobrudja, Romania. The first part of the research was dedicated to the characterization of the bronze artefacts using non-destructive (micro-FTIR, reflectance mode) and micro-destructive (sampling and analysis of a stratigraphic section by OM and SEM-EDX) methods. The burial soils were geologically classified and analyzed by chemical methods (pH, conductivity, anion content). Most of the objects analyzed showed a coarse and inhomogeneous corroded structure, often made up of several corrosion layers. This has been explained by the silty nature of the soils, which contain a low amount of clay and are therefore quite accessible to water and air. The main cause of the high dissolution rate of bronze alloys is the alternating water saturation and unsaturation of the soil, for example on a seasonal scale. Moreover, due to the vicinity of the Black Sea, the detrimental effect of chlorine was evidenced for a few objects, which were affected by bronze disease. A general classification of the corrosion layers was achieved by comparing the Cu/Sn ratio in the alloy and in the patina. Decuprification is a general trend, and enrichment of copper within the corrosion layers, due to the formation of thick layers of cuprite (Cu2O), is pointed out as well. Uncommon corrosion products and degradation patterns are also presented; they are probably due to peculiar local conditions during burial, such as anaerobic conditions or fluctuating environmental conditions. To acquire a better insight into the corrosion mechanisms, the second part of the thesis regarded simulation experiments conducted on commercial Cu-Sn alloys whose composition resembles that of the ancient artefacts. Electrochemical measurements were conducted in natural electrolytes, namely solutions extracted from natural soil (sampled at the archaeological sites) and seawater. Cyclic potentiodynamic experiments allowed the corrosion mechanism to be assessed in both media. The soil-extract electrolyte proved to be a non-aggressive medium, while an artificial solution prepared by increasing the anion concentration caused pitting corrosion of the alloy, as demonstrated by optical observations. In particular, electrochemical impedance spectroscopy allowed a qualitative assessment of the nature of the corroded structures formed in soil and in seawater. A double-structured layer is proposed, which differs in the two cases in the nature of the internal passive layer; in seawater, this layer is defective and porous.
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value, which can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
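In outline, the liability-threshold ACE decomposition referred to here is standard (the paper's covariate-specific version lets the threshold depend on covariates):

```latex
L = A + C + E, \qquad \sigma_A^2 + \sigma_C^2 + \sigma_E^2 = 1, \qquad
P(\text{affected}) = P(L > \tau) = 1 - \Phi(\tau),
```

and for a relative pair, Corr(L_1, L_2) = r σ_A² + σ_C², where r is the additive genetic relationship (1 for MZ twins, 1/2 for first-degree relatives); the heritability estimated is h² = σ_A².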
Abstract:
Automatic sorting systems (sorters) are of great importance in intralogistics. Sorters achieve a consistently high sorting rate combined with a low missort rate, and therefore often form the central building block of material flow systems with high handling rates. Distribution centres with storage and order-picking functions are typical examples of such material flow systems. A sorter consists of the subsystems induction, distribution conveyor, and destinations. The following considerations focus on a sorter model with a ring-structured distribution conveyor and single-slot occupancy: each slot carries exactly one item, so the distribution conveyor has a fixed transport capacity. Such conveyors are usually implemented as tilt-tray or cross-belt sorters. The theoretical sorting capacity of this sorter type can be determined from the conveyor speed and the slot pitch. In practical operation this system capacity is rarely reached; various factors in the induction and discharge areas reduce performance. This contribution presents analyses for determining the mean queue length in the induction area and the share of recirculating items on the distribution conveyor. The contribution is based on a research project funded by the German Federal Ministry of Economics and Technology (BMWi) through the Arbeitsgemeinschaft industrieller Forschungsvereinigungen "Otto von Guericke" (AiF) and carried out on behalf of the Bundesvereinigung Logistik e.V. (BVL).
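Written out, the capacity relation mentioned above is a simple rate calculation under the stated single-occupancy assumption:

```latex
P_{\max} = \frac{3600\, v}{s} \quad \text{[items/h]},
```

with v the conveyor speed in m/s and s the slot pitch in m; for example, v = 2 m/s and s = 1 m give an upper bound of 7200 items/h, which induction queueing and recirculation reduce in practice.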
Abstract:
We study state-based video communication where a client simultaneously informs the server about the presence status of various packets in its buffer. In sender-driven transmission, the client periodically sends the server a single acknowledgement packet that provides information about all packets that have arrived at the client by the time the acknowledgement is sent. In receiver-driven streaming, the client periodically sends the server a single request packet comprising a transmission schedule for sending missing data to the client over a time horizon. We develop a comprehensive optimization framework that enables computing packet transmission decisions that maximize the end-to-end video quality for the given bandwidth resources in both scenarios. The core step of the optimization is computing the probability that a single packet will be communicated in error as a function of the expected transmission redundancy (or cost) used to communicate the packet. Through comprehensive simulation experiments, we carefully examine the performance advances that our framework enables relative to state-of-the-art scheduling systems that employ regular acknowledgement or request packets. Consistent gains in video quality of up to 2 dB are demonstrated across a variety of content types. We show that there is a direct analogy between the error-cost efficiency of streaming a single packet and the overall rate-distortion performance of streaming the whole content. In the case of sender-driven transmission, we develop an effective modeling approach that accurately characterizes the end-to-end performance as a function of the packet loss rate on the backward channel and the source encoding characteristics.
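The per-packet error-cost trade-off at the core of the optimization can be illustrated with a toy i.i.d.-loss model (an illustration of the concept only, not the paper's channel model): with loss rate p, per-round feedback, and at most n transmission opportunities, the error probability is p^n and the expected cost is (1 - p^n)/(1 - p).

```python
# Toy error-cost curve for a single packet under i.i.d. loss with rate p:
# retransmit on failure, at most n transmission opportunities. The packet is
# in error only if all n attempts are lost; the expected cost counts the
# transmissions actually sent (attempt k occurs only if the first k-1 failed).
def error_prob(p, n):
    return p ** n

def expected_cost(p, n):
    return (1 - p ** n) / (1 - p)  # sum of p^(k-1) for k = 1..n

for n in range(1, 5):
    print(n, error_prob(0.1, n), round(expected_cost(0.1, n), 4))
```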
Abstract:
Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments on calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments by an evolution test function of stored over dissipated energy (the lambda factor). Parameter studies of strain hardening, the dislocation creep parameter, strain rate, temperature, and the lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. The analysis could be improved through independent verification by thermographic analysis in physical laboratory experiments, to independently assess lambda factor evolution under laboratory conditions.
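For reference, a common form of the Ramberg-Osgood stress-strain approximation invoked here (the paper's calibration for calcite may use different symbols and parameter values):

```latex
\varepsilon = \frac{\sigma}{E} + \alpha\, \frac{\sigma_0}{E} \left( \frac{\sigma}{\sigma_0} \right)^{n},
```

where E is the elastic modulus, σ₀ a reference (yield) stress, α the yield offset, and n the hardening exponent governing the transient hardening behaviour.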
Abstract:
A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived, and estimation procedures related to this derivation were shown in detail. Three examples of applications of the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient to more traditional estimators. The non-Bayesian methods of estimation used were: analysis of variance and maximum likelihood for balanced data; and MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful, and practical alternatives to traditional methods of estimation.
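For context, the quantity being estimated and its classical ANOVA estimator for a balanced one-way design with k observations per group are (standard results, not specific to this thesis):

```latex
\rho = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_w^2}, \qquad
\hat{\rho}_{\text{ANOVA}} = \frac{\mathrm{MSB} - \mathrm{MSW}}{\mathrm{MSB} + (k-1)\,\mathrm{MSW}},
```

where MSB and MSW are the between- and within-group mean squares; the posterior mean and mode are compared against this and the other estimators listed.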
Abstract:
One of the difficulties in the practical application of ridge regression is that, for a given data set, it is unknown whether a selected ridge estimator has smaller squared error than the least squares estimator. The concept of the improvement region is defined, and a technique is developed that obtains approximate confidence intervals for the value of the ridge parameter k which produces the maximum reduction in mean squared error. Two simulation experiments were conducted to investigate how accurate these approximate confidence intervals might be.
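For reference, the ridge estimator and its classical mean-squared-error decomposition (Hoerl-Kennard form), on which such comparisons rest, are:

```latex
\hat{\beta}(k) = (X^{\mathsf{T}} X + k I)^{-1} X^{\mathsf{T}} y, \qquad
\mathrm{MSE}(k) = \sigma^2 \sum_j \frac{\lambda_j}{(\lambda_j + k)^2}
               + k^2 \sum_j \frac{\alpha_j^2}{(\lambda_j + k)^2},
```

where λ_j are the eigenvalues of X^T X and α_j the regression coefficients in the corresponding canonical coordinates. On this reading, the improvement region would be the set of k > 0 with MSE(k) < MSE(0), and the confidence interval targets the k minimizing MSE(k).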