969 results for "Simulation experiments"
Abstract:
In modeling expectation formation, economic agents are usually viewed as forming expectations adaptively or in accordance with some rationality postulate. We offer an alternative nonlinear model where agents exchange their opinions and information with each other. Such a model yields multiple equilibria, or attracting distributions, that are persistent but subject to sudden large jumps. Using German Federal Statistical Office economic indicators and German IFO Poll expectational data, we show that this kind of model performs well in simulation experiments. Focusing upon producers' expectations in the consumption goods sector, we also discover evidence that structural change in the interactive process occurred over the period of investigation (1970-1998). Specifically, interactions in expectation formation seem to have become less important over time.
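A minimal sketch can illustrate this kind of interactive expectation formation (in the spirit of Weidlich/Lux-style opinion dynamics, not the authors' exact specification): agents hold binary expectations, switching probabilities rise with the average opinion, and the opinion index settles into persistent attracting distributions with occasional large jumps. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_opinion_index(n_agents=500, steps=2000, herding=1.2,
                           bias=0.0, seed=0):
    """Each agent holds a binary expectation (+1 optimistic, -1
    pessimistic) and switches with a probability that grows with the
    average opinion x, so the distribution of x develops multiple
    attractors with sudden jumps between them."""
    rng = np.random.default_rng(seed)
    state = rng.choice([-1, 1], size=n_agents)
    path = np.empty(steps)
    for t in range(steps):
        x = state.mean()                                # average opinion
        p_up = 0.01 * np.exp(bias + herding * x)        # -1 -> +1 rate
        p_down = 0.01 * np.exp(-(bias + herding * x))   # +1 -> -1 rate
        u = rng.random(n_agents)
        up = (state == -1) & (u < p_up)
        down = (state == 1) & (u < p_down)
        state = np.where(up, 1, np.where(down, -1, state))
        path[t] = state.mean()
    return path

path = simulate_opinion_index()
print(f"mean index {path.mean():+.2f}, std {path.std():.2f}")
```

With strong herding the index tends to sit near one extreme and occasionally flips, the multiple-equilibria, large-jump behaviour the abstract describes; weakening the herding parameter (the structural change reported for 1970-1998) collapses the attractors toward zero.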
Abstract:
Two stock-market simulation experiments investigated the notion that rumors that invoke stable-cause attributions spawn illusory associations and less regressive predictions and behavior. In Study 1, illusory perceptions of association and stable causation (rumors caused price changes on the day after they appeared) existed despite rigorous conditions of nonassociation (price changes were unrelated to rumors). Predictions (recent price trends will continue) and trading behavior (departures from a strong buy-low-sell-high strategy) were both anti-regressive. In Study 2, stability of attribution was manipulated via a computerized tutorial. Participants taught to view price-changes as caused by stable forces predicted less regressively and departed more from buy-low-sell-high trading patterns than those taught to perceive changes as caused by unstable forces. Results inform a social cognitive and decision theoretic understanding of rumor by integrating it with causal attribution, covariation detection, and prediction theory. (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
This work aimed to develop a computational model to simulate the drying of coffee fruits in an intermittent counter-flow dryer, using the EXTEND™ simulation language and the Thompson model (THOMPSON; PEART; FOSTER, 1968). The model was validated against experimental data obtained by Silva (1991), in which three drying-air temperatures (60, 80 and 100 °C) were used. Validation showed absolute deviations of 1.8 % w.b. and 1.1 kg, and relative errors of 11 % and 1.6 %, in the prediction of final moisture content and firewood consumption, respectively. The validated model was then used to run scenario-comparison experiments. The first experiment concerned changes to the operating cycle, varying the movement and stop times of the grain mass flow; the second concerned changes to the dryer configuration, namely the heights of the drying and resting chambers. The operating cycle with a movement time of one minute and a stop time of sixteen minutes, at a drying-air temperature of 100 °C, gave the best performance: a drying time of 12.3 h, firewood consumption of 109.5 kg, specific energy consumption of 7,660 kJ kg-1 of evaporated water, and a drying capacity of 87.86 kg h-1. As for the dryer configuration, the best performance occurred with a drying-chamber height of 2.3 m at a drying-air temperature of 100 °C, for which the simulations yielded a drying time of 12.0 h, firewood consumption of 106.5 kg, specific energy consumption of 7,433 kJ kg-1 of evaporated water, and a drying capacity of 90 kg h-1. Thus, for drying coffee fruits in an intermittent counter-flow dryer, the recommended operating cycle has a movement time of one minute and a stop time of sixteen minutes, without use of the resting chamber. This conclusion is based on dryer performance indices; impacts on quality parameters were not simulated.
Abstract:
This article studies several Fractional Order Control algorithms used for joint control of a hexapod robot. Both Padé and series approximations to the fractional derivative are considered for the control algorithm. The walking performance is evaluated through two indices: the mean absolute density of energy used per unit distance travelled, and the control effort. A set of simulation experiments reveals the influence of the different approximations on the proposed indices. The results show that the fractional proportional and derivative algorithm, implemented using the Padé approximation with a small number of terms, gives the best results.
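As a hedged illustration of the series route, a truncated Grünwald-Letnikov expansion is the usual discrete series approximation of a fractional derivative (the Padé approximants the paper also evaluates are not sketched here), and a fractional PD controller built on it might look as follows; the gains, order and history length are illustrative assumptions:

```python
def gl_coefficients(alpha, n_terms):
    """Coefficients c_k = (-1)^k * C(alpha, k) of the truncated
    Gruenwald-Letnikov series for the fractional derivative D^alpha,
    built with the standard recurrence c_k = c_{k-1} * (k-1-alpha)/k."""
    c = [1.0]
    for k in range(1, n_terms):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

def fractional_pd(errors, kp=1.0, kd=0.5, alpha=0.5, dt=0.01, n_terms=10):
    """PD^alpha control action u = kp*e + kd*D^alpha(e), with D^alpha
    approximated over the recent error history (most recent last)."""
    c = gl_coefficients(alpha, min(n_terms, len(errors)))
    d_alpha = sum(ck * errors[-1 - k] for k, ck in enumerate(c)) / dt**alpha
    return kp * errors[-1] + kd * d_alpha

# toy usage: a constant error history of 20 samples
history = [1.0] * 20
print(fractional_pd(history))
```

Truncating the series after a handful of terms mirrors the paper's finding that low-order approximations can already give good walking performance.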
Abstract:
6th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Catania, Italy, 17-19 September
Abstract:
The article first analyses the state of the art of bandwidth management in educational environments, presenting previously proposed solutions and experiences on the basis of several earlier classifications. The proposal put forward is then assessed through simulation experiments and tests in real environments, in order to verify its correct behaviour and to demonstrate its usefulness for managing the bandwidth of educational centres.
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e. the number of labels that can be used) as a means of simplifying management of the underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of its origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. For this reason, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (it is NP-complete), since it involves a hard decision problem. In this letter, however, the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By discarding the tree-shape assumption, label merging can be performed in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-time solvable rather than NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, more labels can be saved.
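The key counting idea (once tree shapes are dropped, flows that share the identical remaining path to the egress can be merged onto one label at each hop) can be shown with a toy computation. This is only an illustration of label-space counting under merging, not the Full Label Merging algorithm itself, and the example topology is made up:

```python
from collections import defaultdict

def label_space(lsps, merging=True):
    """Count labels per LSR for a set of LSPs given as node paths.
    Without merging, each LSP consumes its own label at every hop;
    with MP2P merging, LSPs with an identical remaining path to the
    egress share one label (labels = distinct path suffixes)."""
    labels = defaultdict(set)
    for j, path in enumerate(lsps):
        for i, node in enumerate(path[:-1]):
            suffix = tuple(path[i:])              # remaining route to egress
            labels[node].add(suffix if merging else (j, suffix))
    return {node: len(keys) for node, keys in labels.items()}

lsps = [("A", "C", "D", "E"), ("B", "C", "D", "E"), ("F", "D", "E")]
print(label_space(lsps, merging=False))   # e.g. node D needs 3 labels
print(label_space(lsps, merging=True))    # merged: node D needs only 1
```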
Abstract:
The impact of charcoal production on soil hydraulic properties, runoff response and erosion susceptibility was studied in both field and simulation experiments. Core and composite samples from 12 randomly selected sites within the Kotokosu catchment were taken from the 0-10 cm layer of charcoal site soils (CSS) and adjacent field soils (AFS). These samples were used to determine saturated hydraulic conductivity (Ksat), bulk density, total porosity, soil texture and color. Infiltration, surface albedo and soil surface temperature were also measured in both CSS and AFS. The measured properties were used as inputs to a rainfall-runoff simulation experiment on a smooth plot (5 % slope) of 25 x 25 m, gridded at 10 cm resolution. Typical rainfall intensities of the study watershed (high, moderate and low) were applied to five different combinations of Ksat distributions that could be expected in this landscape. The results showed significantly (p < 0.01) higher flow characteristics for the soil under charcoal kilns (an increase of 88 %). Infiltration was enhanced and runoff volume reduced significantly. The results showed runoff reductions of about 37 and 18 %, and runoff coefficients ranging from 0.47-0.75 and 0.04-0.39, for simulations based on high (200 mm h-1) and moderate (100 mm h-1) rainfall events over the CSS and AFS areas, respectively. Other potential impacts of charcoal production on watershed hydrology are also described. The results presented, together with watershed measurements, when available, are expected to enhance understanding of the hydrological responses of ecosystems to indiscriminate charcoal production and related activities in this region.
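A crude infiltration-excess (Hortonian) sketch conveys how higher Ksat under charcoal kilns translates into lower runoff coefficients; the lognormal Ksat fields and all numbers below are hypothetical stand-ins, not the study's measured distributions or its simulation model:

```python
import numpy as np

def runoff_coefficient(ksat_grid, rain_intensity):
    """Infiltration-excess sketch: each cell infiltrates at most its
    saturated conductivity Ksat (mm/h); the excess becomes runoff.
    Returns the plot-scale runoff coefficient (runoff / rainfall)."""
    excess = np.clip(rain_intensity - ksat_grid, 0.0, None)
    return excess.mean() / rain_intensity

rng = np.random.default_rng(1)
shape = (250, 250)  # 25 m x 25 m plot at 10 cm resolution
ksat_css = rng.lognormal(np.log(150.0), 0.6, shape)  # hypothetical charcoal-site soil
ksat_afs = rng.lognormal(np.log(80.0), 0.6, shape)   # hypothetical adjacent field soil
for label, grid in (("CSS", ksat_css), ("AFS", ksat_afs)):
    for rain in (200.0, 100.0):                      # high / moderate events (mm/h)
        print(label, int(rain), round(runoff_coefficient(grid, rain), 2))
```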
Abstract:
The loss of biodiversity has become a matter of urgent concern and a better understanding of local drivers is crucial for conservation. Although environmental heterogeneity is recognized as an important determinant of biodiversity, this has rarely been tested using field data at management scale. We propose and provide evidence for the simple hypothesis that local species diversity is related to spatial environmental heterogeneity. Species partition the environment into habitats. Biodiversity is therefore expected to be influenced by two aspects of spatial heterogeneity: 1) the variability of environmental conditions, which will affect the number of types of habitat, and 2) the spatial configuration of habitats, which will affect the rates of ecological processes, such as dispersal or competition. Earlier simulation experiments predicted that both aspects of heterogeneity will influence plant species richness at a particular site. For the first time, these predictions were tested for plant communities using field data, which we collected in a wooded pasture in the Swiss Jura mountains using a four-level hierarchical sampling design. Richness generally increased with increasing environmental variability and "roughness" (i.e. decreasing spatial aggregation). Effects occurred at all scales, but the nature of the effect changed with scale, suggesting a change in the underlying mechanisms, which will need to be taken into account if scaling up to larger landscapes. Although we found significant effects of environmental heterogeneity, other factors such as history could also be important determinants. If a relationship between environmental heterogeneity and species richness can be shown to be general, recently available high-resolution environmental data can be used to complement the assessment of patterns of local richness and improve the prediction of the effects of land use change based on mean site conditions or land use history.
Abstract:
Electrical impedance tomography (EIT) allows the measurement of intra-thoracic impedance changes related to cardiovascular activity. As a safe and low-cost imaging modality, EIT is an appealing candidate for non-invasive and continuous haemodynamic monitoring. EIT has recently been shown to allow the assessment of aortic blood pressure via the estimation of the aortic pulse arrival time (PAT). However, finding the aortic signal within EIT image sequences is a challenging task: the signal has a small amplitude and is difficult to locate due to the small size of the aorta and the inherent low spatial resolution of EIT. In order to most reliably detect the aortic signal, our objective was to understand the effect of EIT measurement settings (electrode belt placement, reconstruction algorithm). This paper investigates the influence of three transversal belt placements and two commonly-used difference reconstruction algorithms (Gauss-Newton and GREIT) on the measurement of aortic signals in view of aortic blood pressure estimation via EIT. A magnetic resonance imaging based three-dimensional finite element model of the haemodynamic bio-impedance properties of the human thorax was created. Two simulation experiments were performed with the aim to (1) evaluate the timing error in aortic PAT estimation and (2) quantify the strength of the aortic signal in each pixel of the EIT image sequences. Both experiments reveal better performance for images reconstructed with Gauss-Newton (with a noise figure of 0.5 or above) and a belt placement at the height of the heart or higher. According to the noise-free scenarios simulated, the uncertainty in the analysis of the aortic EIT signal is expected to induce blood pressure errors of at least ± 1.4 mmHg.
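One simple way to extract a pulse arrival time from an EIT pixel waveform is to time the first crossing of half the peak-to-peak amplitude after the cardiac trigger; this is a sketch under the assumption of a clean, trigger-aligned cardiac cycle, and the exact detector used in the paper may differ:

```python
import numpy as np

def pulse_arrival_time(signal, fs, t_trigger=0.0):
    """Estimate PAT as the first crossing of 50 % of the peak-to-peak
    amplitude after the cardiac trigger (e.g. the ECG R-peak).
    signal: one trigger-aligned cardiac cycle; fs: frame rate in Hz."""
    s = signal - signal.min()
    idx = np.argmax(s >= 0.5 * s.max())     # first frame above threshold
    return t_trigger + idx / fs

# synthetic aortic impedance pulse arriving ~120 ms after the trigger,
# sampled at a 100 Hz EIT frame rate
t = np.arange(0.0, 1.0, 0.01)
pulse = 1.0 / (1.0 + np.exp(-(t - 0.12) / 0.01))
print(f"estimated PAT: {pulse_arrival_time(pulse, fs=100.0):.3f} s")
```

Small timing errors in this detection step propagate directly into the blood pressure estimate, which is why the belt placement and reconstruction settings are evaluated above.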
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
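The instrument substitution idea can be made concrete with the classical Anderson-Rubin (1949) procedure it generalizes: to test beta = beta0 in y = X beta + u with instruments Z, regress y - X beta0 on the instruments and F-test their joint significance, which stays valid however weak the instruments are. Below is a minimal sketch without exogenous control regressors; the data and parameter values are illustrative:

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, X, Z, beta0):
    """AR test of H0: beta = beta0 in y = X beta + u with instruments Z:
    F-test of the instruments in a regression of y - X beta0 on Z.
    Exact under Gaussian errors and exogenous Z, whatever the
    instrument strength."""
    n, k = Z.shape
    r = y - X @ beta0                         # residual under H0
    Zc = np.column_stack([np.ones(n), Z])     # intercept + instruments
    coef, *_ = np.linalg.lstsq(Zc, r, rcond=None)
    rss1 = np.sum((r - Zc @ coef) ** 2)       # unrestricted RSS
    rss0 = np.sum((r - r.mean()) ** 2)        # intercept-only RSS
    F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
    return F, stats.f.sf(F, k, n - k - 1)

rng = np.random.default_rng(0)
n = 200
Z = rng.normal(size=(n, 2))
x = Z @ np.array([0.5, -0.3]) + rng.normal(size=n)  # first stage
y = 1.0 * x + rng.normal(size=n)
F, p = anderson_rubin_test(y, x[:, None], Z, np.array([1.0]))
print(f"AR F = {F:.2f}, p = {p:.3f}")       # H0 is true here
```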
Abstract:
Current developments in international climate policy require Germany to reduce its greenhouse gas emissions. The most important greenhouse gas is carbon dioxide, which is released into the atmosphere by the combustion of fossil fuels. The reduction targets can in principle be met by cutting emissions and by creating carbon sinks, where sinks denote the biological storage of carbon in soils and forests. An important determinant of these processes is the spatial dynamics of land use in a region. In this work, the model system HILLS is developed and used to simulate these complex interactions in the German federal state of Hesse. The goal is to use HILLS not only to analyse the current state but also to investigate scenarios of future regional land use development and its effect on the carbon budget up to 2020. To represent the spatial and temporal dynamics of land use in Hesse, the model LUCHesse is developed. Its task is to simulate the relevant processes on a 1 km2 grid, with the rates of change prescribed exogenously as area trends at the level of the Hessian administrative districts. LUCHesse consists of sub-models for the following processes: (A) expansion of settlement and commercial areas, (B) structural change in the agricultural sector, and (C) establishment of new forest areas (afforestation). Each sub-model comprises methods to assess the site suitability of the grid cells for the different land use classes and to allocate the prescribed trends to the cells best suited to each class. The sub-models are validated against statistical data for the period 1990 to 2000. As the result of a simulation run, digital maps of the land use distribution in Hesse are produced for discrete time steps. To simulate carbon storage, a modified version of the ecosystem model Century is developed (GIS-Century). It allows a controlled simulation run in annual steps and supports the integration of the model as a component of the HILLS model system. Several application schemes for GIS-Century are developed, with which the effects of cropland set-aside, afforestation, and the management of existing forests on carbon storage can be investigated. The model and the application schemes are validated against field and literature data. HILLS implements a sequential coupling of LUCHesse with GIS-Century. The spatial coupling takes place on the 1 km2 grid; the temporal coupling is achieved by introducing a land use vector that describes the land use change of a grid cell over the simulation period. In addition, HILLS integrates both models into a Geographic Information System (GIS) via a service- and database-oriented concept, so that the GIS functions for spatial data storage and processing can be used. As an application of the model system, a reference scenario for Hesse with a time horizon of 2020 is computed. For the agricultural sector, the scenario assumes an implementation of the AGENDA 2000 policy, which leads to large-scale set-aside of cropland, while for settlement, commerce and afforestation the current trends in area expansion are extrapolated.
With HILLS it is now possible to quantify the effect of these land use changes on biological carbon storage. While the expansion of settlement areas is identified as a carbon source (37 kt C/a), the most important sink is the management of existing forests (794 kt C/a). In addition, cropland set-aside (26 kt C/a) and afforestation (29 kt C/a) lead to additional carbon storage. For carbon storage in soils, the simulation experiments show very clearly that this sink is only of limited duration.
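The allocation step the LUCHesse sub-models share (assess per-cell suitability, then assign the exogenous district trend to the best-suited cells) can be sketched schematically; the class codes, suitability scores and cell counts below are hypothetical, and this is not the actual LUCHesse implementation:

```python
import numpy as np

def allocate_trend(suitability, current_use, source_class, target_class, n_cells):
    """Convert the n_cells source-class grid cells with the highest
    suitability for target_class (greedy, trend-driven allocation)."""
    eligible = np.flatnonzero(current_use == source_class)
    ranked = eligible[np.argsort(suitability[eligible])[::-1]]
    new_use = current_use.copy()
    new_use[ranked[:n_cells]] = target_class
    return new_use

# hypothetical 1 km2 cells: 0 = cropland, 1 = settlement, 2 = forest
rng = np.random.default_rng(2)
use = np.zeros(100, dtype=int)
suitability_forest = rng.random(100)          # per-cell afforestation suitability
use = allocate_trend(suitability_forest, use, source_class=0,
                     target_class=2, n_cells=5)
print((use == 2).sum(), "cells afforested this time step")
```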
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. This thesis proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread either statically or dynamically and is used by the thread scheduler to decide which threads to load in the contexts, and to decide which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and shows how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests. We show how simple hardware can prioritize the running of threads in the multiple contexts, and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. Thread prioritization can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
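A toy model of the central mechanism (priorities decide which loaded thread runs, and which thread a context switch selects when a long-latency operation blocks the current one) might look as follows; the class and method names are invented for illustration, and this models the policy, not the proposed hardware:

```python
import heapq

class MultiContextScheduler:
    """Priority-driven thread scheduling on a multiple-context
    processor: the highest-priority ready thread among the loaded
    contexts runs; a long-latency operation (e.g. a remote memory
    reference) blocks its thread and triggers a context switch."""

    def __init__(self, n_contexts):
        self.n_contexts = n_contexts
        self.loaded = []                 # (-priority, thread_id) pairs
        self.blocked = set()

    def load(self, thread_id, priority):
        if len(self.loaded) < self.n_contexts:
            heapq.heappush(self.loaded, (-priority, thread_id))

    def pick_next(self):
        ready = [e for e in self.loaded if e[1] not in self.blocked]
        return min(ready)[1] if ready else None   # highest priority wins

    def long_latency(self, thread_id):
        self.blocked.add(thread_id)      # thread waits; switch away

    def complete(self, thread_id):
        self.blocked.discard(thread_id)  # ready again

sched = MultiContextScheduler(n_contexts=2)
sched.load("critical", priority=10)
sched.load("background", priority=1)
print(sched.pick_next())                 # -> critical
sched.long_latency("critical")
print(sched.pick_next())                 # -> background hides the latency
```

The thesis point that prioritization must extend to memory requests would correspond to applying the same ordering to the queue of outstanding memory and network transactions, not only to the contexts.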
Abstract:
We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and for long-distance natural dispersal. A series of simulation experiments was run with the model, varying the epidemic pressure and the linkage between natural vegetation and the horticultural trade, with and without disease spread in commercial trade, and with and without inspections-with-eradication, giving a 2 x 2 x 2 x 2 factorial design started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, with a probability of detection and efficiency of infected-plant removal of 80 % and made at 90-day intervals, reduced the size of epidemics by about 60 % across the three sectors with a density of 1 % susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1 % largely isolated the trade network, so that inspections reduced the final epidemic size by over 90 %, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection. (C) 2009 Elsevier B.V. All rights reserved.
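A stripped-down stochastic sketch of the trade-plus-dispersal mechanism with periodic inspections-with-eradication is given below; every rate and size is a hypothetical stand-in (the published model's realistic trade network, habitat layers and parameter values are not reproduced here), but it shows the replicate-to-replicate variability and the effect of inspections that the abstract reports:

```python
import random

def simulate(n_sites=200, days=3650, p_trade=0.002, p_natural=0.0005,
             inspect_every=90, p_detect=0.8, seed=0):
    """Toy spread model: each infected site occasionally ships an
    infected consignment (trade) or seeds a distant site (natural
    dispersal); every inspect_every days each infected site is
    detected and cleared with probability p_detect."""
    rng = random.Random(seed)
    infected = {0}                                 # one introduction
    for day in range(1, days + 1):
        new = set()
        for _ in infected:
            if rng.random() < p_trade:             # commercial movement
                new.add(rng.randrange(n_sites))
            if rng.random() < p_natural:           # long-distance dispersal
                new.add(rng.randrange(n_sites))
        infected |= new
        if day % inspect_every == 0:               # inspection round
            infected = {s for s in infected
                        if rng.random() > p_detect}
    return len(infected)

sizes = sorted(simulate(seed=s) for s in range(50))   # 50 replicates
print(sizes[0], sizes[25], sizes[-1])   # min / median / max epidemic size
```

Setting p_trade to zero reproduces, qualitatively, the factorial contrast between runs with and without commercial spread.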