871 results for Dynamic Model Averaging
Abstract:
Unorganized traffic is a generalized form of travel wherein vehicles do not adhere to any predefined lanes and can travel in-between lanes. Such travel is visible in a number of countries, e.g. India, wherein it enables a higher traffic bandwidth, more overtaking and more efficient travel. These advantages are visible when the vehicles vary considerably in size and speed, in the absence of which the predefined lanes are near-optimal. Motion planning for multiple autonomous vehicles in unorganized traffic deals with deciding on the manner in which every vehicle travels, ensuring that vehicles collide neither with each other nor with static obstacles. In this paper the notion of predefined lanes is generalized to model unorganized travel for the purpose of planning vehicle travel. A uniform cost search is used for finding the optimal motion strategy of a vehicle, amidst the known travel plans of the other vehicles. The aim is to maximize the separation between the vehicles and static obstacles. The search is responsible for defining an optimal lane distribution among vehicles in the planning scenario. Clothoid curves are used for maintaining a lane or changing lanes. Experiments are performed by simulation over a set of challenging scenarios with a complex grid of obstacles. Additionally, behaviours of overtaking, waiting for a vehicle to cross and following another vehicle are exhibited.
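For reference, uniform cost search is a standard best-first search that expands nodes in order of accumulated path cost. The sketch below is a minimal generic version; the toy graph and its edge costs are illustrative only, not the paper's lane-distribution search or its separation-based cost function.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of lowest accumulated path cost.
    `graph` maps a node to a list of (neighbour, edge_cost) pairs."""
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# Toy road graph; in the paper's setting, edge costs would penalise
# proximity to other vehicles and obstacles (values here are made up).
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
}
result = uniform_cost_search(graph, "A", "D")
```

Because expansion is ordered by accumulated cost, the first time the goal is popped the returned route is optimal for the given edge costs.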
Abstract:
Biological models of an apoptotic process are studied using a system of differential equations derived from reaction-kinetics information. The mathematical model is reformulated in a state-space robust control theory framework in which parametric and dynamic uncertainty can be modelled to account for variations naturally occurring in biological processes. We propose to handle the nonlinearities using neural networks.
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of physical and/or functional characteristics of a built space [1]. Current BIM has limited ability to represent dynamic semantics and social information, often failing to consider building activity, behavior and context, thus limiting integration with intelligent built-environment management systems. Research, such as the development of Semantic Exchange Modules and the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value obtained from the semantic model. In addition, we propose a set of questions to be used during a model's feasibility study, and guidelines to help assess the most suitable method for managing semantics in a built environment.
Abstract:
Some amendments are proposed to a recent redefinition of the mental model concept in system dynamics. First, externalised, or articulated mental models should not be called cognitive maps; this term has a well established, alternative meaning. Second, there can be mental models of entities not yet existing beyond an individual's mind; the modelling of planned or desired systems is possible and recommended. Third, saying that mental models maintain social systems connects with some exciting research opportunities for system dynamics; however, it is probably an accidental distraction from the intended meaning of the redefinition. These minor criticisms apart, the new definition of mental model of a dynamic system is welcomed as a useful contribution to both research and practice.
Abstract:
The suggestion is discussed that characteristic particle and field signatures at the dayside magnetopause, termed “flux transfer events” (FTEs), are, in at least some cases, due to transient solar wind and/or magnetosheath dynamic pressure increases, rather than time-dependent magnetic reconnection. It is found that most individual cases of FTEs observed by a single spacecraft can, at least qualitatively, be explained by the pressure pulse model, provided a few rather unsatisfactory features of the predictions are explained in terms of measurement uncertainties. The most notable exceptions to this are some “two-regime” observations made by two satellites simultaneously, one on either side of the magnetopause. However, this configuration has not been frequently achieved for sufficient time, such observations are rare, and the relevant tests are still not conclusive. The strongest evidence that FTEs are produced by magnetic reconnection is the dependence of their occurrence on the north-south component of the interplanetary magnetic field (IMF) or of the magnetosheath field. The pressure pulse model provides an explanation for this dependence (albeit qualitative) in the case of magnetosheath FTEs, but this does not apply to magnetosphere FTEs. The only surveys of magnetosphere FTEs have not employed the simultaneous IMF, but have shown that their occurrence is strongly dependent on the north-south component of the magnetosheath field, as observed earlier/later on the same magnetopause crossing (for inbound/outbound passes, respectively). This paper employs statistics on the variability of the IMF orientation to investigate the effects of IMF changes between the times of the magnetosheath and FTE observations. 
It is shown that the previously published results are consistent with magnetospheric FTEs being entirely absent when the magnetosheath field is northward: all crossings with magnetosphere FTEs and a northward field can be attributed to the field changing sense while the satellite was within the magnetosphere (but close enough to the magnetopause to detect an FTE). Allowance for the IMF variability also makes the occurrence frequency of magnetosphere FTEs during southward magnetosheath fields very similar to that observed for magnetosheath FTEs. Conversely, the probability of attaining the observed occurrence frequencies for the pressure pulse model is 10⁻¹⁴. In addition, it is argued that some magnetosheath FTEs should, for the pressure pulse model, have been observed for northward IMF: the probability that the number is as low as actually observed is estimated to be 10⁻¹⁰. It is concluded that although the pressure pulse model can be invoked to qualitatively explain a large number of individual FTE observations, the observed occurrence statistics are in gross disagreement with this model.
Abstract:
The generation of flow and current vortices in the dayside auroral ionosphere has been predicted for two processes occurring at the dayside magnetopause. The first of these mechanisms is time-dependent magnetic reconnection, in “flux transfer events” (FTEs); the second is the action of solar wind dynamic pressure changes. The ionospheric flow signature of an FTE should be a twin vortex, with the mean flow velocity in the central region of the pattern equal to the velocity of the pattern as a whole. On the other hand, a pulse of enhanced or reduced dynamic pressure is also expected to produce a twin vortex, but with the central plasma flow being generally different in speed from, and almost orthogonal to, the motion of the whole pattern. In this paper, we make use of this distinction to discuss recent observations of vortical flow patterns in the dayside auroral ionosphere in terms of one or other of the proposed mechanisms. We conclude that some of the observations reported are consistent only with the predicted signature of FTEs. We then evaluate the dimensions of the open flux tubes required to explain some recent simultaneous radar and auroral observations and infer that they are typically 300 km in north–south extent but up to 2000 km in longitudinal extent (i.e., roughly 5 hours of MLT). Hence these observations suggest that recent theories of FTEs which invoke time-varying reconnection at an elongated neutral line may be correct. We also present some simultaneous observations of the interplanetary magnetic field (IMF) and solar wind dynamic pressure (observed using the IMP8 satellite) and the ionospheric flow (observed using the EISCAT radar) which are also only consistent with the FTE model. We estimate that for continuously southward IMF […]
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we will begin by considering the different types of grasp and then consider the physics of rigid objects that will be needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces to allow the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
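For context, the rigid-body quantity that underlies rendering multiple grasp forces is the net wrench: the contact forces sum to a net force, and their moments about the centre of mass sum to a net torque (F = Σ Fᵢ, τ = Σ rᵢ × Fᵢ). The sketch below is a generic illustration with made-up contact values, not the chapter's rendering algorithm.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def net_wrench(contacts, com=(0.0, 0.0, 0.0)):
    """Sum contact forces into a net force and a net torque about the
    centre of mass: F = sum(F_i), tau = sum((r_i - com) x F_i)."""
    F = [0.0, 0.0, 0.0]
    T = [0.0, 0.0, 0.0]
    for r, f in contacts:
        rel = tuple(ri - ci for ri, ci in zip(r, com))
        t = cross(rel, f)
        for k in range(3):
            F[k] += f[k]
            T[k] += t[k]
    return tuple(F), tuple(T)

# Two opposing fingertip forces offset along y: zero net force,
# but a pure couple about z (hypothetical values).
contacts = [((0.0, 0.1, 0.0), (1.0, 0.0, 0.0)),
            ((0.0, -0.1, 0.0), (-1.0, 0.0, 0.0))]
force, torque = net_wrench(contacts)
```

The resulting wrench is what a multi-body physics step would integrate to produce the object motion the user then perceives through the rendered forces.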
Abstract:
We used a light-use efficiency model of photosynthesis coupled with a dynamic carbon allocation and tree-growth model to simulate annual growth of the gymnosperm Callitris columellaris in the semi-arid Great Western Woodlands, Western Australia, over the past 100 years. Parameter values were derived from independent observations except for sapwood specific respiration rate, fine-root turnover time, fine-root specific respiration rate and the ratio of fine-root mass to foliage area, which were estimated by Bayesian optimization. The model reproduced the general pattern of interannual variability in radial growth (tree-ring width), including the response to the shift in precipitation regimes that occurred in the 1960s. Simulated and observed responses to climate were consistent. Both showed a significant positive response of tree-ring width to total photosynthetically active radiation received and to the ratio of modeled actual to equilibrium evapotranspiration, and a significant negative response to vapour pressure deficit. However, the simulations showed an enhancement of radial growth in response to increasing atmospheric CO2 concentration ([CO2]) during recent decades that is not present in the observations. The discrepancy disappeared when the model was recalibrated on successive 30-year windows: the ratio of fine-root mass to foliage area then increased by 14% (from 0.127 to 0.144 kg C m⁻²) as [CO2] increased, while the other three estimated parameters remained constant. The absence of a signal of increasing [CO2] has been noted in many tree-ring records, despite the enhancement of photosynthetic rates and water-use efficiency resulting from increasing [CO2]. Our simulations suggest that this behaviour could be explained as a consequence of a shift towards below-ground carbon allocation.
Abstract:
Climate controls fire regimes through its influence on the amount and types of fuel present and their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to an increase in CO2, which would be expected to increase primary production and therefore fuel loads even in the absence of climate change, vs. climate change effects. Four general circulation models provided last glacial maximum (LGM) climate anomalies – that is, differences from the pre-industrial (PI) control climate – from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range of 1.0–1.4 Pg C year⁻¹, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than in the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered as risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.
Abstract:
The detection of anthropogenic climate change can be improved by recognising the seasonality in the climate change response. This is demonstrated for the North Atlantic jet (zonal wind at 850 hPa, U850) and European precipitation responses projected by the CMIP5 climate models. The U850 future response is characterised by a marked seasonality: an eastward extension of the North Atlantic jet into Europe in November-April, and a poleward shift in May-October. Under the RCP8.5 scenario, the multi-model mean response in U850 in these two extended seasonal means emerges by 2035-2040 for the lower-latitude features and by 2050-2070 for the higher-latitude features, relative to the 1960-1990 climate. This is 5-15 years earlier than when evaluated in the traditional meteorological seasons (December-February, June-August), and it results from an increase in the signal-to-noise ratio associated with the spatial coherence of the response within the extended seasons. The annual mean response lacks important information on the seasonality of the response without improving the signal-to-noise ratio. The same two extended seasons are demonstrated to capture the seasonality of the European precipitation response to climate change and to anticipate its emergence by 10-20 years. Furthermore, some of the regional responses, such as the Mediterranean precipitation decline and the U850 response in North Africa in the extended winter, are projected to emerge by 2020-2025, according to the models with a strong response. Therefore, observations might soon be useful to test aspects of the atmospheric circulation response predicted by some of the CMIP5 models.
Abstract:
Dynamic soundtracking presents various practical and aesthetic challenges to composers working with games. This paper presents an implementation of a system addressing some of these challenges with an affectively driven music generation algorithm based on a second-order Markov model. The system can respond in real-time to emotional trajectories derived from two dimensions of affect in the circumplex model (arousal and valence), which are mapped to five musical parameters. A transition matrix is employed to vary the generated output in continuous response to the affective state intended by the gameplay.
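For context, a second-order Markov generator conditions each new note on the previous two. The sketch below is a minimal generic version; the pitch set and transition probabilities are made up, and the system's trained matrices and arousal/valence-to-parameter mappings are not represented.

```python
import random

# Illustrative second-order transition table: the key is the previous
# two pitches, the value a distribution over the next pitch.
TRANSITIONS = {
    ("C", "E"): {"G": 0.7, "E": 0.3},
    ("E", "G"): {"C": 0.5, "E": 0.5},
    ("G", "C"): {"E": 1.0},
    ("E", "C"): {"E": 1.0},
    ("G", "E"): {"C": 1.0},
    ("E", "E"): {"G": 1.0},
}

def generate(seed_pair, length, rng_seed=0):
    """Grow a melody from two seed notes by repeatedly sampling the
    distribution conditioned on the last two notes."""
    rng = random.Random(rng_seed)
    notes = list(seed_pair)
    while len(notes) < length:
        probs = TRANSITIONS[tuple(notes[-2:])]
        notes.append(rng.choices(list(probs), weights=list(probs.values()))[0])
    return notes

melody = generate(("C", "E"), 8)
```

In an affect-driven setting such as the one described, the transition matrix itself would be modulated in real time as the arousal/valence trajectory changes.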
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that the no-filter recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what can be recommended for use in yes-no Bloom filters.
In the wider context of the study of lossy compression algorithms, our research is an example showing how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
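The two-filter query logic described above can be sketched as follows. This is a minimal illustration: the filter sizes, hash count and SHA-256-derived hash positions are assumptions, and the ILP/ADP-optimised selection of which false positives to store is not shown.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter with m bits and k hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-no Bloom filter sketch: the no-filter stores known false
    positives of the yes-filter, so a 'yes' answer can be vetoed."""
    def __init__(self, m, k):
        self.yes = BloomFilter(m, k)
        self.no = BloomFilter(m, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # In the paper, which false positives to store here is chosen
        # by optimization under a no-true-positives constraint.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no

f = YesNoBloomFilter(256, 3)
f.add("apple")
f.add_false_positive("pear")   # pretend "pear" was a known false positive
```

Note the trade-off the optimization addresses: every bit set in the no-filter can also veto a true positive by collision, which is why the selection of stored false positives matters.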
Abstract:
We test the ability of a two-dimensional flux model to simulate polynya events with narrow open-water zones by comparing model results to ice-thickness and ice-production estimates derived from thermal infrared Moderate Resolution Imaging Spectroradiometer (MODIS) observations in conjunction with an atmospheric dataset. Given a polynya boundary and an atmospheric dataset, the model correctly reproduces the shape of an 11 day long event, using only a few simple conservation laws. Ice production is slightly overestimated by the model, owing to an underestimated ice thickness. We achieved the best model results with the consolidation thickness parameterization developed by Biggs and others (2000). Observed regional discrepancies between model and satellite estimates might be a consequence of the missing representation of the dynamics of thin-ice thickening (e.g. rafting). We conclude that this simplified polynya model is a valuable tool for studying polynya dynamics and estimating associated fluxes of single polynya events.
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses oxygen isotope (δ¹⁸Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ¹⁸Ow and salinity throughout time. Here we use the isotope-enabled fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and time-independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ¹⁸Oc), which primarily reflect δ¹⁸Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ¹⁸Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ¹⁸Ow-salinity relationships. Spatial gradients are generally shallower but within ∼50% of the actual simulated LH to LGM and LH to LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both spatial and actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are found to be caused by changes in regional freshwater budgets, ocean circulation, and sea ice regimes. These can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian and Tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ¹⁸Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
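The residual method that these simulations test can be written schematically as follows. This is a simplified sketch of the standard approach, not the paper's exact formulation; the calibration f(T) and the slope/intercept symbols a and b are generic.

```latex
\[
\delta^{18}\mathrm{O}_c \;\approx\; \delta^{18}\mathrm{O}_w + f(T), \qquad
\delta^{18}\mathrm{O}_w^{\mathrm{local}} \;=\; \delta^{18}\mathrm{O}_w - \Delta\delta^{18}\mathrm{O}_{\mathrm{ice}}, \qquad
S \;\approx\; \frac{\delta^{18}\mathrm{O}_w^{\mathrm{local}} - b}{a}
\]
```

Here f(T) is a paleotemperature calibration, Δδ¹⁸O_ice the global ice-volume correction, and a and b the slope and intercept of the assumed δ¹⁸Ow-salinity relationship. It is precisely the assumed constancy of a and b across space and time that HadCM3 is used to test.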
Abstract:
Yellow passion fruit pulp is unstable, presenting phase separation that can be avoided by the addition of hydrocolloids. For this purpose, xanthan and guar gum [0.3, 0.7 and 1.0% (w/w)] were added to yellow passion fruit pulp and the changes in the dynamic and steady-shear rheological behavior were evaluated. Xanthan dispersions showed a more pronounced pseudoplasticity and the presence of a yield stress, which was not observed in the guar gum dispersions. Cross model fitting to the flow curves showed that the xanthan suspensions also had a higher zero-shear viscosity than the guar suspensions, and, for both gums, an increase in temperature led to lower values for this parameter. The gums showed different behavior as a function of temperature in the range of 5-35 °C. The activation energy of the apparent viscosity was dependent on the shear rate and gum concentration for guar, whereas for xanthan these values only varied with the concentration. The mechanical spectra were well described by the generalized Maxwell model, and the xanthan dispersions showed a more elastic character than the guar dispersions, with higher values for the relaxation time. Xanthan was characterized as a weak gel, while guar presented concentrated-solution behavior. The simultaneous evaluation of temperature and concentration showed a stronger influence of the polysaccharide concentration on the apparent viscosity and the G′ and G″ moduli than the variation in temperature.
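For reference, the two constitutive models named in this abstract have standard textbook forms (shown here without any of the study's fitted parameter values). The Cross model describes the shear-rate dependence of apparent viscosity:

```latex
\[
\eta(\dot{\gamma}) \;=\; \eta_\infty + \frac{\eta_0 - \eta_\infty}{1 + (\lambda\dot{\gamma})^{m}}
\]
```

and the generalized Maxwell model describes the mechanical spectra through a discrete set of moduli and relaxation times:

```latex
\[
G'(\omega) \;=\; \sum_{i=1}^{n} G_i\,\frac{(\lambda_i\omega)^2}{1+(\lambda_i\omega)^2},
\qquad
G''(\omega) \;=\; \sum_{i=1}^{n} G_i\,\frac{\lambda_i\omega}{1+(\lambda_i\omega)^2}
\]
```

Here η₀ is the zero-shear viscosity discussed in the abstract, η∞ the infinite-shear viscosity, λ a time constant, m a dimensionless exponent, and (Gᵢ, λᵢ) the moduli and relaxation times of the Maxwell elements.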