395 results for Generative Modelling
Abstract:
Stereoscopic white-light imaging of a large portion of the inner heliosphere has been used to track interplanetary coronal mass ejections. At large elongations from the Sun, the white-light brightness depends on both the local electron density and the efficiency of the Thomson-scattering process. To quantify the effects of the Thomson-scattering geometry, we study an interplanetary shock using forward magnetohydrodynamic simulation and synthetic white-light imaging. Identifiable as an inclined streak of enhanced brightness in a time–elongation map, the travelling shock can be readily imaged by an observer located within a wide range of longitudes in the ecliptic. Different parts of the shock front contribute to the imaged brightness pattern viewed by observers at different longitudes. Moreover, even for an observer located at a fixed longitude, a different part of the shock front will contribute to the imaged brightness at any given time. The observed brightness within each imaging pixel results from a weighted integral along its corresponding ray-path. It is possible to infer the longitudinal location of the shock from the brightness pattern in an optical sky map, based on the east–west asymmetry in its brightness and degree of polarisation. Therefore, measurement of the interplanetary polarised brightness could significantly reduce the ambiguity in performing three-dimensional reconstruction of local electron density from white-light imaging.
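The key statement here, that each pixel's brightness is a weighted integral of electron density along its ray-path, can be made concrete with a small numerical sketch. This is illustrative only, not the paper's MHD simulation: the density model (n_e proportional to r^-2) and the simplified point-source Thomson weight are assumptions.

    # Sketch: brightness in one pixel as a weighted line-of-sight integral.
    # Assumed density n_e ~ r^-2 and simplified Thomson weight (1 + cos^2 chi)/r^2.
    import numpy as np

    def pixel_brightness(elongation_deg, r_obs=1.0, s_max=4.0, n_steps=2000):
        """Integrate weighted electron density along a ray at a given elongation."""
        eps = np.radians(elongation_deg)
        s = np.linspace(1e-3, s_max, n_steps)                   # distance along the ray (AU)
        r = np.sqrt(r_obs**2 + s**2 - 2*r_obs*s*np.cos(eps))    # heliocentric distance
        n_e = r**-2.0                                           # assumed electron density
        cos_chi = (s - r_obs*np.cos(eps)) / r                   # scattering-angle cosine
        weight = (1.0 + cos_chi**2) / r**2                      # simplified Thomson weight
        return np.trapz(n_e * weight, s)

Because the weight varies along the ray, observers at different longitudes (hence different scattering geometries) pick up contributions from different parts of the same shock front, which is exactly the ambiguity the polarised-brightness measurement is intended to reduce.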
Abstract:
In situ, high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based remote sensing observations (radar and lidar) of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for the ice particle concentrations at the observed near-cloud-top temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop (HM) process is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel-model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was collision and coalescence of rain drops which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and rapidly form large rimed particles. The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
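The scale of the required multiplication, from about 0.01 L−1 of primary ice to around 100 L−1, can be illustrated with a back-of-envelope sketch. The yield of roughly 350 splinters per milligram of rime near −5 °C comes from the original Hallett-Mossop laboratory work; the bulk riming rate below is purely an assumed figure chosen to show orders of magnitude, not a value from this paper.

    # Back-of-envelope sketch of Hallett-Mossop multiplication (not ACPIM).
    SPLINTERS_PER_MG_RIME = 350.0    # laboratory HM yield near -5 degC
    rime_rate_mg_per_L_s = 1.0e-4    # assumed bulk riming rate per litre of cloud

    n_ice = 0.01     # initial primary ice concentration, L^-1 (as observed)
    target = 100.0   # observed peak ice concentration, L^-1
    t, dt = 0.0, 1.0 # seconds

    while n_ice < target:
        # each splinter is counted as one new ice crystal
        n_ice += SPLINTERS_PER_MG_RIME * rime_rate_mg_per_L_s * dt
        t += dt
    print(f"time to reach {target} L^-1: {t/60:.0f} min")  # ~48 min with these numbers

In reality the process feeds back on itself (new crystals are themselves collected and rimed), which is why the parcel modelling found the prior broadening of the drop size distribution so important.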
Abstract:
Export coefficient modelling was used to model the impact of agriculture on nitrogen and phosphorus loading on the surface waters of two contrasting agricultural catchments. The model was originally developed for the Windrush catchment, where the highly reactive Jurassic limestone aquifer underlying the catchment is well connected to the surface drainage network, allowing the system to be modelled using uniform export coefficients for each nutrient source in the catchment, regardless of proximity to the surface drainage network. In the Slapton catchment, the hydrological pathways are dominated by surface and lateral shallow subsurface flow, requiring modification of the export coefficient model to incorporate a distance-decay component in the export coefficients. The modified model was calibrated against observed total nitrogen and total phosphorus loads delivered to Slapton Ley from inflowing streams in its catchment. Sensitivity analysis was conducted to isolate the key controls on nutrient export in the modified model. The model was validated against long-term records of water quality, and was found to be accurate in its predictions and sensitive to both temporal and spatial changes in agricultural practice in the catchment. The model was then used to forecast the potential reduction in nutrient loading on Slapton Ley associated with a range of catchment management strategies. The best practicable environmental option (BPEO) was found to be spatial redistribution of high nutrient export risk sources to areas of the catchment with the greatest intrinsic nutrient retention capacity.
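A minimal sketch of the distance-decay idea (the exponential form and every number here are illustrative assumptions, not the calibrated Slapton coefficients):

    # Export coefficient model with a distance-decay term (illustrative sketch).
    import math

    def delivered_load(export_kg_per_yr, distance_m, decay_per_m=1e-3):
        """Load reaching the drainage network from one nutrient source.

        export_kg_per_yr : annual export at the source (kg yr^-1)
        distance_m       : flow-path distance to the nearest channel (m)
        decay_per_m      : assumed attenuation rate along the flow path
        """
        return export_kg_per_yr * math.exp(-decay_per_m * distance_m)

    # catchment load = sum over all sources; (export, distance) pairs are made up
    sources = [(120.0, 50.0), (300.0, 400.0), (80.0, 1200.0)]
    total_kg_per_yr = sum(delivered_load(e, d) for e, d in sources)

In the Windrush case described above the decay term is effectively absent (uniform coefficients), which corresponds to the limit decay_per_m -> 0.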
Abstract:
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a scale suitable for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading to within 2% of the observed total nitrogen load and 0.5% of the observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated to within 2% of the observed total nitrogen load and 2.5% of the observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
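For reference, the export coefficient model summarised in this abstract is conventionally written as

    L = \sum_{i=1}^{n} E_i [A_i (I_i)] + p

where L is the nutrient load delivered to the water body, E_i is the export coefficient for source i, A_i is the area under land use i (or the number of livestock or people), I_i is the nutrient input to that source, and p is the input from precipitation. This is the standard published form of the approach, restated here for clarity rather than quoted from the abstract.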
Abstract:
Steady state and dynamic models have been developed and applied to the River Kennet system. Annual nitrogen exports from the land surface to the river have been estimated based on land use from the 1930s and the 1990s. Long-term modelled trends indicate that there has been a large increase in nitrogen transport into the river system, driven by increased fertiliser application associated with increased cereal production, increased population and increased livestock levels. The dynamic model INCA (Integrated Nitrogen in Catchments) has been applied to simulate the day-to-day transport of N from the terrestrial ecosystem to the riverine environment. This process-based model generates spatial and temporal data and reproduces the observed in-stream concentrations. Applying the model to current land use and to 1930s land use indicates that there has been a major shift in the short-term dynamics since the 1930s, with increased river and groundwater concentrations caused by both non-point source pollution from agriculture and point source discharges.
Abstract:
Nitrogen and phosphorus losses from the catchment of Slapton Ley, a small coastal lake in SW England, were calculated using an adaptation of a model developed by Jorgensen (1980). A detailed survey of the catchment revealed that its land use is dominated by permanent and temporary grassland (38% and 32% of its total area, respectively), with the remainder made up of cereal and field vegetable cultivation and market gardening. Livestock numbers in the catchment comprise ca. 6600 head of cattle, 10,000 sheep, 590 pigs, 1700 poultry and 58 horses. The permanent human population of the area is ca. 2000, served by two small gravity-fed sewage treatment works (STWs). Inputs to, and losses from, farmland in the catchment were computed using Jorgensen’s model and coefficients derived from the data of Cooke (1976), Gostick (1982), Rast and Lee (1983) and Vollenweider (1968). Allowing for outputs from STWs, the total annual external load upon Slapton Ley is 160 t a-1 (35 kg ha-1 a-1) of N and 4.8 t a-1 (1.05 kg ha-1 a-1) of P. According to Vollenweider (1968, 1975), such loadings exceed the OECD permissible levels by a factor of ca. 50 in the case of N and ca. 5 in that of P. In order to reduce nutrient loads, attention would need to be paid to both STW and agricultural sources.
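As a quick consistency check on the quoted figures, both loads imply the same effective catchment area: 160,000 kg ÷ 35 kg ha-1 ≈ 4600 ha, and 4800 kg ÷ 1.05 kg ha-1 ≈ 4600 ha (roughly 46 km2), so the per-hectare and total loads are mutually consistent.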
Abstract:
The evidence provided by modelled assessments of future climate impact on flooding is fundamental to water resources and flood risk decision making. Impact models usually rely on climate projections from global and regional climate models (GCM/RCMs). However, challenges in representing precipitation events at catchment-scale resolution mean that decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs. Here, the impacts on projected high flows of differing ensemble approaches and of applying Model Output Statistics to RCM precipitation are evaluated in an assessment of climate change impact on flood hazard in the Upper Severn catchment in the UK. Various ensemble projections are used together with the HBV hydrological model with direct forcing, and are also compared to a response surface technique. We consider an ensemble of single-model RCM projections from the current UK Climate Projections (UKCP09); multi-model ensemble RCM projections from the European Union's FP6 ‘ENSEMBLES’ project; and a joint probability distribution of precipitation and temperature from a GCM-based perturbed physics ensemble. The ensemble distribution of results shows that flood hazard in the Upper Severn is likely to increase compared to present conditions, but the study highlights the differences between the results from different ensemble methods and the strong assumptions made in using Model Output Statistics to produce the estimates of future river discharge. The results underline the challenges in using the current generation of RCMs for local climate impact studies on flooding.
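'Model Output Statistics' covers a family of statistical post-processing methods for climate model output. As a purely illustrative example (the paper does not necessarily use this particular form), empirical quantile mapping of RCM precipitation onto observations looks like this:

    # Illustrative Model Output Statistics example: empirical quantile mapping.
    # Generic sketch only -- not the pre-processing used in the study.
    import numpy as np

    def quantile_map(rcm_hist, obs, rcm_future):
        """Map future RCM values through the historical RCM-to-obs quantile relation."""
        q = np.linspace(0.0, 1.0, 101)
        rcm_q = np.quantile(rcm_hist, q)   # historical model quantiles
        obs_q = np.quantile(obs, q)        # observed quantiles
        # Each future value is located in the historical model distribution,
        # then replaced by the observed value at the same quantile.
        return np.interp(rcm_future, rcm_q, obs_q)

The 'strong assumptions' the abstract refers to include exactly this kind of choice: any such transfer function is calibrated on the present climate and assumed to hold in the future.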
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change appear mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities, each with a computer capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limitations in computing power have severely constrained such investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.
Abstract:
In this article the author discusses participative modelling in system dynamics and the issues underlying it. The author notes that servo-mechanism theory lies at the heart of system dynamics, and argues that it is wrong to expect an optimal solution to be adopted by the empowered parties simply because it appears self-evidently true: analysis alone is not enough to encourage people to do things differently. Other models are also mentioned, including the simulation models used for developing strategy discussions.
Abstract:
Tetrafluoromethane, CF4, is a powerful greenhouse gas, and the possibility of storing it in microporous carbon has been widely studied. In this paper we show, for the first time, that the results of molecular simulations can be very helpful in the study of CF4 adsorption. Moreover, the experimental data fit the results obtained from the simulations. We explain the meaning of the empirical parameters of the supercritical Dubinin–Astakhov model proposed by Ozawa and, finally, of the parameter k in the empirical relation proposed by Amankwah and Schwarz.
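A sketch of the model being discussed may help. The Dubinin–Astakhov equation gives the uptake as W = W0 exp[-(A/E)^n] with adsorption potential A = RT ln(ps/p); above the critical temperature ps is undefined, so Ozawa proposed the quasi-saturation pressure ps = pc (T/Tc)^2, and Amankwah and Schwarz generalised the exponent 2 to an empirical parameter k. The parameter values below are illustrative, not fitted CF4 data from the paper.

    # Supercritical Dubinin-Astakhov isotherm (illustrative parameter values).
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def quasi_saturation_pressure(T, Tc, pc, k=2.0):
        """p_s = p_c (T/Tc)^k ; k = 2 recovers Ozawa's form."""
        return pc * (T / Tc) ** k

    def da_uptake(p, T, W0, E, n, Tc, pc, k=2.0):
        """W = W0 exp[-(A/E)^n], with A = R T ln(p_s / p)."""
        ps = quasi_saturation_pressure(T, Tc, pc, k)
        A = R * T * math.log(ps / p)          # adsorption potential, J mol^-1
        return W0 * math.exp(-((A / E) ** n))

    # CF4 critical constants: Tc ~ 227.5 K, pc ~ 3.74 MPa
    w = da_uptake(p=1.0e5, T=298.0, W0=0.5, E=8000.0, n=2.0,
                  Tc=227.5, pc=3.74e6)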
Abstract:
Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present NeMo, a platform for such simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
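The Izhikevich model NeMo implements is a two-variable system: dv/dt = 0.04v^2 + 5v + 140 - u + I and du/dt = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV. A plain NumPy sketch of one integration step (illustrative; NeMo's actual kernel is GPU code) is:

    # Izhikevich neuron step (Izhikevich 2003); plain NumPy sketch, not NeMo's kernel.
    import numpy as np

    def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
        """Advance membrane potential v (mV) and recovery variable u by dt ms."""
        v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        fired = v >= 30.0              # spike condition
        v = np.where(fired, c, v)      # reset after a spike
        u = np.where(fired, u + d, u)  # bump the recovery variable
        return v, u, fired

Note also that the quoted throughput is self-consistent: 40 000 neurons x 1000 synapses per neuron x a 10 Hz mean firing rate is 4 x 10^8 spike deliveries per second, i.e. the 400 million figure.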
Abstract:
High spatial resolution environmental data give us a better understanding of the environmental factors affecting plant distributions at fine spatial scales. However, large environmental datasets dramatically increase compute times and the size of the output species models, creating the need for an alternative computing solution. Cluster computing offers such a solution, by allowing both multiple plant species Environmental Niche Models (ENMs) and individual tiles of high spatial resolution models to be computed concurrently on the same compute cluster. We apply our methodology to a case study of 4,209 species of Mediterranean flora (around 17% of the species believed present in the biome). We demonstrate a 16-fold speed-up of ENM computation time when 16 CPUs were used on the compute cluster. Our custom Java ‘Merge’ and ‘Downsize’ programs reduce ENM output file sizes by 94%. The species ENMs achieve a median test AUC score of 0.98, aided by various filtering techniques applied to the species occurrence data. Finally, by calculating the percentage change of individual grid cell values, we map the projected percentages of plant species vulnerable to climate change in the Mediterranean region between 1950–2000 and 2020.
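The final step, mapping percentage change per grid cell, is straightforward; a sketch with stand-in arrays (real inputs would be the model's raster tiles) is:

    # Percentage change of each grid cell between two ENM projections (sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    baseline = rng.random((100, 100))  # stand-in for the 1950-2000 suitability tile
    future = rng.random((100, 100))    # stand-in for the 2020 projection tile

    with np.errstate(divide="ignore", invalid="ignore"):
        pct_change = 100.0 * (future - baseline) / baseline
    pct_change[~np.isfinite(pct_change)] = np.nan  # mask cells with zero baseline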
Abstract:
Long-term exposure of skylarks to a fictitious insecticide and of wood mice to a fictitious fungicide were modelled probabilistically in a Monte Carlo simulation. Within the same simulation, the consequences of exposure to pesticides for reproductive success were modelled using the toxicity-exposure-linking rules developed by R.S. Bennett et al. (2005) and the interspecies extrapolation factors suggested by R. Luttik et al. (2005). We built models to reflect a range of scenarios and were thereby able to show how pesticide exposure might alter the number of individuals engaged in any given phase of the breeding cycle at any given time, and to predict the number of new adults at the season’s end.
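The general shape of such a Monte Carlo exposure-effects simulation can be sketched as follows; the distribution, threshold and failure rule here are illustrative assumptions, not the study's toxicity-exposure-linking rules.

    # Generic Monte Carlo sketch of probabilistic exposure and breeding outcome.
    import numpy as np

    rng = np.random.default_rng(42)
    n_pairs, n_days = 1000, 90  # assumed breeding pairs and season length

    # assumed lognormal daily dose (mg per kg body weight per day)
    exposure = rng.lognormal(mean=0.0, sigma=1.0, size=(n_pairs, n_days))
    threshold = 20.0            # assumed effect threshold

    # a pair's breeding attempt fails on any day the dose exceeds the threshold
    failed = (exposure > threshold).any(axis=1)
    print(f"pairs completing the season: {np.count_nonzero(~failed)} of {n_pairs}")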