Abstract:
Data assimilation is predominantly used for state estimation: combining observational data with model predictions to produce an updated model state that most accurately approximates the true system state, whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. Even with perfect initial data, inaccurate model parameters will lead to the growth of prediction errors. To generate reliable forecasts we need good estimates of both the current system state and the model parameters. This paper presents research into data assimilation methods for morphodynamic model state and parameter estimation. First, we focus on state estimation and describe the implementation of a three-dimensional variational (3D-Var) data assimilation scheme in a simple 2D morphodynamic model of Morecambe Bay, UK. The assimilation of observations of bathymetry derived from SAR satellite imagery and a ship-borne survey is shown to significantly improve the predictive capability of the model over a two-year run. Here, the model parameters are set by manual calibration; this is laborious and is found to produce different parameter values depending on the type and coverage of the validation dataset. The second part of this paper considers the problem of model parameter estimation in more detail. We explain how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state. This approach removes inefficiencies associated with manual calibration and enables more effective use of observational data. We outline the development of a novel hybrid sequential 3D-Var data assimilation algorithm for joint state-parameter estimation and demonstrate its efficacy using an idealised 1D sediment transport model. The results of this study are extremely positive and suggest that there is great potential for the use of data assimilation-based state-parameter estimation in coastal morphodynamic modelling.
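For context, the state augmentation technique mentioned above has a standard generic form; the sketch below is that generic augmented 3D-Var formulation, not the paper's specific hybrid sequential algorithm.

```latex
% Generic augmented 3D-Var (illustrative; notation is ours, not the paper's).
% The model state x is extended with the uncertain parameters \theta:
\[
  z = \begin{pmatrix} x \\ \theta \end{pmatrix}, \qquad
  J(z) = \tfrac{1}{2}(z - z_b)^{\mathrm{T}} B^{-1} (z - z_b)
       + \tfrac{1}{2}\bigl(y - h(z)\bigr)^{\mathrm{T}} R^{-1} \bigl(y - h(z)\bigr)
\]
% Here z_b is the background (prior) augmented state, y the observations,
% h the observation operator, R the observation-error covariance, and B the
% augmented background-error covariance; its state-parameter cross-covariances
% are what allow observations of the state to update the parameters.
```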
Abstract:
Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2 to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method. Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence-to-structure alignment methods tested. In addition, using accurate global model quality scores for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
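As an aside, the clustering-based selection idea can be sketched generically: among a population of candidate models, pick the one most structurally similar, on average, to all the others. The Python below is a minimal illustration with a hypothetical similarity matrix, not the ModFOLDclust2 implementation.

```python
# Minimal sketch of consensus (clustering-based) model selection: pick the
# 3D model most similar on average to the rest of the population. Generic
# idea only, not ModFOLDclust2 itself; `similarity` stands in for a
# structural score such as TM-score.
import numpy as np

def select_by_consensus(similarity: np.ndarray) -> int:
    """similarity[i, j] = structural similarity of model i to model j."""
    n = similarity.shape[0]
    mask = ~np.eye(n, dtype=bool)               # exclude self-similarity
    mean_sim = (similarity * mask).sum(axis=1) / (n - 1)
    return int(np.argmax(mean_sim))             # index of consensus model

# Hypothetical 4-model population:
S = np.array([[1.0, 0.8, 0.7, 0.3],
              [0.8, 1.0, 0.75, 0.35],
              [0.7, 0.75, 1.0, 0.4],
              [0.3, 0.35, 0.4, 1.0]])
print(select_by_consensus(S))  # -> 1
```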
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (originally a linear regression technique) is discussed. In lasso models, less important inputs are identified and the corresponding coefficients are set to zero, providing an efficient and automatic model reduction procedure. For the same reason, lasso models are particularly appealing for diagnostic purposes.
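A logit-lasso of this kind can be sketched in a few lines; the snippet below uses scikit-learn's L1-penalised logistic regression on synthetic inputs (the data and settings here are hypothetical, not the paper's).

```python
# Minimal sketch of a logit-lasso probability forecast model (assumes
# scikit-learn; the inputs X stand in for ensemble-derived predictors).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # e.g. ensemble mean, spread, ...
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, p_true)                    # binary event occurrence

# The L1 penalty drives coefficients of unimportant inputs to exactly zero,
# giving the automatic model reduction described above.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
print(model.coef_)                             # most entries should be zero
```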
Abstract:
In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based remote sensing observations of a line of small cumulus clouds, using radar and lidar, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for ice particle concentrations at the observed, near cloud top, temperatures (−7.5 °C). The role of mineral dust particles, consistent with concentrations observed near the surface, acting as high temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided that the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop process (HM) is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the process of collision and coalescence of rain drops, which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and rapidly form large rimed particles. The broadening of the droplet size distribution by collision-coalescence was, therefore, a vital step in this process, as this was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these model simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold and this, combined with the assumption of continual replenishing of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
Abstract:
The Intergovernmental Panel on Climate Change Fourth Assessment Report, published in 2007, came to a more confident assessment of the causes of global temperature change than previous reports and concluded that ‘it is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent except Antarctica.’ Since then, warming over Antarctica has also been attributed to human influence, and further evidence has accumulated attributing a much wider range of climate changes to human activities. Such changes are broadly consistent with theoretical understanding, and with climate model simulations, of how the planet is expected to respond. This paper reviews this evidence from a regional perspective to reflect a growing interest in understanding the regional effects of climate change, which can differ markedly across the globe. We set out the methodological basis for detection and attribution and discuss the spatial scales on which it is possible to make robust attribution statements. We review the evidence showing significant human-induced changes in regional temperatures, and for the effects of external forcings on changes in the hydrological cycle, the cryosphere, circulation changes, oceanic changes, and changes in extremes. We then discuss future challenges for the science of attribution. To better assess the pace of change, and to understand more about the regional changes to which societies need to adapt, we will need to refine our understanding of the effects of external forcing and internal variability.
Abstract:
In 'Tales from Ovid' and 'War Music' respectively, Ted Hughes and Christopher Logue turned to classical epic both as source material and as a model for contemporary poetry. In this essay I consider the different ways in which they work with the original epic poems and how they rework them both textually and generically. In the process, I suggest, Hughes gives his readers an Ovid modelled on his own vatic conception of Homer, while Logue reworks Homer in a manner that is essentially Ovidian.
Abstract:
Peat soils consist of poorly decomposed plant detritus, preserved by low decay rates, and deep peat deposits are globally significant stores in the carbon cycle. High water tables and low soil temperatures are commonly held to be the primary reasons for low peat decay rates. However, recent studies suggest a thermodynamic limit to peat decay, whereby the slow turnover of peat soil pore water may lead to high concentrations of phenols and dissolved inorganic carbon. In sufficient concentrations, these chemicals may slow or even halt microbial respiration, providing a negative feedback to peat decay. We document the analysis of a simple, one-dimensional theoretical model of peatland pore water residence time distributions (RTDs). The model suggests that broader, thicker peatlands may be more resilient to rapid decay caused by climate change because of slow pore water turnover in deep layers. Even shallow peat deposits may also be resilient to rapid decay if rainfall rates are low. However, the model suggests that even thick peatlands may be vulnerable to rapid decay under prolonged high rainfall rates, which may act to flush pore water with fresh rainwater. We also used the model to illustrate a particular limitation of the diplotelmic (i.e., acrotelm and catotelm) model of peatland structure. Model peatlands of contrasting hydraulic structure exhibited identical water tables but contrasting RTDs. These scenarios would be treated identically by diplotelmic models, although the thermodynamic limit suggests contrasting decay regimes. We therefore conclude that the diplotelmic model should be discarded in favor of model schemes that consider continuous variation in peat properties and processes.
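As a deliberately crude illustration of why peat thickness and rainfall control pore water turnover (a plug-flow toy calculation under our own assumptions, not the authors' RTD model):

```python
# Toy calculation only (not the authors' model): under steady vertical
# plug flow, the time for rainwater to reach depth z in a peat column is
# t(z) = phi * z / R, so residence time grows with depth and shrinks with
# rainfall rate -- the qualitative behaviour described above.
porosity = 0.9                                   # assumed value for peat
for rainfall_m_per_yr in (0.3, 1.5):             # low vs high net rainfall
    for depth_m in (1.0, 5.0):                   # shallow vs deep layer
        t_years = porosity * depth_m / rainfall_m_per_yr
        print(f"R={rainfall_m_per_yr} m/yr, z={depth_m} m: {t_years:.1f} yr")
```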
Abstract:
In mid-March 2005, a rare lower stratospheric polar vortex filamentation event was observed simultaneously by the JPL lidar at Mauna Loa Observatory, Hawaii, and by the EOS MLS instrument onboard the Aura satellite. The event coincided with the beginning of the spring 2005 final warming. On 16 March, the filament was observed by lidar around 0600 UT between 415 K and 455 K, and by MLS six hours earlier. It was seen on both the lidar and MLS profiles as a layer of enhanced ozone, peaking at 1.7 ppmv in a region where the climatological values are usually around or below 1 ppmv. Ozone profiles measured by lidar and MLS were compared to profiles from the chemical transport model MIMOSA-CHIM. The agreement between lidar, MLS, and the model is excellent considering the differences in sampling techniques. MLS was also able to identify the filament at another location north of Hawaii.
Abstract:
This study puts forward a method for modelling and simulating the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, message objects were defined, and operating mechanisms for autonomous activity and coordination were specified. In addition, an ontology library and a norm library were introduced, using semiotic methods and theory, to extend the system modelling approach. Swarm was used to develop the multi-agent simulation system, which can help hospitals improve their organization and management, optimize working procedures, improve the quality of medical care, and reduce costs.
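The message-driven agent design described can be illustrated generically; the sketch below is plain Python with hypothetical agent roles, standing in for the Swarm-based system rather than reproducing it.

```python
# Illustrative sketch only: a message-driven multi-agent loop in plain
# Python. All agent roles and message names here are hypothetical.
from collections import deque
from dataclasses import dataclass

@dataclass
class Message:                       # the "message object" of the design
    sender: str
    receiver: str
    content: str

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: deque = deque()

    def step(self, post) -> None:
        # Autonomous activity: process one queued message per tick.
        if self.inbox:
            self.handle(self.inbox.popleft(), post)

    def handle(self, msg: Message, post) -> None:
        raise NotImplementedError

class Reception(Agent):
    def handle(self, msg: Message, post) -> None:
        # Coordination: route an arriving patient to the doctor agent.
        post(Message(self.name, "doctor", f"triaged {msg.content}"))

class Doctor(Agent):
    def handle(self, msg: Message, post) -> None:
        print(f"{self.name} treats {msg.content}")

agents = {"reception": Reception("reception"), "doctor": Doctor("doctor")}
pending = deque([Message("env", "reception", "patient-1")])
for _ in range(3):                   # simple round-based scheduler
    while pending:                   # deliver all posted messages
        m = pending.popleft()
        agents[m.receiver].inbox.append(m)
    for agent in agents.values():    # then let every agent act once
        agent.step(pending.append)
```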
Abstract:
Global warming is expected to enhance fluxes of fresh water between the surface and atmosphere, causing wet regions to become wetter and dry regions drier, with serious implications for water resource management. Defining the wet and dry regions as the upper 30% and lower 70% of the precipitation totals across the tropics (30° S–30° N) each month, we combine observations and climate model simulations to understand changes in the wet and dry regions over the period 1850–2100. Observed decreases in precipitation over dry tropical land (1950–2010) are also simulated by coupled atmosphere–ocean climate models (−0.3%/decade), with trends projected to continue into the 21st century. Discrepancies between observations and simulations over wet land regions since 1950 exist, relating to decadal fluctuations in the El Niño–Southern Oscillation, the timing of which is not represented by the coupled simulations. When atmosphere-only simulations are instead driven by observed sea surface temperatures, they are able to adequately represent this variability over land. Global distributions of precipitation trends are dominated by spatial changes in atmospheric circulation. However, the tendency for already wet regions to become wetter (precipitation increases with warming by 3% K−1 over wet tropical oceans) and the driest regions drier (precipitation decreases of −2% K−1 over dry tropical land regions) emerges over the 21st century in response to the substantial surface warming.
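The monthly wet/dry partition is simple to state in code; the sketch below applies the 70th-percentile split to synthetic data (a real analysis would use area-weighted, appropriately masked observations).

```python
# Sketch of the wet/dry partition described above, on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
# month x grid-cell precipitation totals over the tropics (hypothetical)
precip = rng.gamma(shape=2.0, scale=2.0, size=(12, 1000))

for month, p in enumerate(precip):
    threshold = np.percentile(p, 70)     # upper 30% of totals = "wet"
    wet = p[p >= threshold].mean()
    dry = p[p < threshold].mean()
    print(f"month {month:2d}: wet mean {wet:.2f}, dry mean {dry:.2f}")
```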
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of the processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical both for the choice of parametrizations and for the way in which a scheme should be evaluated. A systematic and objective model response analysis procedure (the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated with regard to its ability to simulate observed surface energy fluxes and its sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties, in contrast to a low response to road-related ones. Problems in the partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided.
Abstract:
We use a state-of-the-art ocean general circulation and biogeochemistry model to examine the impact of changes in ocean circulation and biogeochemistry in governing the change in ocean carbon-13 and atmospheric CO2 at the Last Glacial Maximum (LGM). We examine five different realisations of the ocean's overturning circulation produced by a fully coupled atmosphere-ocean model under LGM forcing, together with suggested changes in the atmospheric deposition of iron and in phytoplankton physiology at the LGM. Measured changes in carbon-13 and carbon-14, as well as a qualitative reconstruction of the change in ocean carbon export, are used to evaluate the results. Overall, we find that while a reduction in ocean ventilation at the LGM is necessary to reproduce the carbon-13 and carbon-14 observations, this circulation results in a low net sink for atmospheric CO2. In contrast, while biogeochemical processes contribute little to the carbon isotopes, we propose that most of the change in atmospheric CO2 was due to these biogeochemical factors. However, the lesser role for circulation means that when all plausible factors are accounted for, most of the necessary CO2 change remains to be explained. This presents a serious challenge to our understanding of the mechanisms behind changes in the global carbon cycle during the geologic past.
Abstract:
Sea ice friction models are necessary to predict the nature of interactions between sea ice floes. These interactions are of interest on a range of scales, for example, to predict loads on engineering structures in icy waters or to understand the basin-scale motion of sea ice. Many models use Amontons' friction law because of its simplicity. More advanced models allow for hydrodynamic lubrication and refreezing of asperities; however, modeling these processes leads to greatly increased complexity. In this paper we propose, by analogy with rock physics, that a rate- and state-dependent friction law allows us to incorporate memory (and thus the effects of lubrication and bonding) into ice friction models without a great increase in complexity. We support this proposal with experimental data on both the laboratory (∼0.1 m) and ice tank (∼1 m) scales. These experiments show that the effects of static contact under normal load can be incorporated into a friction model. We find the parameters for a first-order rate and state model to be A = 0.310, B = 0.382, and μ0 = 0.872. Such a model then allows us to make predictions about the nature of memory effects in moving ice-ice contacts.
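For reference, one common first-order rate- and state-dependent formulation (the Dieterich aging form) is shown below with the fitted values quoted above; the paper's exact variant may differ.

```latex
% A common first-order rate-and-state friction law (Dieterich aging form);
% illustrative only -- the paper's exact formulation may differ.
\[
  \mu = \mu_0 + A \ln\!\left(\frac{V}{V_0}\right)
            + B \ln\!\left(\frac{V_0\,\theta}{D_c}\right), \qquad
  \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
\]
% with sliding velocity V, reference velocity V_0, state variable \theta
% (an effective contact age), and critical slip distance D_c. At steady
% state, \mu_{ss} = \mu_0 + (A - B)\ln(V/V_0); with the fitted values
% A = 0.310, B = 0.382, \mu_0 = 0.872, B > A implies velocity weakening,
% i.e. steady-state friction decreases with sliding speed.
```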
Abstract:
Medium-range flood forecasting activities, driven by various meteorological forecasts ranging from high-resolution deterministic forecasts to low spatial resolution ensemble prediction systems, share a major challenge: the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied to circumvent the problem of autocorrelation dominating river discharge time series and to create a benchmark model that enables decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
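A cost-loss measure of this kind can be sketched from a 2×2 contingency table; the function below implements the standard relative economic value formulation under assumptions of our own, not necessarily the paper's exact measure.

```python
# Minimal sketch of a cost-loss value score from a 2x2 contingency table
# (standard relative-economic-value formulation; notation is ours).
def relative_value(hits, misses, false_alarms, correct_neg, cost_loss):
    """cost_loss is the user's cost/loss ratio, assumed in (0, 1)."""
    n = hits + misses + false_alarms + correct_neg
    h, m, f = hits / n, misses / n, false_alarms / n
    s = h + m                                  # event base rate
    e_forecast = (h + f) * cost_loss + m       # expense using the forecast
    e_climate = min(cost_loss, s)              # best of always/never protect
    e_perfect = s * cost_loss                  # protect only on events
    # 1 = as good as a perfect forecast, 0 = no better than climatology
    return (e_climate - e_forecast) / (e_climate - e_perfect)

print(relative_value(30, 10, 20, 140, cost_loss=0.3))
```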
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption led to a wealth of observations of the ash cloud being obtained, which can be used to assess the modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study, ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of ash emitted in the eruption that is transported over long distances, relative to the total emission of tephra. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the layers in the simulated ash clouds that are identified with observed ash layers are about twice the depth of the observed layers. The structure of the simulated ash clouds was sensitive to the profile of ash emissions that was assumed. In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gives better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.