933 results for VECTOR SPACE MODEL
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
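The tuning strategy, varying (Ca, P*, α) and retaining the subset of parameter space that agrees simultaneously with all three observational datasets, can be sketched as a grid search. The toy model, pseudo-observations, grids, and acceptance threshold below are invented stand-ins for CICE and the satellite data, purely for illustration:

```python
import itertools

# Toy stand-in for the tuned sea ice model: maps the three parameters
# (Ca: air-ice drag, P_star: ice strength, alpha: cold bare-ice albedo)
# to simulated basin-mean thickness (m), extent (10^6 km^2), and drift
# speed (cm/s). The functional forms are invented for illustration.
def toy_model(Ca, P_star, alpha):
    thickness = 0.5 + 3.0 * alpha - 0.01 * P_star + 100.0 * Ca
    extent = 8.0 + 6.0 * alpha - 500.0 * Ca
    speed = 5.0 + 3000.0 * Ca - 0.05 * P_star
    return thickness, extent, speed

# Pseudo-observations with assumed uncertainties (value, sigma).
obs = {"thickness": (2.7, 0.3), "extent": (12.0, 0.5), "speed": (7.0, 1.0)}

def misfit(params):
    """Sum of squared, uncertainty-normalised departures from all
    three observational datasets."""
    sim = dict(zip(obs, toy_model(*params)))
    return sum(((sim[k] - v) / s) ** 2 for k, (v, s) in obs.items())

grid = itertools.product(
    [0.0005 * i for i in range(1, 7)],       # Ca
    [10.0 * i for i in range(1, 6)],         # P_star
    [0.55 + 0.05 * i for i in range(0, 6)],  # alpha
)
# Retain the subset of parameter space consistent with all three datasets.
accepted = [p for p in grid if misfit(p) < 3.0]
best = min(accepted, key=misfit) if accepted else None
```

The point of the paper is that with only two of the three datasets the accepted subset stays degenerate; thickness observations are what collapse it to a unique region.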
Abstract:
Initial results are presented from a middle atmosphere extension to a version of the European Centre for Medium-Range Weather Forecasts tropospheric model. The extended version of the model has been developed as part of the UK Universities Global Atmospheric Modelling Project and extends from the ground to approximately 90 km. A comprehensive solar radiation scheme is included which uses monthly averaged climatological ozone values. A linearised infrared cooling scheme is employed. The basic climatology of the model is described; the parametrization of drag due to orographically forced gravity waves is shown to have a dramatic effect on the simulations of the winter hemisphere.
Abstract:
A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
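The combination step, blending independently computed stratiform and convective diagnostics with weights built from dynamical and microphysical time scales, can be illustrated with a minimal sketch. The weight below (a simple ratio of hypothetical time scales) and the variable names are invented placeholders, not the paper's actual weighting functions:

```python
def blended_precip(P_strat, P_conv, tau_cloud, tau_overturn):
    """Blend purely stratiform and purely convective area-averaged
    precipitation rates. The weight w_conv tends to 1 when convective
    overturning is fast relative to the (hypothetical) cloud time
    scale, i.e. when embedded convection dominates."""
    ratio = tau_cloud / tau_overturn
    w_conv = ratio / (1.0 + ratio)
    return (1.0 - w_conv) * P_strat + w_conv * P_conv
```

With equal time scales this returns the plain average; as either time scale dominates, the blend tends to the corresponding pure-regime diagnostic.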
Abstract:
Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
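The underlying idea, that successive releases carry progressively more information about post-revision values, can be illustrated with a toy two-maturity regression. The simulated releases and noise levels below are invented; a full vintage-based VAR would model the whole vector of maturities jointly rather than one release at a time:

```python
import random

random.seed(1)

# Simulated vintages: the first release r1 of each period's figure is
# the true post-revision value plus measurement noise; the second
# release r2 removes part of that noise. All numbers are invented.
truth = [random.gauss(2.0, 1.0) for _ in range(200)]
r1 = [x + random.gauss(0.3, 0.5) for x in truth]
r2 = [x + random.gauss(0.1, 0.25) for x in truth]

def ols(x, y):
    """One-regressor OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Estimate the post-revision value from each early release separately.
a1, b1 = ols(r1, truth)
a2, b2 = ols(r2, truth)

def mse(a, b, x):
    return sum((a + b * xi - t) ** 2
               for xi, t in zip(x, truth)) / len(truth)

# The later release should nowcast post-revision values more accurately.
err1, err2 = mse(a1, b1, r1), mse(a2, b2, r2)
```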
Abstract:
Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
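The claim that the imperfect model's orbit differs from the system's orbit purely through model error can be reproduced in a few lines: integrate two copies of Lorenz '63 from the identical initial condition, perturbing only the Rayleigh parameter r in one of them. The specific parameter values, step size, and integration length below are illustrative choices:

```python
def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Lorenz '63 tendencies."""
    x, y, z = state
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(state, dt, **params):
    """One classical Runge-Kutta step."""
    f = lambda s: lorenz(s, **params)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * p + 2 * q + d)
                 for s, a, p, q, d in zip(state, k1, k2, k3, k4))

# Identical initial conditions: any later separation is pure model error,
# not sensitivity to initial conditions.
state_true = state_model = (1.0, 1.0, 1.0)
dt, n = 0.01, 1000
for _ in range(n):
    state_true = rk4_step(state_true, dt)            # "system": r = 28
    state_model = rk4_step(state_model, dt, r=27.0)  # imperfect model
error = sum((a - b) ** 2
            for a, b in zip(state_true, state_model)) ** 0.5
```

After ten time units the separation is expected to be of the order of the attractor size, despite the two orbits having started at exactly the same point.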
Abstract:
Sea surface temperature (SST) datasets have been generated from satellite observations for the period 1991–2010, intended for use in climate science applications. Attributes of the datasets specifically relevant to climate applications are: first, independence from in situ observations; second, effort to ensure homogeneity and stability through the time series; third, context-specific uncertainty estimates attached to each SST value; and, fourth, provision of estimates of both skin SST (the fundamental measurement, relevant to air-sea fluxes) and SST at standard depth and local time (partly model mediated, enabling comparison with historical in situ datasets). These attributes in part reflect requirements solicited from climate data users prior to and during the project. Datasets consisting of SSTs on satellite swaths are derived from the Along-Track Scanning Radiometers (ATSRs) and Advanced Very High Resolution Radiometers (AVHRRs). These are then used as sole SST inputs to a daily, spatially complete, analysis SST product, with a latitude-longitude resolution of 0.05° and good discrimination of ocean surface thermal features. A product user guide is available, linking to reports describing the datasets' algorithmic basis, validation results, format, uncertainty information and experimental use in trial climate applications. Future versions of the datasets will span at least 1982–2015, better addressing the need in many climate applications for stable records of global SST that are at least 30 years in length.
Abstract:
Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help to overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied on measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the parameters of gusts at stations where only wind speed data is available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites with a lack of observations. The second objective is to determine return periods much longer than the nominal length of the original time series by considering extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
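The extreme-value step, turning a fitted gust distribution into return levels and return periods, can be sketched with a Gumbel fit by the method of moments. The annual-maximum sample below is synthetic; the study's actual distributions, stations, and fitting procedure differ:

```python
import math
import random

random.seed(7)

# Invented annual-maximum gust speeds (m/s) at a single station; in the
# study these would come from observed or synthetic gust series.
annual_max = [random.gauss(28.0, 4.0) for _ in range(30)]

# Gumbel fit by the method of moments:
#   scale = sqrt(6) * std / pi,  loc = mean - 0.5772 * scale
n = len(annual_max)
mean = sum(annual_max) / n
std = (sum((x - mean) ** 2 for x in annual_max) / (n - 1)) ** 0.5
scale = math.sqrt(6.0) * std / math.pi
loc = mean - 0.5772 * scale

def return_period(gust):
    """Average return period (years) of exceeding `gust` under the fit."""
    p_exceed = 1.0 - math.exp(-math.exp(-(gust - loc) / scale))
    return 1.0 / p_exceed

def return_level(T):
    """Gust speed exceeded on average once every T years."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

fifty_year_gust = return_level(50.0)
```

The fitted distribution is what lets return periods far beyond the 10-yr record length be estimated, which is the paper's second objective.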
Abstract:
An apoptotic process is studied using models described by a system of differential equations derived from reaction-kinetics information. The mathematical model is re-formulated in a state-space robust control theory framework where parametric and dynamic uncertainty can be modelled to account for variations naturally occurring in biological processes. We propose to handle the nonlinearities using neural networks.
Abstract:
Building Information Modeling (BIM) is the process of structuring, capturing, creating, and managing a digital representation of physical and/or functional characteristics of a built space [1]. Current BIM has limited ability to represent dynamic semantics and social information, and often fails to consider building activity, behavior, and context, thus limiting integration with intelligent, built-environment management systems. Research, such as the development of Semantic Exchange Modules and the linking of IFC with semantic web structures, demonstrates the need for building models to better support complex semantic functionality. To implement model semantics effectively, however, it is critical that model designers consider semantic information constructs. This paper discusses semantic models in relation to determining the most suitable information structure. We demonstrate how semantic rigidity can lead to significant long-term problems that can contribute to model failure. A sufficiently detailed feasibility study is advised to maximize the value obtained from the semantic model. In addition, we propose a set of questions to be used during a model's feasibility study, and guidelines to help assess the most suitable method for managing semantics in a built environment.
Abstract:
Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme.
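The downscaling recipe, smoothing observations to mimic coarse solar wind model output and then reinjecting small-scale noise with observed statistics, can be sketched as follows. The synthetic series, filter width, and residual-resampling noise model are simplified illustrations; the paper parameterizes the noise from observed PDFs and spectral characteristics:

```python
import math
import random

random.seed(0)

# Hypothetical hourly solar wind speed series (km/s): a slow large-scale
# variation plus stochastic small-scale "noise".
obs = [420.0 + 60.0 * math.sin(2.0 * math.pi * t / 120.0)
       + random.gauss(0.0, 25.0) for t in range(240)]

def smooth(series, window=8):
    """Centred moving average: stands in for the 8 h filter used to
    approximate coarse solar wind model output from observations."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

model_like = smooth(obs)  # large-scale structure only

# Downscaling: add small-scale structure back by resampling the observed
# residuals, producing an ensemble of plausible high-frequency inputs
# for a magnetospheric model.
residuals = [o - m for o, m in zip(obs, model_like)]

def downscale(series):
    return [m + random.choice(residuals) for m in series]

ensemble = [downscale(model_like) for _ in range(10)]
```

Each ensemble member restores variance that the smoothing removed, which is what allows the downscaled inputs to add value to the magnetospheric forecast and to quantify its uncertainty.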
Abstract:
Traditionally, the cusp has been described in terms of a time-stationary feature of the magnetosphere which allows access of magnetosheath-like plasma to low altitudes. Statistical surveys of data from low-altitude spacecraft have shown the average characteristics and position of the cusp. Recently, however, it has been suggested that the ionospheric footprint of flux transfer events (FTEs) may be identified as variations of the “cusp” on timescales of a few minutes. In this model, the cusp can vary in form between a steady-state feature in one limit and a series of discrete ionospheric FTE signatures in the other limit. If this time-dependent cusp scenario is correct, then the signatures of the transient reconnection events must be able, on average, to reproduce the statistical cusp occurrence previously determined from the satellite observations. In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of FTE signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.
Abstract:
The impact on the dynamics of the stratosphere of three approaches to geoengineering by solar radiation management is investigated using idealized simulations of a global climate model. The approaches are geoengineering with sulfate aerosols, titania aerosols, and reduction in total solar irradiance (representing mirrors placed in space). If it were possible to use stratospheric aerosols to counterbalance the surface warming produced by a quadrupling of atmospheric carbon dioxide concentrations, tropical lower stratospheric radiative heating would drive a thermal wind response which would intensify the stratospheric polar vortices. In the Northern Hemisphere this intensification results in strong dynamical cooling of the polar stratosphere. Northern Hemisphere stratospheric sudden warming events become rare (one and two in 65 years for sulfate and titania, respectively). The intensification of the polar vortices results in a poleward shift of the tropospheric midlatitude jets in winter. The aerosol radiative heating enhances the tropical upwelling in the lower stratosphere, influencing the strength of the Brewer-Dobson circulation. In contrast, solar dimming does not produce heating of the tropical lower stratosphere, and so there is little intensification of the polar vortex and no enhanced tropical upwelling. The dynamical response to titania aerosol is qualitatively similar to the response to sulfate.
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions will preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
Abstract:
The nuclear time-dependent Hartree-Fock model formulated in three-dimensional space, based on the full standard Skyrme energy density functional complemented with the tensor force, is presented. Full self-consistency is achieved by the model. The application to the isovector giant dipole resonance is discussed in the linear limit, ranging from spherical nuclei (16O and 120Sn) to systems displaying axial or triaxial deformation (24Mg, 28Si, 178Os, 190W and 238U). Particular attention is paid to the spin-dependent terms from the central sector of the functional, recently included together with the tensor. They turn out to be capable of producing a qualitative change in the strength distribution in this channel. The effect on the deformation properties is also discussed. The quantitative effects on the linear response are small and, overall, the giant dipole energy remains unaffected. Calculations are compared to predictions from the (quasi)-particle random-phase approximation and to experimental data where available, finding good agreement.
Abstract:
Well-resolved air–sea interactions are simulated in a new ocean mixed-layer, coupled configuration of the Met Office Unified Model (MetUM-GOML), comprising the MetUM coupled to the Multi-Column K Profile Parameterization ocean (MC-KPP). This is the first globally coupled system which provides a vertically resolved, high near-surface resolution ocean at comparable computational cost to running in atmosphere-only mode. As well as being computationally inexpensive, this modelling framework is adaptable – the independent MC-KPP columns can be applied selectively in space and time – and controllable – using temperature and salinity corrections, the model can be constrained to any ocean state. The framework provides a powerful research tool for process-based studies of the impact of air–sea interactions in the global climate system. MetUM simulations have been performed which separate the impact of introducing interannual variability in sea surface temperatures (SSTs) from the impact of having atmosphere–ocean feedbacks. The representation of key aspects of tropical and extratropical variability is used to assess the performance of these simulations. Coupling the MetUM to MC-KPP is shown, for example, to reduce tropical precipitation biases, improve the propagation of, and spectral power associated with, the Madden–Julian Oscillation and produce closer-to-observed patterns of springtime blocking activity over the Euro-Atlantic region.