920 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance: 50.00%

Abstract:

We compare the quasi-equilibrium heat balances, as well as their responses to a 4×CO2 perturbation, among three global climate models with the aim of identifying and explaining inter-model differences in ocean heat uptake (OHU) processes. We find that, in quasi-equilibrium, convective and mixed-layer processes, as well as eddy-related processes, cool the subsurface ocean. The cooling is balanced by warming caused by advective and diapycnally diffusive processes. We also find that in the CO2-perturbed climates the largest contribution to OHU comes from changes in vertical mixing processes and the mean circulation, particularly in the extra-tropics, caused both by changes in wind forcing and by changes in high-latitude buoyancy forcing. There is substantial warming in the tropics, a significant part of which occurs because of changes in horizontal advection in the extra-tropics. Diapycnal diffusion makes only a weak contribution to the OHU, mainly in the tropics, due to increased stratification. There are important qualitative differences among the models in the contributions of eddy-induced advection and isopycnal diffusion to the OHU. The former is related to the different values of the coefficients used in the corresponding scheme; the latter to the different tapering formulations of the isopycnal diffusion scheme. These differences affect the OHU in the deep ocean, which is substantial in two of the models, the dominant region of deep warming being the Southern Ocean. However, most of the OHU takes place above 2000 m, and the three models are quantitatively similar in their global OHU efficiency and its breakdown among processes and as a function of latitude.

Relevance: 50.00%

Abstract:

Two methods are developed to estimate net surface energy fluxes based upon satellite-based reconstructions of radiative fluxes at the top of the atmosphere and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis. Method 1 applies the mass-adjusted energy divergence from ERA-Interim, while method 2 estimates the energy divergence from the net energy difference between the top of the atmosphere and the surface in ERA-Interim. To optimise the surface flux and its variability over the ocean, the divergences over land are constrained to match the monthly area-mean surface net energy flux variability derived from a simple relationship between the surface net energy flux and the surface temperature change. The energy divergences over the oceans are then adjusted to remove an unphysical residual global-mean atmospheric energy divergence. The estimated net surface energy fluxes are compared with other data sets from reanalyses and atmospheric model simulations. The spatial correlation coefficients of multi-annual means between the estimates made here and other data sets are all around 0.9. There is good agreement in area-mean anomaly variability over the global ocean, but discrepancies in the trend over the eastern Pacific are apparent.
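
The budget logic above fits in a few lines. Below is a minimal sketch, not the paper's actual processing chain: it assumes monthly-mean gridded fields in W m^-2 and removes the residual global-mean divergence with a uniform offset over ocean cells, so the correction conserves energy; all array names are hypothetical.

    import numpy as np

    def net_surface_flux(r_toa, dEdt, div_ae, ocean_mask, w):
        """r_toa, dEdt, div_ae: (lat, lon) fields in W m^-2; w: grid-cell area weights."""
        resid = np.sum(div_ae * w) / np.sum(w)              # residual global-mean divergence
        offset = resid * np.sum(w) / np.sum(w * ocean_mask) # spread it over ocean cells only
        div_adj = div_ae - offset * ocean_mask              # adjusted divergence, zero global mean
        return r_toa - dEdt - div_adj                       # what is left crosses the surface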

Relevance: 50.00%

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data; it can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
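
The role of the Hessian is easy to see in a toy setting. The sketch below, with purely illustrative sizes and matrices, builds the sc4DVAR Hessian for a linear model and observation operator (so a single Gauss-Newton step is exact): the inverse background covariance supplies one term, each observation time adds a curvature term through the tangent-linear model, and np.linalg.cond then gives the condition number discussed above.

    import numpy as np

    n, n_obs, n_steps = 10, 5, 4
    rng = np.random.default_rng(0)
    M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy linear dynamics
    H = np.eye(n)[:n_obs]                                # observe the first 5 components
    Binv = np.linalg.inv(0.5 * np.eye(n))                # background error term
    Rinv = np.linalg.inv(0.1 * np.eye(n_obs))            # observation error term

    hess = Binv.copy()
    Mk = np.eye(n)
    for _ in range(n_steps):
        hess += Mk.T @ H.T @ Rinv @ H @ Mk               # curvature from each obs time
        Mk = M @ Mk                                      # propagate the tangent-linear model

    print("condition number of the 4DVAR Hessian:", np.linalg.cond(hess))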

Relevance: 50.00%

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that, for Gaussian error statistics, the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
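
A minimal sketch of one such ensemble update follows: a stochastic ensemble-smoother analysis built entirely from sample statistics, so no adjoint model appears, and the posterior spread doubles as an error estimate. This is a generic illustration under toy assumptions, not the paper's exact algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_ens, n_obs = 20, 50, 10
    X = rng.standard_normal((n_state, n_ens))            # prior ensemble over the window
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, 2)] = 1.0  # observe every other variable
    R = 0.2 * np.eye(n_obs)
    y = rng.standard_normal(n_obs)                       # toy observations

    A = X - X.mean(axis=1, keepdims=True)                # ensemble anomalies
    S = H @ A                                            # anomalies in observation space
    C = S @ S.T / (n_ens - 1) + R                        # innovation covariance
    K = (A @ S.T / (n_ens - 1)) @ np.linalg.inv(C)       # gain from sample covariances
    D = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
    Xa = X + K @ (D - H @ X)                             # smoother analysis ensemble
    spread = Xa.std(axis=1, ddof=1)                      # error estimate at no extra cost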

Relevance: 50.00%

Abstract:

With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of these observations, such as Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. Improving the quantity of observations used, and the impact that they have on the forecast, will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. The results show that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase with observation height. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They depend both on the height of the observation and on the distance of the observation from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part a result of using a simplified observation operator.
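
The diagnostic referred to here is, in essence, a cross-statistic of the two residual archives. A minimal sketch, assuming the residuals are already paired per observation location, is given below; the symmetrisation and the conversion to standard deviations and correlations are illustrative conveniences, not operational code.

    import numpy as np

    def estimate_obs_error_cov(d_ob, d_oa):
        """d_ob, d_oa: (n_samples, n_obs) arrays of observation-minus-background
        and observation-minus-analysis residuals."""
        d_ob = d_ob - d_ob.mean(axis=0)              # remove sample biases
        d_oa = d_oa - d_oa.mean(axis=0)
        R_hat = d_oa.T @ d_ob / d_ob.shape[0]        # E[d_oa d_ob^T] estimates R
        R_hat = 0.5 * (R_hat + R_hat.T)              # symmetrise the sample estimate
        sd = np.sqrt(np.diag(R_hat))                 # error standard deviations
        corr = R_hat / np.outer(sd, sd)              # e.g. along-beam correlations
        return R_hat, sd, corr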

Relevance: 50.00%

Abstract:

Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding depends strongly on the modelling approach and the accuracy of the topographic data. Here, the areas at risk of sea-water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK Climate Projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very-high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and the DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root-mean-square error differences observed between the two data types, which may be attributed to processing levels. Flood models based on SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and of the uncertainties in DEM-based bathtub-type flood inundation modelling for London boroughs.
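
A bathtub model of the kind used here is conceptually simple: cells whose elevation lies below the (tidally adjusted) water level are flooded, usually with the extra condition that they be hydraulically connected to the sea. The sketch below is a generic illustration under that assumption, with hypothetical inputs, not the study's GIS workflow.

    import numpy as np
    from scipy import ndimage

    def bathtub_inundation(dem, water_level, sea_mask):
        """dem: (ny, nx) elevations in metres; sea_mask: True over open sea."""
        below = dem <= water_level                   # cells below the water surface
        labels, _ = ndimage.label(below)             # group contiguous low-lying cells
        sea_ids = np.unique(labels[sea_mask & below])
        sea_ids = sea_ids[sea_ids > 0]               # drop the background label
        return np.isin(labels, sea_ids)              # flooded = low-lying AND connected

    dem = np.random.default_rng(2).uniform(0.0, 10.0, (100, 100))
    sea = np.zeros(dem.shape, dtype=bool)
    sea[:, 0] = True                                 # hypothetical coastline on the left
    print("inundated fraction:", bathtub_inundation(dem, 1.0, sea).mean())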

Relevance: 50.00%

Abstract:

Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between ~50°N and ~80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. That gives the least-squares stochastic model used for position computation a more realistic representation, vis-à-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement of between 17% and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches that can be used to improve the least-squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation it was observed that problems related to ambiguity resolution can be reduced by the use of the proposed mitigated solution.
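
The weighting idea reduces to a single change in the least-squares machinery: replace the equal-weights matrix with inverse tracking-error variances. A minimal sketch with hypothetical numbers follows (in the real method each variance comes from the PLL/DLL tracking models driven by S4 and σφ):

    import numpy as np

    def wls_update(G, prefit, track_var):
        """One weighted least-squares step. G: (n_sats, 4) geometry matrix with a
        clock column; prefit: residuals (m); track_var: tracking error variances."""
        W = np.diag(1.0 / track_var)                 # down-weight scintillating satellites
        N = G.T @ W @ G                              # weighted normal matrix
        return np.linalg.solve(N, G.T @ W @ prefit)  # position/clock correction

    rng = np.random.default_rng(3)
    G = np.hstack([rng.standard_normal((8, 3)), np.ones((8, 1))])
    prefit = rng.standard_normal(8)                  # toy prefit residuals (m)
    track_var = np.array([0.2, 0.2, 1.5, 0.3, 0.2, 2.0, 0.4, 0.3])  # hypothetical variances
    dx = wls_update(G, prefit, track_var)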

Relevance: 50.00%

Abstract:

We study the scaling of the ³S₁-¹S₀ meson mass splitting and of the pseudoscalar weak-decay constants with the mass of the meson, as seen in the available experimental data. We use an effective light-front QCD-inspired dynamical model, regulated at short distances, to describe the valence component of the pseudoscalar mesons. The experimentally known values of the mass splitting, the decay constants (from global lattice-QCD averages) and the pion charge form factor up to 4 (GeV/c)² are reasonably described by the model.

Relevance: 50.00%

Abstract:

A branch-and-bound algorithm is proposed to solve the [image omitted]-norm model reduction problem for continuous and discrete-time linear systems, with convergence to the global optimum in finite time. The lower and upper bounds in the optimization procedure are described by linear matrix inequalities (LMIs). Two methods are also proposed to reduce the convergence time of the branch-and-bound algorithm: the first uses the Hankel singular values as a sufficient condition to stop the algorithm, giving the method fast convergence to the global optimum; the second assumes that the reduced model is in controllable or observable canonical form. The [image omitted]-norm of the error between the original model and the reduced model is considered. Examples illustrate the application of the proposed method.
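
The branch-and-bound mechanism itself is generic: keep an incumbent (upper bound), compute a lower bound on each region, prune regions whose lower bound cannot beat the incumbent, and split the rest. In the paper the bounds come from LMIs; the sketch below substitutes a simple Lipschitz lower bound on an interval, so it illustrates only the mechanism, not the paper's method.

    import heapq

    def branch_and_bound(f, lo, hi, lip, tol=1e-6):
        """Globally minimise a Lipschitz function f on [lo, hi] (constant lip)."""
        best_x, best_val = lo, f(lo)                        # incumbent upper bound
        heap = [(f((lo + hi) / 2) - lip * (hi - lo) / 2, lo, hi)]
        while heap:
            lower, a, b = heapq.heappop(heap)
            if lower > best_val - tol:                      # prune: cannot improve
                continue
            m = (a + b) / 2
            if f(m) < best_val:                             # update the incumbent
                best_x, best_val = m, f(m)
            for aa, bb in ((a, m), (m, b)):                 # branch: split the interval
                c = (aa + bb) / 2
                heapq.heappush(heap, (f(c) - lip * (bb - aa) / 2, aa, bb))
        return best_x, best_val

    x_opt, v_opt = branch_and_bound(lambda t: (t * t - 1.0) ** 2 + 0.3 * t, -2.0, 2.0, lip=30.0)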

Relevance: 50.00%

Abstract:

The GPS observables are subject to several errors. Among them, the systematic ones have the greatest impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to GPS satellite orbits, multipath and atmospheric effects. A method has recently been suggested to mitigate these errors: the semiparametric model with the penalised least squares technique (PLS). In this method, the errors are modelled as functions varying smoothly in time. In effect, the stochastic model is changed by incorporating the error functions, and the results are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was carried out on a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm in h for the CLS. In these tests, it was also possible to verify a considerable improvement in ambiguity resolution using the PLS relative to the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
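
A minimal sketch of the penalised-least-squares idea, under simplifying assumptions (one smooth error value per observation epoch, a second-difference roughness penalty, generic matrices): the systematic error g is estimated jointly with the parameters x, and the weight lam controls how smooth g must be.

    import numpy as np

    def penalised_ls(A, y, lam):
        """min over (x, g) of ||y - A x - g||^2 + lam * ||D2 g||^2."""
        m, n = A.shape
        D2 = np.diff(np.eye(m), n=2, axis=0)         # second-difference roughness
        Z = np.hstack([A, np.eye(m)])                # observe x through A, g directly
        P = np.zeros((n + m, n + m))
        P[n:, n:] = lam * D2.T @ D2                  # penalise only the error function
        sol = np.linalg.solve(Z.T @ Z + P, Z.T @ y)
        return sol[:n], sol[n:]                      # parameters, smooth error term

    rng = np.random.default_rng(5)
    A = rng.standard_normal((60, 3))
    t = np.linspace(0.0, 1.0, 60)
    y = A @ np.array([1.0, -2.0, 0.5]) + np.sin(4 * t) + 0.01 * rng.standard_normal(60)
    x_hat, g_hat = penalised_ls(A, y, lam=10.0)      # g_hat absorbs the smooth error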

Relevance: 50.00%

Abstract:

After a short introduction to the nonmesonic weak decay (NMWD) ΛN→nN of Λ-hypernuclei, we discuss the long-standing puzzle of the ratio Γn/Γp and some recent experimental evidence that points towards its final solution. Two versions of the Independent-Particle Shell Model (IPSM) are employed to account for the nuclear structure of the final residual nuclei: (a) IPSM-a, where no correlation except for the Pauli principle is taken into account, and (b) IPSM-b, where the highly excited hole states are considered to be quasi-stationary and are described by Breit-Wigner distributions whose widths are estimated from the experimental data. We evaluate the coincidence spectra in ⁴ΛHe, ⁵ΛHe, ¹²ΛC, ¹⁶ΛO, and ²⁸ΛSi as a function of the sum of kinetic energies E_nN = E_n + E_N for N = n, p. The recent Brookhaven National Laboratory experiment E788 on ⁴ΛHe is interpreted within the IPSM. We find that the shapes of all the spectra are basically tailored by the kinematics of the corresponding phase space, depending only weakly on the dynamics, which is gauged here by the one-meson-exchange potential. In spite of the straightforwardness of the approach, good agreement with the data is achieved. This might be an indication that the final-state interactions and the two-nucleon-induced processes are not very important in the decay of this hypernucleus. We have also found that the π+K exchange potential with soft vertex form-factor cutoffs (Λπ ≈ 0.7 GeV, ΛK ≈ 0.9 GeV) is able to account simultaneously for the available experimental data on Γp and Γn for ⁴ΛH and ⁵ΛHe. © 2010 American Institute of Physics.

Relevance: 50.00%

Abstract:

The adverse effects on Latin America and the Caribbean of the global economic and financial crisis, the worst since the 1930s, have been considerably less than was once feared. Although a run of growth was cut short in 2009 and regional output shrank by 1.9%, the impact of the crisis was limited by the application of countercyclical fiscal and monetary policies by many of the region's governments. The recovery in the economies, particularly in South America, has gone hand-in-hand with the rapid resurgence of the emerging economies of Asia, with all the favourable consequences this has had for global trade. A similar pattern may be observed regarding the impact of the crisis on labour markets in Latin America and the Caribbean. Although millions of people lost their jobs or had to trade down to lower-quality work, levels of employment (including formal employment) fell by less than originally foreseen. At the same time, real wages rose slightly in a context of falling inflation. The labour market thus stabilized domestic demand, and this contributed to the recovery that began in many countries in late 2009. Improved international trade and financing conditions, and the pick-up in domestic demand driven by macroeconomic policies, have led different commentators to estimate growth in the region's economy at some 6% in 2010. As detailed in the first part of this edition of the Bulletin, the upturn has been manifested at the regional level by the creation of formal employment, a rise in the employment rate, a decline in joblessness and a moderate increase in real wages. Specifically, it is estimated that the regional unemployment rate will have dropped by 0.6 percentage points, from 8.1% in 2009 to 7.5% in 2010. The performance of different countries and subregions has been very uneven, however. On the one hand, there is Brazil, where high economic growth has been accompanied by vigorous creation of formal jobs and the unemployment rate has dropped to levels not seen in a long time. Other countries in South America have benefited from strong demand for natural resources from the Asian countries. Combined with higher domestic demand, this has raised their economic growth rates and had a positive impact on employment indicators. On the other hand, the recovery is still very weak in certain countries and subregions, particularly in the Caribbean, with employment indicators continuing to worsen. Thus, the recovery in the region's economy in 2010 may be characterized as dynamic but uneven. Growth estimates for 2011 are less favourable. The risks associated with the imbalances in the world economy and the withdrawal of countercyclical fiscal packages are likely to cause the region to grow more slowly in 2011. Accordingly, a small further reduction of between 0.2 and 0.4 percentage points in the unemployment rate is projected for 2011. However, these indicators of recovery do not guarantee growth with decent work in the long term. To bolster the improvement in labour market indicators and generate more productive employment and decent work, the region's countries need to strengthen their macroeconomic policies, improve regional and global policy coordination, identify and remove bottlenecks in the labour market itself and enhance instruments designed to promote greater equality. Like the rest of the world, the Latin American and Caribbean region is also confronted with the challenge of transforming the way it produces so that its economies can develop along tracks that are sustainable in the long term.
Climate change and the consequent challenge of developing and strengthening low-carbon production and consumption patterns will also affect the way people work. A great challenge ahead is to create green jobs that combine decent work with environmentally sustainable production patterns. From this perspective, the second part of this Bulletin discusses the green jobs approach, offering some information on the challenges and opportunities involved in moving towards a sustainable economy in the region and presenting a set of options for addressing environmental issues and the repercussions of climate change in the world of work. Although the debate about the green jobs concept is fairly new in the region, examples already exist and a number of countries have moved ahead with the application of policies and programmes in this area. Costa Rica has formulated a National Climate Change Strategy, for example, whose foremost achievements include professional training in natural-resource management. In Brazil, fuel production from biomass has increased and social housing with solar panelling is being built. A number of other countries in the region are making progress in areas such as ecotourism, sustainable agriculture and infrastructure for climate change adaptation, and in formalizing the work of people who recycle household waste. The shift towards a more environmentally sustainable economy may cause jobs to be destroyed in some economic sectors and created in others. The working world will inevitably undergo major changes. If the issue is approached by way of social dialogue and appropriate public policies, there is a chance to use this shift to create more decent jobs, thereby contributing to economic growth, greater equality and protection of the environment.

Relevance: 50.00%

Abstract:

Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger and lighter and to last longer, which can lead to the creation of safer protection gear, more fuel-efficient transportation, and more affordable materials, among other examples. This thesis proposes a numerical and analytical verification of an in-house multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. The verification is done by comparing analytical and numerical solutions with the results obtained with the model. The model takes into account the heterogeneity of the materials, noticeable only at smaller length scales, based on the fundamental structural properties of each of the composite's constituents. It can therefore potentially reduce or eliminate the need for the costly and time-consuming experiments required for material characterization, since it relies strictly upon those fundamental properties. The results from simulations using the multiscale model were compared against results from direct simulations using over-killed meshes, which considered all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen

Relevance: 50.00%

Abstract:

Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; developing a model able to describe such a process would make it possible to predict the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transitions (p.t.): diffusive (second-order p.t.) and displacive (first-order p.t.). In this thesis an attempt is made to describe both in a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy. The model is completed with a suitable description of the free energy, from which the constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependency, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
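
To make the Cahn-Hilliard building block concrete, here is a minimal stand-in solved semi-implicitly with FFTs in pure NumPy rather than with DOLFIN's finite elements, assuming a double-well free energy f(c) = c²(1-c)², constant mobility and a periodic square domain; all parameter values are arbitrary.

    import numpy as np

    n, L, dt, kappa = 128, 1.0, 1.0e-6, 1.0e-4      # grid, box size, time step, gradient energy
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2          # symbol of -Laplacian, |k|^2

    rng = np.random.default_rng(0)
    c = 0.5 + 0.05 * rng.standard_normal((n, n))    # perturbed critical mixture

    for _ in range(2000):
        dfdc = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c)          # f'(c) for the double well
        rhs = np.fft.fft2(c) - dt * k2 * np.fft.fft2(dfdc)    # explicit bulk term
        c = np.real(np.fft.ifft2(rhs / (1.0 + dt * kappa * k2 ** 2)))  # implicit 4th-order term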

Relevance: 50.00%

Abstract:

The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.