5 results for Variational Analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-VAR) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-VAR set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative transfer simulations. The 1D-VAR retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analyses with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-VAR are well correlated with the radiosonde measurements. Subsequently, the 1D-VAR technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-VAR technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
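As an illustration of the retrieval step described in this abstract, the minimal sketch below minimises the standard 1D-Var cost function, which balances departures from the model background against departures from the observations. It is a toy example, not the COSMO/SEVIRI configuration of the thesis: the background xb, its error covariance B, the observations y, their error covariance R and the linear observation operator H are all hypothetical placeholders.

    import numpy as np
    from scipy.optimize import minimize

    # Toy 1D-Var: x is a small state vector (e.g. a temperature profile),
    # xb the model background, y the observed brightness temperatures,
    # H a (here linear) observation operator, B and R error covariances.
    xb = np.array([280.0, 275.0, 270.0, 265.0])      # background profile [K]
    B = np.diag([1.0, 1.0, 1.5, 2.0])                # background error covariance
    H = np.array([[0.6, 0.4, 0.0, 0.0],
                  [0.0, 0.3, 0.4, 0.3]])             # linear observation operator
    R = np.diag([0.5, 0.5])                          # observation error covariance
    y = np.array([279.0, 268.0])                     # observations [K]

    B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

    def cost(x):
        """J(x) = 0.5 (x-xb)^T B^-1 (x-xb) + 0.5 (y-Hx)^T R^-1 (y-Hx)."""
        dxb = x - xb
        dy = y - H @ x
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

    analysis = minimize(cost, xb).x                  # 1D-Var analysis (retrieval)
    print(analysis)

A flow-dependent B of the kind assessed in the thesis would be estimated from ensemble perturbations rather than fixed, as above.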
Abstract:
The aim of this study was to develop a model capable of capturing the different contributions which characterize the nonlinear behaviour of reinforced concrete structures. In particular, especially for non-slender structures, the contribution of bending to the nonlinear deformation may not be sufficient to determine the structural response. Two different models based on a fibre beam-column element are proposed here. These models can reproduce the flexure-shear interaction in the nonlinear range, with the purpose of improving the analysis of shear-critical structures. The first element discussed is based on a flexibility formulation associated with the Modified Compression Field Theory (MCFT) as the material constitutive law. The other model described in this thesis is based on a three-field variational formulation associated with a 3D generalized plastic-damage model as the constitutive relationship. The first model was developed by combining a fibre beam-column element based on the flexibility formulation with the MCFT as the constitutive relationship. The flexibility formulation, in fact, seems to be particularly effective for analysis in the nonlinear range. It is precisely the coupling between the fibre element, used to model the structure, and the shear panel, used to model the individual fibres, that allows the nonlinear response associated with flexure and shear, and especially their interaction in the nonlinear range, to be described. The model was implemented in an original MATLAB® computer code for describing the response of generic structures. The simulations carried out allowed the range of applicability of the model to be verified. Comparisons with available experimental results on reinforced concrete shear walls were performed in order to validate the model. These results have the peculiarity of distinguishing the different contributions due to flexure and shear separately. The presented simulations were carried out, in particular, for monotonic loading. The model was also tested through numerical comparisons with other computer programs. Finally, it was applied to a numerical study on the influence of the nonlinear shear response for non-slender reinforced concrete (RC) members. Another approach to the problem was studied during a period of research at the University of California, Berkeley. The beam formulation follows the assumptions of the Timoshenko shear beam theory for the displacement field, and uses a three-field variational formulation in the derivation of the element response. A generalized plasticity model is implemented for structural steel, and a 3D plastic-damage model is used for the simulation of concrete. The transverse normal stress is used to satisfy the transverse equilibrium equations at each control section; this criterion is also used for the condensation of degrees of freedom from the 3D constitutive material model to the beam element. This thesis presents the beam formulation and the constitutive relationships; further analyses and comparisons between the two proposed models are still being carried out.
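To make the fibre-section idea concrete, the following minimal sketch (not the thesis code, which is written in MATLAB and includes the MCFT shear panel and the flexure-shear coupling) shows the basic state determination of a fibre cross-section: given an axial strain and a curvature, fibre strains follow from plane-section kinematics and are integrated into an axial force and a bending moment. The material law, fibre discretisation and section geometry are illustrative assumptions.

    import numpy as np

    def section_forces(eps0, kappa, y, area, stress_fn):
        """Fibre-section state determination under plane-section kinematics.

        eps0      : axial strain at the reference axis
        kappa     : curvature
        y, area   : fibre distances from the axis and fibre areas
        stress_fn : uniaxial constitutive law sigma(eps) applied to each fibre
        Returns (N, M): axial force and bending moment.
        """
        eps = eps0 - y * kappa            # fibre strains (plane sections remain plane)
        sigma = stress_fn(eps)            # fibre stresses from the material law
        N = np.sum(sigma * area)          # axial force
        M = np.sum(-sigma * area * y)     # bending moment about the reference axis
        return N, M

    # Illustrative elastic-perfectly-plastic uniaxial law (units illustrative, MPa)
    def bilinear(eps, E=30e3, fy=30.0):
        return np.clip(E * eps, -fy, fy)

    # Rectangular section discretised into 20 fibres
    h, b, nf = 0.4, 0.3, 20               # depth, width [m], number of fibres
    y = np.linspace(-h/2 + h/(2*nf), h/2 - h/(2*nf), nf)
    area = np.full(nf, b * h / nf)

    print(section_forces(eps0=1e-4, kappa=5e-3, y=y, area=area, stress_fn=bilinear))

In the thesis formulations, each fibre's response is additionally coupled to a shear state (MCFT panel or 3D plastic-damage model), which is what produces the flexure-shear interaction discussed above.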
Abstract:
In this thesis we consider a class of second-order partial differential operators with non-negative characteristic form and smooth coefficients. The main assumptions on the relevant operators are hypoellipticity and the existence of a well-behaved global fundamental solution. We first carry out a thorough analysis of the L-Green function for arbitrary open sets and of its applications to Riesz-type representation theorems for L-subharmonic and L-superharmonic functions. Then, we prove an Inverse Mean Value Theorem characterizing the superlevel sets of the fundamental solution by means of L-harmonic functions. Furthermore, we establish a Lebesgue-type result showing the role of the mean-integral operator in solving the homogeneous Dirichlet problem related to L in the Perron-Wiener sense. Finally, we compare Perron-Wiener and weak variational solutions of the homogeneous Dirichlet problem under specific hypotheses on the boundary datum.
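For orientation only, the classical special case L = Δ (the Laplacian in R^n, n ≥ 3) illustrates the objects the thesis generalizes: the fundamental solution, its superlevel sets, and the mean value characterization of harmonicity. This is the standard Euclidean picture, not the hypoelliptic, non-negative-characteristic-form setting treated above.

    \Gamma(x,y) = \frac{c_n}{|x-y|^{n-2}}, \qquad
    \{\, y : \Gamma(x,y) > t \,\} = B\big(x, (c_n/t)^{1/(n-2)}\big),

    u \ \text{harmonic in}\ \Omega \iff
    u(x) = \frac{1}{|B(x,\rho)|} \int_{B(x,\rho)} u(y)\, dy
    \quad \text{for every ball } B(x,\rho) \subset \Omega .

An Inverse Mean Value Theorem runs this in the converse direction: the sets on which such an averaging identity holds for every harmonic (here, L-harmonic) function are precisely the superlevel sets of the fundamental solution.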
Abstract:
A composite is a material made of two or more constituents (phases) combined in order to achieve desirable mechanical or thermal properties. Such innovative materials have been widely used in a large variety of engineering fields over the past decades. The design of a composite structure requires the resolution of a multiscale problem that involves a macroscale (i.e. the structural scale) and a microscale. The latter plays a crucial role in the determination of the material behavior at the macroscale, especially when dealing with constituents characterized by nonlinearities. For this reason, numerical tools are required in order to design composite structures while taking into account their microstructure. These tools need to provide an accurate yet efficient solution in terms of time and memory requirements, due to the large number of internal variables of the problem. Different methods address this issue by reducing the number of internal variables. Within this framework, this thesis focuses on the development of a new homogenization technique, named Mixed TFA (MxTFA), to solve the homogenization problem for nonlinear composites. This technique is based on a mixed-stress variational approach involving self-equilibrated stresses and the plastic multiplier as independent variables on the Reference Volume Element (RVE). The MxTFA is developed for the cases of elastoplasticity and viscoplasticity, and it is implemented in a multiscale analysis for nonlinear composites. Numerical results show the efficiency of the presented techniques at both the microscale and the macroscale level.
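As a minimal illustration of the homogenization step (volume averaging over the RVE), the sketch below computes the macroscopic stress of a two-phase linear elastic composite under a prescribed uniform strain, using a simple Voigt (uniform-strain) assumption. It is a deliberately crude stand-in for the MxTFA reduced-order scheme, which instead takes self-equilibrated stresses and the plastic multiplier as unknowns; the phase properties and volume fraction below are hypothetical.

    import numpy as np

    def isotropic_stiffness(E, nu):
        """3x3 plane-stress stiffness matrix in Voigt notation."""
        c = E / (1.0 - nu**2)
        return c * np.array([[1.0, nu, 0.0],
                             [nu, 1.0, 0.0],
                             [0.0, 0.0, (1.0 - nu) / 2.0]])

    # Two phases: stiff fibre and compliant matrix (illustrative values, GPa)
    C_fibre  = isotropic_stiffness(E=230.0, nu=0.20)
    C_matrix = isotropic_stiffness(E=3.5,  nu=0.35)
    v_fibre  = 0.55                                  # fibre volume fraction

    eps_macro = np.array([1e-3, 0.0, 0.0])           # prescribed macroscopic strain

    # Voigt (uniform-strain) average: macroscopic stress = volume average of stress
    sigma_fibre  = C_fibre  @ eps_macro
    sigma_matrix = C_matrix @ eps_macro
    sigma_macro  = v_fibre * sigma_fibre + (1.0 - v_fibre) * sigma_matrix
    print(sigma_macro)

In a nonlinear reduced-order method such as the MxTFA, the uniform-strain assumption is replaced by a small set of stress modes on the RVE, which is what keeps the number of internal variables tractable.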
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge of the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged into unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on the performance of many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel convergent gradient-based Plug-and-Play scheme that uses a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time-series prediction through deep-learning-based methods. We boost the performance of supervised deep learning strategies, such as trained convolutional and recurrent networks, and of unsupervised ones, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
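As an illustration of the Plug-and-Play idea mentioned above, the sketch below runs a generic forward-backward (proximal-gradient) iteration in which the proximal step of the regularizer is replaced by a denoiser. Here the "denoiser" is a plain Gaussian filter standing in for a learned network, and the blur operator, step size and data are toy assumptions; the scheme developed in the thesis additionally acts on the gradient domain and comes with a convergence analysis.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    # Toy inverse problem: y = A x + noise, with A a mild Gaussian blur.
    x_true = np.zeros((64, 64)); x_true[24:40, 24:40] = 1.0
    A  = lambda x: gaussian_filter(x, sigma=1.5)   # forward (blur) operator
    At = A                                         # symmetric blur, used as its own adjoint here
    y = A(x_true) + 0.01 * rng.standard_normal(x_true.shape)

    def denoise(x, strength=0.8):
        """Stand-in denoiser (a learned network D_theta in Plug-and-Play methods)."""
        return gaussian_filter(x, sigma=strength)

    # Plug-and-Play forward-backward: x_{k+1} = D( x_k - tau * A^T(A x_k - y) )
    x, tau = np.zeros_like(y), 1.0
    for _ in range(50):
        grad = At(A(x) - y)              # gradient of the data-fidelity term
        x = denoise(x - tau * grad)      # denoiser replaces the proximal step
    print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))

The hybrid frameworks described in the abstract combine such data-driven denoising or learned priors with handcrafted variational terms, rather than relying on either ingredient alone.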