7 results for initial value problem
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function, among other things, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to pooled cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different asset classes (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models is that they simultaneously account for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding net wealth below the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results from all estimators support the hypothesis of true state dependence and show that, in accordance with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
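For readers unfamiliar with the initial conditions problem mentioned above, the dynamic random-effects probit used in this strand of the literature takes the schematic form (our notation, not a quotation from the thesis)

\[
y_{it} \;=\; \mathbf{1}\!\left\{\, \gamma\, y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it} > 0 \,\right\},
\qquad \varepsilon_{it} \sim N(0,1),
\]

where y_{it} flags financial distress, γ measures true state dependence and α_i the unobserved individual effect. The problem arises because the first observation y_{i0} is in general correlated with α_i; the Wooldridge approach handles it by modelling α_i conditionally on y_{i0} and the covariates, e.g. \(\alpha_i = a_0 + a_1 y_{i0} + \bar{x}_i' a_2 + u_i\).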
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling “X-ellipticity” and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a maximum principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (the doubling property and a Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem related to sub-Laplacians on an open subset of R^N. On a Carnot group we give the essential background for this type of operator and introduce the notion of “quasi-boundedness”. Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
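For context, the control distance mentioned above admits the standard schematic formulation (a textbook form, not quoted from the thesis)

\[
d(x,y) \;=\; \inf\left\{\, T>0 \,:\, \exists\,\gamma:[0,T]\to\mathbb{R}^N,\ \gamma(0)=x,\ \gamma(T)=y,\ \dot\gamma(t)=\sum_{j=1}^{m} a_j(t)\,X_j(\gamma(t)),\ \sum_{j} a_j(t)^2 \le 1 \,\right\},
\]

i.e. the shortest time needed to connect x and y along sub-unit piecewise trajectories of the fields X_1, ..., X_m.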
Abstract:
The olive oil extraction industry is responsible for the production of high quantities of vegetation waters, represented by the constitutive water of the olive fruit and by the water used during the process. This by-product represents an environmental problem in olive-growing areas because of its high content of organic matter, with high BOD5 and COD values. For that reason the disposal of the vegetation water is very difficult and requires prior depollution. The organic matter of vegetation water mainly consists of polysaccharides, sugars, proteins, organic acids, oil and polyphenols. These last compounds are principally responsible for the pollution problems, due to their antimicrobial activity, but at the same time they are well known for their antioxidant properties. The most concentrated phenolic compounds in the waters, and also in virgin olive oils, are secoiridoids such as oleuropein, demethyloleuropein and ligstroside derivatives: the dialdehydic form of elenolic acid linked to 3,4-DHPEA or p-HPEA (3,4-DHPEA-EDA or p-HPEA-EDA) and an isomer of the oleuropein aglycon (3,4-DHPEA-EA). The management of olive oil vegetation water has been extensively investigated and several different valorisation methods have been proposed, such as direct use as fertilizer or transformation by physico-chemical or biological treatments. In recent years researchers have focused their interest on the recovery of the phenolic fraction from this waste, looking to exploit it as a natural antioxidant source. At present only a few contributions have addressed large-scale phenol recovery, and further investigations are required to evaluate the feasibility and costs of the proposed processes. The present PhD thesis reports a preliminary description of a new industrial-scale process for the recovery of the phenolic fraction from olive oil vegetation water treated with enzymes, by direct membrane filtration (microfiltration/ultrafiltration with a cut-off of 250 kDa, ultrafiltration with a cut-off of 7 kDa/10 kDa, and nanofiltration/reverse osmosis), with partial purification by a solid-phase extraction (SPE) system and by a liquid-liquid extraction (LLE) system, and a simultaneous reduction of the pollution-related problems. The phenolic fractions of all the samples obtained were qualitatively and quantitatively characterized by HPLC analysis. The process gave good results in terms of flows and of phenolic recovery: the final phenolic recovery is about 60% with respect to the initial content of the vegetation waters. The final concentrate showed a phenol content high enough to suggest a possible use as a zootechnic nutritional supplement. The purification of the final concentrate guaranteed a high purity level of the phenolic extract, especially in SPE with the XAD-16 resin (73% of the total phenolic content of the concentrate). This purity level could permit future employment in the food industry as a food additive or, thanks to the strong antioxidant activity, in the pharmaceutical or cosmetic industries. The depollution of the vegetation water also gave good results: the final reverse osmosis permeate has a low pollutant load in terms of COD and BOD5 values (2% of the initial vegetation water), which could allow its reuse in the virgin olive oil mechanical extraction system, saving water and thus reducing the oil industry's disposal costs.
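As a rough illustration of the recovery bookkeeping across such a membrane cascade, a minimal Python sketch follows; the per-stage retention values are hypothetical placeholders, and only the overall figure of about 60% comes from the abstract:

# Toy phenol mass balance across the membrane cascade described above.
# Per-stage fractions are illustrative placeholders, not measured values.
stages = [
    ("microfiltration/ultrafiltration, 250 kDa", 0.90),  # phenols kept in the processed stream
    ("ultrafiltration, 7-10 kDa", 0.85),
    ("nanofiltration/reverse osmosis", 0.80),
]

phenols = 1.0  # normalized phenol content of the raw vegetation water
for name, kept in stages:
    phenols *= kept
    print(f"after {name}: {phenols:.0%} of initial phenols")

# With these placeholders the cascade keeps ~61% of the phenols,
# consistent with the ~60% overall recovery reported in the abstract.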
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a large amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational applications are becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information that prevents radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modeled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems in modeling extreme convective events concerns the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated in a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about one kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work presents some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
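As background on the scheme just named: latent heat nudging rescales the model's latent heating profile by the ratio of observed (radar-derived) to modelled precipitation rate, which is exactly where a per-observation quality weight can enter. A minimal Python sketch; the clipping bounds and the multiplicative use of the quality weight are our illustrative assumptions, not the thesis's configuration:

import numpy as np

def lhn_increment(heating_profile, rr_obs, rr_mod, quality, eps=1e-3):
    """Latent heat nudging increment for one model column (a sketch).

    heating_profile : modelled latent heating at each level (K/s)
    rr_obs, rr_mod  : radar-observed and modelled surface rain rates (mm/h)
    quality         : radar quality weight in [0, 1] for this observation
    """
    # Scaling factor = observed/modelled rain rate, clipped so that a noisy
    # or low-quality radar estimate cannot impose an extreme correction
    # (the 0.3-3.0 bounds are illustrative, not an operational setting).
    scale = np.clip(rr_obs / max(rr_mod, eps), 0.3, 3.0)
    # Additive increment nudging the column towards the rescaled heating
    # profile, weighted by the quality of the radar datum.
    return quality * (scale - 1.0) * heating_profile

column = np.array([0.0, 1e-4, 3e-4, 2e-4, 5e-5])   # toy heating profile
print(lhn_increment(column, rr_obs=6.0, rr_mod=2.0, quality=0.8))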
Abstract:
This thesis, after presenting recent advances obtained for the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the two-dimensional problem with guillotine restrictions and that of the unrestricted two-dimensional problem is studied from a worst-case perspective. Finally, it presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of problems and gives a tight approximation of the optimum in the remaining cases.
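To make the guillotine restriction concrete: a pattern is guillotine if it can be obtained by a sequence of edge-to-edge cuts, each splitting a (sub)region in two. A minimal recursive checker in Python, a sketch of ours rather than the thesis's characterization or algorithm:

def is_guillotine(rects, region):
    """Return True if the placed rectangles admit a guillotine partition.

    rects  : list of (x, y, w, h) axis-aligned placements
    region : (x0, y0, x1, y1) enclosing region to be cut
    Tries every full horizontal/vertical cut that crosses no rectangle,
    then recurses on the two resulting subregions.
    """
    if len(rects) <= 1:
        return True
    x0, y0, x1, y1 = region
    for c in {x for x, _, _, _ in rects} | {x + w for x, _, w, _ in rects}:
        if x0 < c < x1 and all(x + w <= c or c <= x for x, _, w, _ in rects):
            left = [r for r in rects if r[0] + r[2] <= c]
            right = [r for r in rects if r[0] >= c]
            if (is_guillotine(left, (x0, y0, c, y1))
                    and is_guillotine(right, (c, y0, x1, y1))):
                return True
    for c in {y for _, y, _, _ in rects} | {y + h for _, y, _, h in rects}:
        if y0 < c < y1 and all(y + h <= c or c <= y for _, y, _, h in rects):
            bottom = [r for r in rects if r[1] + r[3] <= c]
            top = [r for r in rects if r[1] >= c]
            if (is_guillotine(bottom, (x0, y0, x1, c))
                    and is_guillotine(top, (x0, c, x1, y1))):
                return True
    return False

# The classic 3x3 pinwheel is non-guillotine; a side-by-side pair is guillotine.
print(is_guillotine([(0, 0, 2, 1), (2, 0, 1, 2), (1, 2, 2, 1), (0, 1, 1, 2)], (0, 0, 3, 3)))  # False
print(is_guillotine([(0, 0, 1, 1), (1, 0, 1, 1)], (0, 0, 2, 1)))  # True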
Abstract:
Compared to other materials, plastics have registered a strong acceleration in production and consumption in recent years. Despite the existence of waste management systems, plastic-based materials are still a pervasive presence in the environment, with negative consequences for marine ecosystems and human health. Recycling is still challenging due to the growing complexity of product design, so-called overpackaging, insufficient and inadequate recycling infrastructure, the weak market for recycled plastics and the high cost of waste treatment and disposal. The Circular Economy Package, the European Strategy for Plastics in a Circular Economy and the recent European Green Deal include very ambitious programmes to rethink the entire plastic value chain. As regards packaging, all plastic packaging will have to be 100% recyclable (or reusable) and 55% recycled by 2030. Regions are consequently called upon to set up a robust plan able to meet the European objectives. This takes on greater importance in Emilia-Romagna, where the Packaging Valley is located. This thesis supports the definition of a strategy aimed at establishing an after-use plastics economy in the region. The PhD work has set out the basis and the instruments to establish the so-called Circularity Strategy, with the aim of turning about 92,000 t of plastic waste into profitable secondary resources. System innovation, life cycle thinking and a participative backcasting method have allowed a deep analysis of the current system, orienting the problem and exploring sustainable solutions through broad stakeholder participation. A material flow analysis, accompanied by a barrier analysis, has supported the identification of the gaps between the present situation and the 2030 scenario. Eco-design for and from recycling, and a mass-based recycling rate (based on the effective amount of plastic waste turned into secondary plastics) complemented by a value-based indicator, are the key points of the action plan.
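As a toy illustration of the two indicators named in the last sentence, here is a minimal Python sketch; the tonnage of secondary plastics and both prices are hypothetical placeholders, only the 92,000 t figure comes from the abstract, and the value-based formula is our reading of such an indicator, not the thesis's definition:

# Hypothetical illustration of a mass-based recycling rate and a
# value-based companion indicator; numbers below are placeholders.
waste_in_scope_t = 92_000   # plastic waste addressed by the strategy (t)
secondary_out_t = 41_000    # hypothetical mass turned into secondary plastics (t)
price_secondary = 450       # hypothetical EUR/t of the recyclate
price_virgin = 1_100        # hypothetical EUR/t of the virgin plastic it substitutes

mass_based_rate = secondary_out_t / waste_in_scope_t
value_based_rate = (secondary_out_t * price_secondary) / (waste_in_scope_t * price_virgin)

print(f"mass-based recycling rate: {mass_based_rate:.0%}")  # mass out vs. mass in
print(f"value-based indicator:     {value_based_rate:.0%}")  # value retained vs. virgin equivalent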
Abstract:
In this PhD thesis a new firm-level conditional risk measure is developed. It is named Joint Value at Risk (JVaR) and is defined as a quantile of a conditional distribution of interest, where the conditioning event is a latent upper tail event. It addresses the problem of how risk changes under extreme volatility scenarios. The properties of JVaR are studied based on a stochastic volatility representation of the underlying process. We prove that JVaR is leverage consistent, i.e. it is an increasing function of the dependence parameter in the stochastic representation. A feasible class of nonparametric M-estimators is introduced by exploiting the elicitability of quantiles and stochastic ordering theory. Consistency and asymptotic normality of the two-stage M-estimator are derived, and a simulation study is reported to illustrate its finite-sample properties. Parametric estimation methods are also discussed. The relation with the VaR is exploited to introduce a volatility contribution measure, and a tail risk measure is also proposed. The analysis of the dynamic JVaR is presented based on asymmetric stochastic volatility models. Empirical results with S&P 500 data show that accounting for extreme volatility levels is relevant to better characterize the evolution of risk. The work is complemented by a review of the literature, where we provide an overview of quantile risk measures, elicitable functionals and several stochastic orderings.
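Schematically, and in our notation rather than the thesis's, such a measure takes the form

\[
\mathrm{JVaR}_{\alpha,\tau}(Y) \;=\; \inf\left\{\, y \in \mathbb{R} \,:\, P\!\left(Y \le y \mid V > F_V^{-1}(\tau)\right) \ge \alpha \,\right\},
\]

i.e. the α-quantile of the variable of interest Y conditional on the latent volatility V exceeding its τ-quantile; leverage consistency then says that this quantity is monotone in the parameter governing the dependence between Y and V in the stochastic volatility representation.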