8 results for weighting triangles

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Abstract:

Deformability is often crucial to the design of many civil-engineering structural elements. Moreover, design becomes all the more burdensome when both long- and short-term deformability have to be considered. In this thesis, long- and short-term deformability have been studied from both the material and the structural-modelling points of view, and two materials have been considered: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their own plane; this hypothesis replaces the classical beam-theory assumption that cross-sections remain plane in the deformed state. It also reduces the total number of degrees of freedom, making the analysis faster than with two-dimensional finite elements. Warping in the longitudinal direction is left free, which allows phenomena such as shear lag to be described. The new finite element model was first applied to concrete thin-walled beams (such as long-span roof girders or bridge girders) subjected to instantaneous service loads. Concrete in its cracked state has been considered through a smeared-crack model for beams in bending. In a second stage, the FE model was extended to the viscoelastic range and applied to pultruded composite beams under sustained loads; the generalized Maxwell model has been adopted. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens, in both tension and shear. Some specimens were strengthened with carbon-fibre plies to reduce short- and long-term deformability. The tests were performed in a climate-controlled room, with the specimens kept under constant load for two years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity. An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (as defined in damage models) and are thus associated with the intact material.
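For orientation, a minimal sketch of the two ingredients named above, in their standard textbook forms: the generalized Maxwell (Prony series) relaxation modulus, and a convex weighting of the total and elasto-damaged strain tensors driving the damage loading function. The number of Maxwell units n and the weighting parameter beta are illustrative placeholders, not the values identified in the thesis.

\[ E(t) = E_\infty + \sum_{i=1}^{n} E_i \, e^{-t/\tau_i}, \qquad \tilde{\boldsymbol{\varepsilon}} = (1-\beta)\,\boldsymbol{\varepsilon} + \beta\,\boldsymbol{\varepsilon}^{d}, \qquad f(\tilde{\boldsymbol{\varepsilon}}, \kappa) \le 0, \]

where \varepsilon is the total strain tensor, \varepsilon^{d} the elasto-damaged strain tensor, f the damage loading function and \kappa its history variable.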

Relevance:

10.00%

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open or not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers are continuously published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work focuses on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions used to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts, based on lattice-free triangles, from the simplex tableau, together with some preliminary computational results.
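As background for the normalization conditions mentioned above, the standard cut-generating linear program (CGLP) for a split disjunction (\pi x \le \pi_0) \vee (\pi x \ge \pi_0 + 1) over the LP relaxation \{x \ge 0 : Ax \ge b\} can be written as follows; this is the form common in the disjunctive-cut literature, not necessarily the thesis's own notation, and the trivial normalization in the last line is only one of the conditions compared in Chapter 3.

\[
\begin{aligned}
\max_{\alpha,\,\beta,\,u,\,v,\,u_0,\,v_0} \;& \beta - \alpha \bar{x} \\
\text{s.t.}\;& \alpha \ge u A - u_0 \pi, \qquad \beta \le u b - u_0 \pi_0, \\
& \alpha \ge v A + v_0 \pi, \qquad \beta \le v b + v_0 (\pi_0 + 1), \\
& u, v \ge 0, \quad u_0, v_0 \ge 0, \\
& \textstyle \sum_i u_i + \sum_i v_i + u_0 + v_0 = 1,
\end{aligned}
\]

where \bar{x} is the fractional point to be cut off; any feasible solution with positive objective value yields a disjunctive cut \alpha x \ge \beta violated by \bar{x}.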
The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new, improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver (a schematic sketch of this loop is given below). Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut methods in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general-purpose cutting planes) can be used to improve on the branch-and-cut methods proposed in the literature.
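The destroy-and-repair loop of Chapters 5 and 6 can be sketched as below on a toy generalized assignment problem rather than on a VRP; the instance data, the helper names and the use of the open-source PuLP/CBC solver are illustrative assumptions only, meant to show the paradigm rather than the thesis's implementation.

# Destroy-and-repair on a toy generalized assignment problem (illustrative only).
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

random.seed(0)
n_tasks, n_agents = 12, 3
cost = [[random.randint(1, 20) for _ in range(n_agents)] for _ in range(n_tasks)]
load = [[random.randint(1, 5) for _ in range(n_agents)] for _ in range(n_tasks)]
capacity = [25] * n_agents

def total_cost(assignment):
    return sum(cost[i][j] for i, j in assignment.items())

def repair(fixed):
    """Repair step: re-optimize with a general-purpose MIP solver, keeping the
    (task, agent) pairs listed in `fixed` unchanged."""
    prob = LpProblem("repair", LpMinimize)
    x = [[LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n_agents)]
         for i in range(n_tasks)]
    prob += lpSum(cost[i][j] * x[i][j] for i in range(n_tasks) for j in range(n_agents))
    for i in range(n_tasks):                       # each task assigned exactly once
        prob += lpSum(x[i][j] for j in range(n_agents)) == 1
    for j in range(n_agents):                      # agent capacities
        prob += lpSum(load[i][j] * x[i][j] for i in range(n_tasks)) <= capacity[j]
    for i, j in fixed.items():                     # freeze the non-destroyed part
        prob += x[i][j] == 1
    prob.solve(PULP_CBC_CMD(msg=0))
    return {i: next(j for j in range(n_agents) if x[i][j].value() > 0.5)
            for i in range(n_tasks)}

best = {i: i % n_agents for i in range(n_tasks)}   # a trivial feasible incumbent
for _ in range(10):
    destroyed = set(random.sample(range(n_tasks), k=4))    # destroy step
    candidate = repair({i: j for i, j in best.items() if i not in destroyed})
    if total_cost(candidate) < total_cost(best):            # accept improvements
        best = candidate
print(total_cost(best), best)

In the actual VRP setting, the repair MIP encodes an exponential reinsertion neighborhood and is solved heuristically under tight time or node limits, rather than to optimality as in this toy example.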

Relevance:

10.00%

Abstract:

Marine soft-bottom systems show high variability across multiple spatial and temporal scales. Both natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. The description of such spatial variability is required in order to understand the ecological processes behind it. However, to obtain better estimates of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to give a significant contribution both in improving the methodological approaches to the study of biological variability in soft-bottom habitats and in increasing the knowledge of the effects that different processes (both natural and anthropogenic) can have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft-bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index, but it behaves more favourably as species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages (a small numerical illustration is given at the end of this abstract). Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna depends strongly on the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. Although a few studies in this area have aimed at understanding which of these processes mainly affect the macrobenthos, they were conducted at small spatial scales, as they were designed to examine local changes in benthic communities or in particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was therefore extensively studied in this thesis.
I have described in detail the spatial patterns of both sedimentary characteristics and macrobenthic organisms, and I have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect against a background of natural spatial variability. The results obtained suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that might have important consequences for benthic assemblages, affecting, for example, their spatial patterns moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., they affect both the abundance patterns and beta diversity). Nonetheless, at more local scales, effects probably related to organic enrichment or pollution from the Po river input have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a stronger effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and is related to a change in habitat complexity due to the natural dislodgement, or the cleaning from the structures, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects the composition of the benthic infauna. All the components of the study presented in this thesis highlight the need to carefully consider the methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, in planning sustainable strategies for proper management of the affected area.
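As a small numerical illustration of the beta-diversity quantities discussed above: Whittaker's index (total species richness over mean per-sample richness) and a generic Michaelis-Menten-type rescaling of it. The exact form of the probability-based index proposed in the thesis is not given in the abstract, so the transformation and the half-saturation constant k below are illustrative assumptions only.

# Whittaker's beta diversity and a generic Michaelis-Menten rescaling (illustrative).
import numpy as np

def whittaker_beta(abundances):
    """abundances: 2D array (samples x species); returns gamma / mean alpha (>= 1)."""
    presence = np.asarray(abundances) > 0
    gamma = np.count_nonzero(presence.any(axis=0))   # total species over all samples
    alpha_mean = presence.sum(axis=1).mean()         # mean richness per sample
    return gamma / alpha_mean

def michaelis_menten_rescale(beta, k=1.0):
    """Map the excess heterogeneity (beta - 1) into [0, 1) with half-saturation k."""
    return (beta - 1.0) / (beta - 1.0 + k)

counts = np.array([[3, 0, 1, 0, 0],
                   [2, 1, 0, 0, 0],
                   [0, 4, 0, 1, 0]])
b = whittaker_beta(counts)
print(b, michaelis_menten_rescale(b))   # 2.0 and 0.5 for this toy matrix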

Relevance:

10.00%

Abstract:

This thesis focuses on the ceramic process for the production of optical-grade transparent materials to be used as laser hosts. In order to be transparent, a ceramic material must exhibit a very low concentration of defects. Defects are mainly represented by secondary or grain-boundary phases and by residual pores. Strict control of the stoichiometry is mandatory to avoid the formation of secondary phases, whereas residual porosity needs to be below 150 ppm. In order to fulfill these requirements, specific experimental conditions must be combined. In addition, the powders need to be nanometric, or at least sub-micrometric, and extremely pure. On the other hand, nanometric powders aggregate easily, which leads to poor, inhomogeneous packing during shaping by pressing and to the formation of residual pores during sintering. Very fine powders are also difficult to handle and tend to adsorb water on their surface. Finally, powder manipulation (weighing operations, solvent removal, spray drying, shaping, etc.) easily introduces impurities. All these features must be fully controlled in order to avoid the formation of defects that act as scattering sources, thus decreasing the transparency of the material. The important role played by processing in the transparency of ceramic materials is often underestimated. In the literature, high levels of transparency have been reported by many authors, but the experimental process, in particular the powder treatment and shaping, is seldom described in detail, and important information necessary to reproduce the reported results is often missing. The main goal of the present study is therefore to provide additional information on the way the experimental conditions affect the microstructural evolution of YAG-based ceramics and thus the final properties, in particular transparency. Commercial powders are used to prepare YAG materials doped with Nd or Yb by reactive sintering under high vacuum. These dopants have been selected as the most appropriate for high-energy and high-peak-power lasers. As far as the powder treatment is concerned, the thesis focuses on the influence of the solvent-removal technique (rotary evaporation versus spray drying of suspensions in ethanol), the ball-milling duration and speed, the suspension concentration, the solvent ratio, and the type and amount of dispersant. The influence of the powder type and process on the powder packing, as well as of the pressure conditions during shaping by pressing, is also described. Finally, calcination, sintering under high vacuum and in a clean atmosphere, and post-sintering cycles are studied and related to the final microstructure, analyzed by SEM-EDS and HR-TEM, and to the optical and laser properties.

Relevance:

10.00%

Abstract:

This PhD thesis studies blood flow using a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (in a concentric or off-centre position with respect to the axis of symmetry) or a stenosis of varying shape and extent. The arteries are modelled as rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, in both steady and transient regimes, and the blood is a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains; in the latter case the fluid-structure interaction is analysed. In the three-dimensional cases, the arteries (fluid-dynamic simulations) are infinitely rigid: once the pressure field has been obtained, a structural analysis is performed to determine the changes in cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries the imposed pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle in order to capture the recirculation regions. Two appendices are attached to the thesis, which use CFD codes to study heat transfer in microchannels and the evaporation of water droplets in unconfined systems. Fluid dynamics in microchannels is analogous to haemodynamics in capillaries, while the Eulerian-Lagrangian method (used in the evaporation simulations) schematizes the mixed nature of blood. The part concerning microchannels analyses the transient response to a time-varying heat flux, varying the inlet velocity and the microchannel dimensions. The investigation of droplet evaporation is a 3D parametric analysis that examines the weight of each parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) in order to identify the one that most influences the phenomenon.
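For reference, the classical Casson constitutive law for blood, of which the Gonzales & Moraga formulation used in the thesis is a modification (the exact modified expression is not reported in the abstract), relates the shear stress \tau, the yield stress \tau_y, the Casson viscosity \mu_c and the shear rate \dot\gamma as

\[ \sqrt{\tau} = \sqrt{\tau_y} + \sqrt{\mu_c \, \dot\gamma} \quad (\tau > \tau_y), \qquad \mu_{\mathrm{app}}(\dot\gamma) = \left( \sqrt{\tau_y / \dot\gamma} + \sqrt{\mu_c} \right)^{2}, \]

so the apparent viscosity grows without bound as the shear rate tends to zero, which is why regularized formulations are typically adopted in finite element codes.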

Relevance:

10.00%

Abstract:

Over the years, the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown among researchers, and the method has seen significant development in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients, making the approach more flexible and accurate. As a result, it has been termed the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been shown to fail for problems with strong material discontinuities, as well as for problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; here it will be referred to as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
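The DQ idea described above can be illustrated numerically as follows: the first derivative at each grid node is approximated as a weighted sum of nodal function values, with the weighting coefficients computed from Lagrange interpolation (Shu's explicit formula). The grid choice and the test function are illustrative assumptions, not taken from the thesis.

# DQ weighting coefficients for the first derivative on an arbitrary 1D grid.
import numpy as np

def dq_first_derivative_weights(x):
    n = len(x)
    c = np.zeros((n, n))
    # M(x_i) = prod_{k != i} (x_i - x_k): derivative of the nodal polynomial at x_i
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
    for i in range(n):
        for j in range(n):
            if j != i:
                c[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        c[i, i] = -c[i].sum()          # each row of weights must sum to zero
    return c

# Chebyshev-Gauss-Lobatto nodes on [0, 1], a common grid choice in DQ applications.
n = 11
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
C = dq_first_derivative_weights(x)
f = np.sin(2.0 * np.pi * x)
df_exact = 2.0 * np.pi * np.cos(2.0 * np.pi * x)
print(np.max(np.abs(C @ f - df_exact)))   # small error, decreasing rapidly with n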

Relevance:

10.00%

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and by the right-skewness of the distribution of positive amounts. Direct rain-gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting the rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for the rainfall probability and a Gamma distribution for the positive rainfall amounts; the two parts are joined in a two-part semicontinuous model. Three model specifications, addressing the COSP in different ways, are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves the predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
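Schematically, the two-part semicontinuous structure described above can be written as follows, where Y(s) is the hourly rainfall at gauge location s, R(s) the associated radar value and w_1, w_2 spatially correlated Gaussian effects; the linear predictors shown are a plain reading of the abstract, not the thesis's exact specification.

\[ \Pr\{Y(s) > 0\} = \Phi\big(\beta_0 + \beta_1 \log R(s) + w_1(s)\big), \qquad Y(s) \mid Y(s) > 0 \;\sim\; \mathrm{Gamma}\big(a,\, a/\mu(s)\big), \qquad \log \mu(s) = \gamma_0 + \gamma_1 \log R(s) + w_2(s), \]

with \Phi the standard normal cdf (probit link) and the Gamma distribution parameterized by shape a and rate a/\mu(s), so that its mean is \mu(s).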

Relevance:

10.00%

Abstract:

It has been estimated that, globally, one third of the edible food destined for human consumption is lost or wasted along the food supply chain. Much of this waste comes from the Global North, where consumers are considered the biggest contributors. Several studies have tried to analyze and estimate Household Food Waste (HFW), especially in the UK and Northern Europe. Accurate studies at the national level exist only in the UK, Finland and Norway, while no such studies are available in Italy, except for survey-based research. However, there is widespread awareness that such methods might not be able to estimate food waste accurately. Results from the literature clearly suggest that surveys yield lower estimates of food waste than waste sorting and weighing analyses or diary studies. The hypothesis that household food waste is under-estimated when measured through questionnaires has therefore been investigated. First, a literature review of behavioral economics and heuristics is presented; then, a review of the existing methodologies for gathering national data on household food waste is illustrated. Finally, a pilot experiment to test a mixed methodology is proposed. While the literature suggests that four specific cognitive biases might affect the reliability of questionnaire answers, the results of the present experiment clearly indicate that there is a substantial difference between how much individuals think they waste and how much they actually do. The result is a mixed methodology based on questionnaires, diaries and waste sorting, able to overcome the drawbacks of each individual method.