11 results for Weighting
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Deformability is often crucial to the design of many civil-engineering structural elements. Moreover, design is all the more burdensome if both long- and short-term deformability must be considered. In this thesis, long- and short-term deformability has been studied from both the material and the structural modelling points of view, and two materials have been handled: pultruded composites and concrete. A new finite-element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces the classical beam-theory hypothesis of plane cross-sections in the deformed state. It also reduces the total number of degrees of freedom, making analyses faster than with two-dimensional finite elements. Warping in the longitudinal direction is left free, allowing phenomena such as shear lag to be described. The new finite-element model has first been applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state has been considered through a smeared crack model for beams under bending. At a second stage, the FE model has been extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings; the generalized Maxwell model has been adopted. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens. Both tension and shear tests have been executed. Some specimens have been strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests have been run in a climate room, with specimens kept under constant load for 2 years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity.
An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (defined by damage models) and are thus associated with the intact material.
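The generalized Maxwell model adopted for the viscoelastic extension represents relaxation as a Prony series. A minimal sketch in Python, where the branch moduli and relaxation times are illustrative placeholders, not values from the thesis:

```python
import math

def relaxation_modulus(t, e_inf, branches):
    """Generalized Maxwell (Prony series) relaxation modulus:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i),
    where each (E_i, tau_i) pair is one spring-dashpot branch."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in branches)

# Illustrative branches: (modulus in MPa, relaxation time in hours)
branches = [(500.0, 10.0), (300.0, 1000.0)]
E0 = relaxation_modulus(0.0, 2000.0, branches)      # instantaneous modulus
E_long = relaxation_modulus(1e6, 2000.0, branches)  # long-term modulus
```

At t = 0 all branches contribute fully; as t grows each exponential decays and the modulus relaxes toward the long-term value E_inf, which is the behaviour exploited when fitting creep-test data.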
Abstract:
Marine soft-bottom systems show high variability across multiple spatial and temporal scales. Natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. Describing such spatial variability is required to understand the ecological processes behind it. However, in order to obtain better estimates of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to give a significant contribution both to improving the methodological approaches to the study of biological variability in soft-bottom habitats and to increasing the knowledge of the effect that different processes (both natural and anthropogenic) could have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft-bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index but behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages.
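Whittaker's index and a Michaelis-Menten-type rescaling can be sketched as follows. The transform shown (beta / (k + beta), with k = 1) is an illustrative assumption, not necessarily the exact probability index derived in the thesis:

```python
import numpy as np

def whittaker_beta(samples):
    """Whittaker's beta diversity beta_w = S / mean(alpha): total
    species richness over mean per-sample richness. `samples` is a
    presence/absence matrix (rows = samples, columns = species)."""
    P = np.asarray(samples, dtype=bool)
    S = P.any(axis=0).sum()            # pooled (gamma) richness
    alpha_mean = P.sum(axis=1).mean()  # mean alpha richness
    return S / alpha_mean

def mm_rescale(beta, k=1.0):
    """Michaelis-Menten-type transform beta / (k + beta): bounded in
    [0, 1), so it saturates rather than growing without limit when
    rare, single-sample species inflate beta."""
    return beta / (k + beta)

# Two samples sharing no species: maximal turnover for two samples
beta = whittaker_beta([[1, 1, 0, 0], [0, 0, 1, 1]])  # S = 4, alpha = 2
```

The bounded transform is what makes assemblages with many rare species comparable: an extra singleton species pushes beta up, but the rescaled value changes less and less as heterogeneity increases.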
Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna depends heavily on both the spatial and the temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches and the interpretation of results. Problems often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish trawling) sources of disturbance. The few studies in this area that aimed at understanding which of these processes mainly affect macrobenthos were conducted at a small spatial scale, as they were designed to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis. I have described in detail spatial patterns both in terms of sedimentary characteristics and of macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect over a background of natural spatial variability.
The results obtained suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that might have important consequences for benthic assemblages, affecting for example their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both abundance patterns and beta diversity). Nonetheless, more local effects, probably related to organic enrichment or pollution from the Po river input, have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The effect of gas platforms is mainly restricted to small spatial scales and is related to a change in habitat complexity due to natural dislodgement, or removal during structure cleaning, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment and, if so, in planning sustainable strategies for proper management of the affected area.
Abstract:
This thesis focuses on the ceramic process for the production of optical-grade transparent materials to be used as laser hosts. In order to be transparent, a ceramic material must exhibit a very low concentration of defects. Defects are mainly represented by secondary or grain-boundary phases and by residual pores. Strict control of the stoichiometry is mandatory to avoid the formation of secondary phases, whereas residual porosity needs to be below 150 ppm. In order to fulfil these requirements, specific experimental conditions must be combined. In addition, powders need to be nanometric, or at least sub-micrometric, and extremely pure. On the other hand, nanometric powders aggregate easily, which leads to poor, inhomogeneous packing during shaping by pressing and to the formation of residual pores during sintering. Very fine powders are also difficult to handle and tend to adsorb water on their surface. Finally, powder manipulation (weighing operations, solvent removal, spray drying, shaping, etc.) easily introduces impurities. All these features must be fully controlled in order to avoid the formation of defects that act as scattering sources, thus decreasing the transparency of the material. The important role played by processing in the transparency of ceramic materials is often underestimated. In the literature a high level of transparency has been reported by many authors, but the experimental process, in particular the powder treatment and shaping, is seldom described extensively, and important information necessary to reproduce the reported results is often missing. The main goal of the present study is therefore to give additional information on the way the experimental features affect the microstructural evolution of YAG-based ceramics and thus the final properties, in particular transparency. Commercial powders are used to prepare YAG materials doped with Nd or Yb by reactive sintering under high vacuum.
These dopants have been selected as the most appropriate for high-energy and high-peak-power lasers. As regards the powder treatment, the thesis focuses on the influence of the solvent-removal technique (rotavapor versus spray drying of suspensions in ethanol), the ball-milling duration and speed, the suspension concentration, the solvent ratio, and the type and amount of dispersant. The influence of the powder type and process on the powder packing, as well as the pressure conditions during shaping by pressing, are also described. Finally, calcination, sintering under high vacuum and in a clean atmosphere, and post-sintering cycles are studied and related to the final microstructure, analyzed by SEM-EDS and HR-TEM, and to the optical and laser properties.
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown among researchers, and the method has seen significant development in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is to solve ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result it has been termed the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited. It has been proven to fail for problems with strong material discontinuities as well as for problems involving singularities and irregularities. On the other hand, the very well-known Finite Element (FE) method overcomes these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
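The DQ approximation f'(x_i) ≈ Σ_j c_ij f(x_j) can be made concrete with a small sketch that builds the first-derivative weighting coefficients from Lagrange-polynomial formulas (Shu's explicit expressions) on a Chebyshev-Gauss-Lobatto grid; the node count and test function are arbitrary choices for illustration:

```python
import numpy as np

def dq_weights(x):
    """First-derivative DQ weighting coefficients c[i, j] such that
    f'(x_i) ~= sum_j c[i, j] * f(x_j), via Shu's explicit formulas:
    c_ij = M(x_i) / ((x_i - x_j) * M(x_j)) for i != j,
    c_ii = -sum_{j != i} c_ij, with M(x_i) = prod_{k != i} (x_i - x_k)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)     # avoid dividing by zero on the diagonal
    M = diff.prod(axis=1)           # M[i] = prod_{k != i} (x_i - x_k)
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                c[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        c[i, i] = -c[i].sum()       # row-sum property of DQ weights
    return c

# Chebyshev-Gauss-Lobatto nodes on [0, 1], a typical GDQ grid choice
n = 11
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
C = dq_weights(x)
df = C @ x**3     # exact for polynomials of degree < n
```

Because the weights come from polynomial interpolation over all nodes, the derivative of x³ is recovered to round-off with only 11 points, which is the source of the high accuracy the abstract mentions.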
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change-of-support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and by right-skewness of the distribution of positive amounts. Direct rain-gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions is investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively).
Calibration is achieved, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
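The two-part semicontinuous structure (probit-linked rain probability, Gamma-distributed positive amounts with a log-scale link to radar) can be sketched as a simulator. All coefficient values below are illustrative, and the spatially correlated Gaussian effects of the full model are omitted:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def probit_cdf(x):
    """Standard normal CDF, the inverse of the probit link."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def draw_rainfall(radar, a0, a1, b0, b1, shape):
    """One draw from the two-part model at a gauge location:
    P(rain) = Phi(a0 + a1 * log(radar)); positive amounts follow a
    Gamma whose mean is exp(b0 + b1 * log(radar))."""
    lr = math.log(radar)
    if rng.random() >= probit_cdf(a0 + a1 * lr):
        return 0.0                          # dry hour: an exact zero
    mean = math.exp(b0 + b1 * lr)
    return rng.gamma(shape, mean / shape)   # shape * scale = mean

# With radar = 1.0 and a0 = 0, the rain probability is exactly 0.5
draws = [draw_rainfall(1.0, 0.0, 1.0, 0.5, 1.0, 2.0) for _ in range(20000)]
wet_frac = sum(d > 0 for d in draws) / len(draws)
```

The point mass at zero plus a right-skewed positive part is exactly the mixture that makes standard Gaussian kriging unsuitable here, and it is why the PIT histogram needs the non-randomized treatment mentioned above.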
Abstract:
It has been estimated that one third of the edible food destined for human consumption is lost or wasted along the food supply chain globally. Much of the waste comes from the Global North, where consumers are considered the biggest contributors. Different studies have tried to analyze and estimate Household Food Waste (HFW), especially in the UK and Northern Europe. The result is that accurate studies at the national level exist only in the UK, Finland and Norway, while no such studies are available in Italy, except for survey-based research. However, there is widespread awareness that such methods might not be able to estimate Food Waste reliably. Results emerging from the literature clearly suggest that surveys yield lower estimates of Food Waste than waste sorting and weighing analyses or diary studies. The hypothesis that household food waste is under-estimated when measured through questionnaires has been investigated. First, a literature review of behavioral economics and heuristics is proposed; then, a review of the sector literature listing the existing methodologies for gathering national data on Household Food Waste is illustrated. Finally, a pilot experiment to test a mixed methodology is proposed. While the literature suggests that four specific cognitive biases might affect the reliability of answers to questionnaires, the results of the present experiment clearly indicate that there is a relevant difference between how much individuals think they waste and how much they actually do. The result is a mixed methodology based on questionnaire, diary and waste sorting, able to overcome the drawbacks of each single method.
Abstract:
The topic of the PhD project is the modelling of the soil-water dynamics inside an instrumented embankment section along the Secchia River (Cavezzo, MO) in the period from 2017 to 2018, and the quantification of the performance of the direct and inverse simulations. The commercial code Hydrus-2D by PC-Progress has been chosen to run the direct simulations. Different soil-hydraulic models have been adopted and compared. The parameters of the different hydraulic models are calibrated using a local optimization method based on the Levenberg-Marquardt algorithm implemented in the Hydrus package. The calibration program is carried out using different types of observation-point datasets, different weighting distributions, different combinations of optimized parameters and different initial sets of parameters. The final goal is an in-depth study of the potential and limits of inverse analysis when applied to a complex geotechnical problem such as the case study. The second part of the research focuses on the effects of plant roots and of soil-vegetation-atmosphere interaction on the spatial and temporal distribution of pore-water pressure in soil. The investigated soil belongs to the West Charlestown Bypass embankment, Newcastle, Australia, which showed shallow instabilities in past years; long-stem planting is intended to stabilize the slope. The chosen plant species is Melaleuca styphelioides, native to eastern Australia. The research activity included the design and realization of a specific large-scale apparatus for laboratory experiments. Local suction measurements at certain depth intervals and radial distances from the root bulb are recorded within the vegetated soil mass under controlled boundary conditions. The experiments are then reproduced numerically using the commercial code Hydrus-2D. Laboratory data are used to calibrate the root-water-uptake (RWU) parameters and the parameters of the hydraulic model.
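The weighted least-squares objective that a Levenberg-Marquardt loop minimizes in this kind of inverse analysis can be sketched as follows. The van Genuchten retention function is the standard soil-hydraulic choice in Hydrus, but the parameter values and weights here are purely illustrative:

```python
import math

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) from the van Genuchten retention curve
    (pressure head h < 0 in the unsaturated range, in units of 1/alpha)."""
    if h >= 0.0:
        return theta_s
    m = 1.0 - 1.0 / n
    Se = (1.0 + abs(alpha * h) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def weighted_objective(params, observations, weights):
    """Phi(b) = sum_i w_i * (theta_obs_i - theta_sim_i(b))**2.
    The weighting distribution w_i controls how much each observation
    point counts in the inverse problem."""
    return sum(w * (obs - van_genuchten(h, *params)) ** 2
               for (h, obs), w in zip(observations, weights))

# Synthetic check: at the true parameters the objective vanishes
true_params = (0.05, 0.40, 0.02, 1.6)
obs = [(h, van_genuchten(h, *true_params)) for h in (-10.0, -100.0, -1000.0)]
residual = weighted_objective(true_params, obs, [1.0, 1.0, 1.0])
```

Changing the weights w_i is exactly what the "different weighting distributions" in the calibration program amount to: the same observations can pull the optimizer toward different parameter sets depending on how they are weighted.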
Abstract:
This dissertation consists of three standalone articles that contribute to the economics literature on technology adoption, information diffusion and network economics, using primary data sources from Ethiopia. The first empirical paper identifies the main behavioral factors affecting the adoption of brand-new (radical) and upgraded (incremental) bioenergy innovations in Ethiopia. The results highlight the importance of targeting different instruments to increase the adoption rate of the two types of innovations. The second and third empirical papers use primary data collected from 3,693 high-school students in Ethiopia and shed light on how informants should be selected to disseminate new information, mainly concerning environmental issues, effectively and equitably. Several well-recognized standard centrality measures are used to select informants. These standard centrality measures, however, are based on the network topology, shaped only by the number of connections, and fail to incorporate the intrinsic motivations of the informants. This thesis introduces an augmented centrality measure (ACM) by modifying the eigenvector centrality measure through weighting the adjacency matrix with the altruism levels of connected nodes. The results from the two papers suggest that targeting informants based on network position and behavioral attributes ensures a more effective and equitable (from a gender perspective) transmission of information in social networks than selecting informants on network centrality measures alone, notably when the information concerns environmental issues.
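The idea of an altruism-weighted eigenvector centrality can be sketched with power iteration. The specific weighting below (scaling column j of the adjacency matrix by node j's altruism score) is an illustrative reading of the ACM, not necessarily the thesis's exact formula:

```python
import numpy as np

def augmented_centrality(A, altruism, iters=500):
    """Eigenvector-style centrality on an adjacency matrix reweighted
    by node-level altruism scores, computed by power iteration: links
    to more altruistic nodes count more toward a node's score."""
    W = np.asarray(A, float) * np.asarray(altruism, float)[None, :]
    x = np.ones(W.shape[0])
    for _ in range(iters):
        x = W @ x
        x /= np.linalg.norm(x)   # normalize to keep iterates bounded
    return x

# Triangle network with equal altruism: all three nodes tie, as expected
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
scores = augmented_centrality(A, np.ones(3))
```

With unequal altruism scores the ranking can depart from plain eigenvector centrality even on the same topology, which is precisely the point of augmenting the measure with behavioral attributes.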
Abstract:
It is still unknown whether traditional risk factors may have a sex-specific impact on the severity of coronary artery disease (CAD) and subsequent mortality in acute coronary syndromes (ACS). We identified 14,793 patients who underwent coronary angiography for acute coronary syndromes in the ISACS-TC (NCT01218776) registry from 2010 to 2019. The main outcome measure was the association between conventional risk factors and severity of CAD, and its relationship with 30-day mortality. Risk ratios (RRs) and 95% CIs were calculated from the ratio of the absolute risks of women versus men using inverse probability weighting. Severity of disease was categorized as obstructive (≥50% stenosis) versus nonobstructive CAD, specifically Ischemia and No Obstructive Coronary Artery disease (INOCA) and Myocardial Infarction with Nonobstructive Coronary Arteries (MINOCA). The RR ratio for obstructive CAD in women versus men among people without diabetes mellitus was 0.49 (95% CI, 0.41–0.60) and among those with diabetes mellitus was 0.89 (95% CI, 0.62–1.29), with an interaction by diabetes mellitus status of P=0.002. Exposure to smoking shifted the RR ratios from 0.50 (95% CI, 0.41–0.61) in nonsmokers to 0.75 (95% CI, 0.54–1.03) in current smokers, with an interaction by smoking status of P=0.018. There were no significant sex-related interactions with hypercholesterolemia and hypertension. Women with obstructive CAD had higher 30-day mortality rates than men (RR, 1.75; 95% CI, 1.48–2.07). No sex differences in mortality were observed in patients with INOCA/MINOCA. In conclusion, obstructive CAD in women signifies a higher risk of mortality compared with men. Current smoking and diabetes mellitus disproportionately increase the risk of obstructive CAD in women. Achieving the goal of improving cardiovascular health in women still requires intensive efforts toward further implementation of lifestyle and treatment interventions.
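The inverse-probability-weighting step behind these risk ratios can be sketched as follows. The propensity scores are assumed to come from a separately fitted model of sex on the conventional risk factors, and the data in the demo are toy values, not registry data:

```python
import numpy as np

def ipw_risk_ratio(female, outcome, propensity):
    """Risk ratio of the outcome, women vs men, with inverse
    probability weights w_i = z_i / e_i + (1 - z_i) / (1 - e_i),
    where e_i = P(female | covariates) is a pre-fitted propensity."""
    z = np.asarray(female, float)
    y = np.asarray(outcome, float)
    e = np.asarray(propensity, float)
    w = z / e + (1.0 - z) / (1.0 - e)
    risk_f = np.sum(w * z * y) / np.sum(w * z)              # weighted risk, women
    risk_m = np.sum(w * (1.0 - z) * y) / np.sum(w * (1.0 - z))  # weighted risk, men
    return risk_f / risk_m

# Sanity check: a constant propensity of 0.5 reduces to the crude risk ratio
z = [1, 1, 1, 1, 0, 0, 0, 0]
y = [1, 0, 0, 0, 1, 1, 0, 0]
rr = ipw_risk_ratio(z, y, [0.5] * 8)   # crude risks: 0.25 vs 0.50
```

The weights rebalance the two sexes so that measured confounders are equally distributed, which is what lets the ratio of absolute risks be read as an adjusted comparison.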
Abstract:
Objective: To investigate the association between the four traditional coronary heart disease (CHD) risk factors (hypertension, smoking, hypercholesterolemia, and diabetes) and the outcomes of a first ACS. Methods: Data were drawn from the ISACS Archives. The study participants consisted of 70,953 patients with a first ACS but without prior CHD. Primary outcomes were the patients' age at hospital presentation and 30-day all-cause mortality. The risk ratios for mortality among subgroups were calculated using a balancing strategy by inverse probability weighting. Trends were evaluated by Pearson's correlation coefficient (r). Results: For fatal ACS (n=6,097), exposure to at least one traditional CHD risk factor ranged from 77.6% in women to 74.5% in men. The presence of all four CHD risk factors significantly decreased the age at the time of the ACS event and death by nearly half a decade compared with the absence of any traditional risk factor, in both women (from 67.1±12.0 to 61.9±10.3 years; r=-0.089, P<0.001) and men (from 62.8±12.2 to 58.9±9.9 years; r=-0.096, P<0.001). By contrast, there was an inverse association between the number of traditional CHD risk factors and 30-day mortality. The mortality rates in women ranged from 7.7% with four traditional CHD risk factors to 16.3% with no traditional risk factors (r=0.073, P<0.001). The corresponding rates in men were 4.8% and 11.5%, respectively (r=0.078, P<0.001). The risk ratios among individuals with at least one CHD risk factor versus those with no traditional risk factors were 0.72 (95% CI: 0.65-0.79) in women and 0.64 (95% CI: 0.59-0.70) in men. This association was consistent among patient subgroups managed with guideline-recommended therapeutic options. Conclusions: The vast majority of patients who die of ACS have been exposed to traditional CHD risk factors. Patients with CHD risk factors die much earlier in life, but they have a lower relative risk of 30-day mortality than those with no traditional CHD risk factors, even in the context of equitable evidence-based treatments after hospital admission.
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities remarkably broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. Since the design and development of novel security mechanisms can be explored from different perspectives and levels, we place our attention on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems. This solution identifies threats, security controls, and moving-target-defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contribution according to the quality of its partial model.
Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and requiring minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
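The quality-weighted federated aggregation can be sketched as follows. Using a normalized per-client quality score as the aggregation weight is an illustrative choice, since the dissertation's exact scoring of partial models is not detailed here:

```python
import numpy as np

def quality_weighted_aggregate(client_params, quality):
    """Aggregate client model parameters with weights proportional to
    each client's quality score (e.g. validation accuracy of its
    partial model), so low-quality or poisoned contributions count
    less toward the global model."""
    q = np.asarray(quality, float)
    w = q / q.sum()                                    # normalized trust weights
    stacked = np.stack([np.asarray(p, float) for p in client_params])
    return stacked.T @ w                               # weighted parameter average

# Two clients: the higher-quality model dominates the aggregate
global_params = quality_weighted_aggregate([[0.0, 0.0], [1.0, 1.0]],
                                           [1.0, 3.0])
```

Compared with plain federated averaging, which weights only by dataset size, tying the weight to model quality is what lets the framework down-rank untrustworthy participants adaptively.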