12 results for automation of fit analysis
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The aim of this dissertation is to show the power of contrastive analysis in successfully predicting the errors a language learner will make, by means of a concrete case study. First, language transfer is described, along with its importance for second language acquisition. Second, a brief account of the history and development of contrastive analysis is offered. Third, the focus of the thesis moves to an analysis of the errors typically made by language learners. To conclude, the dissertation focuses on the concrete case study of a Russian learner of English: after an analysis of the errors the student is likely to make, a recorded conversation is examined.
Abstract:
Artificial Intelligence (AI) is gaining ever more ground in every sphere of human life, to the point that it is now even used to pass sentences in courts. The use of AI in the field of Law is, however, deemed quite controversial: it could provide more objectivity, yet it could also entail an abuse of power, given that bias in the algorithms behind AI may undermine accuracy. As a product of AI, machine translation is increasingly being used in the field of Law as well, to translate laws, judgements, contracts, etc. between different languages and different legal systems. In the legal setting of Company Law, accuracy of content and suitability of terminology play a crucial role in a translation task, as any addition or omission of content, or any mistranslation of terms, could entail legal consequences for companies. The purpose of the present study is first to assess which of two neural machine translation systems, DeepL and ModernMT, produces the more suitable translation from Italian into German of the atto costitutivo of an Italian s.r.l., in terms of accuracy of content and correctness of terminology, and then to assess which translation proves closer to a human reference translation. To achieve these aims, a human evaluation and an automatic evaluation are carried out, based on the MQM taxonomy and the BLEU metric respectively. The results of both evaluations show an overall better performance by ModernMT in terms of content accuracy, suitability of terminology, and closeness to a human translation. As emerged from the MQM-based evaluation, its accuracy and terminology errors account for just 8.43% (as opposed to DeepL's 9.22%), while it obtains an overall BLEU score of 29.14 (against DeepL's 27.02). The overall performances show, however, that machines still face barriers in overcoming semantic complexity, tackling polysemy, and choosing domain-specific terminology, which suggests that the gap with human translation may still be remarkable.
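As a minimal illustration of the automatic evaluation step, the sketch below computes corpus-level BLEU for two hypothetical system outputs against one human reference using the sacrebleu library; the sentences are invented placeholders, not data from the study.

```python
# Minimal sketch: corpus-level BLEU for two MT systems against one
# human reference, via sacrebleu. Sentences are invented placeholders.
import sacrebleu

reference = ["Die Gesellschaft hat ihren Sitz in Bologna."]   # human translation
deepl_out = ["Die Gesellschaft hat ihren Sitz in Bologna."]   # hypothetical DeepL output
modernmt_out = ["Die Firma hat ihren Sitz in Bologna."]       # hypothetical ModernMT output

# sacrebleu expects a list of hypotheses and a list of reference streams
for name, hyp in [("DeepL", deepl_out), ("ModernMT", modernmt_out)]:
    bleu = sacrebleu.corpus_bleu(hyp, [reference])
    print(f"{name}: BLEU = {bleu.score:.2f}")
```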
Abstract:
The work for the present thesis started in California, during my semester as an exchange student overseas. California is known worldwide for its seismicity and for its effort in the earthquake engineering research field. For this reason, I immediately found interesting the proposal of the Structural Dynamics professor, Maria Q. Feng, to work on a pushover analysis of the existing Jamboree Road Overcrossing bridge. Concrete is a popular building material in California and, for the most part, it serves its functions well. However, concrete is inherently brittle and performs poorly during earthquakes if not reinforced properly. The San Fernando Earthquake of 1971 dramatically demonstrated this characteristic. Shortly thereafter, code writers revised the design provisions for new concrete buildings so as to provide adequate ductility to resist strong ground shaking. There remain, nonetheless, millions of square feet of non-ductile concrete buildings in California. The purpose of this work is to perform a pushover analysis of an existing bridge located in Southern California and compare the results with those of a nonlinear time-history analysis. The analyses have been executed with the software OpenSees, the Open System for Earthquake Engineering Simulation. The Jamboree Road Overcrossing is classified as a Standard Ordinary Bridge: the JRO is a typical three-span continuous cast-in-place prestressed post-tensioned box-girder. The total length of the bridge is 366 ft, and the heights of the two bents are 26.41 ft and 28.41 ft respectively. Both the pushover analysis and the nonlinear time-history analysis require a model that accounts for the nonlinearities of the system; in order to execute nonlinear analyses of highway bridges it is essential to incorporate an accurate model of the material behavior. It has been observed that, after destructive earthquakes, the columns are among the most damaged elements of highway bridges. To evaluate the performance of bridge columns during seismic events, an adequate model of the column must be incorporated, and part of the work of the present thesis is in fact dedicated to the modeling of the bents. Different types of nonlinear elements have been studied and modeled, with emphasis on the determination and location of the plasticity zone length. Furthermore, different models for the concrete and steel materials have been considered, and the parameters defining the constitutive laws of the different materials have been selected with care. The work is structured into four chapters; a brief overview of their content follows. The first chapter introduces the concepts related to capacity design, the current philosophy of seismic design, and presents nonlinear analyses, both static (pushover) and dynamic (time-history). The final paragraph concludes with a short description of how to determine the seismic demand at a specific site, according to the latest design criteria in California. The second chapter deals with the formulation of force-based finite elements and the issues regarding the objectivity of the response in the nonlinear range. Both concentrated- and distributed-plasticity elements are discussed in detail. The third chapter presents the existing structure, the software used, OpenSees, and the modeling assumptions and issues; the creation of the nonlinear model represents a central part of this work. Nonlinear material constitutive laws for concrete and reinforcing steel are discussed in detail, as are the different scenarios employed in the column modeling. Finally, the results of the pushover analysis are presented in chapter four. Capacity curves are examined for the different model scenarios used, and failure modes of concrete and steel are discussed. The capacity curve is converted into a capacity spectrum and intersected with the design spectrum. In the last paragraph, the results of the nonlinear time-history analyses are compared to those of the pushover analysis.
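As an aside on the capacity-curve-to-capacity-spectrum conversion mentioned for chapter four, the sketch below applies the standard ATC-40 modal transformation; all modal quantities and curve points are hypothetical placeholders, not values from the JRO model.

```python
# Minimal sketch of the capacity-curve -> capacity-spectrum conversion
# (standard ATC-40 transformation). All numbers are hypothetical.
import numpy as np

W = 5000.0        # total seismic weight (kip), assumed
alpha1 = 0.85     # first-mode modal mass coefficient, assumed
PF1 = 1.3         # first-mode modal participation factor, assumed
phi_roof = 1.0    # first-mode shape ordinate at the monitored node, assumed

# capacity curve from a pushover analysis: base shear V vs. roof displacement
d_roof = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # in
V_base = np.array([0.0, 400.0, 700.0, 850.0, 900.0])  # kip

Sa = (V_base / W) / alpha1      # spectral acceleration (g)
Sd = d_roof / (PF1 * phi_roof)  # spectral displacement (in)

for sd, sa in zip(Sd, Sa):
    print(f"Sd = {sd:5.2f} in   Sa = {sa:.3f} g")
```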
Abstract:
The aim of the work is to conduct a finite element analysis of a small-size concrete beam and of a full-size concrete beam, both internally reinforced with BFRP and exposed to elevated temperatures. Experimental tests performed at Kingston University have been used for comparison with the results of the numerical analysis of the small-size concrete beam. Once the behavior of the small-size beam at room temperature has been investigated, the analysis moves to the heating phase: the reinforced beams are tested at 100°C, 200°C and 300°C under load. The aim of the finite element analysis is to reproduce the three-point bending test carried out in the oven while the beam is exposed to room and elevated temperatures. The performance and deformability of the reinforced beams are directly correlated to the material properties, and an extensive analysis of the elastic modulus and of the coefficient of thermal expansion is given in this work. Developing a good correlation between the numerical model and the experimental test is the main objective of the analysis of the small-size concrete beam; for both models the aim is also to estimate the deterioration of the material properties due to the heating process and the influence of different parameters on the final result. The focus of the full-size modelling, which occupies the last part of this work, is to evaluate the effect of elevated temperatures, the material deterioration and the deflection trend in a reinforced beam of different size. A comparison between the results of the different models has been developed.
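To illustrate how a degraded elastic modulus drives the deflection trend in a three-point bending setup, the following sketch evaluates the classical midspan deflection formula delta = P·L³/(48·E·I) with assumed temperature retention factors; none of the numbers are the Kingston University results.

```python
# Minimal sketch: midspan deflection of a simply supported beam under
# three-point bending, delta = P*L^3 / (48*E*I), with the elastic
# modulus reduced at elevated temperature. All values are assumed.
P = 10e3           # applied midspan load (N), assumed
L = 1.2            # span (m), assumed small-size beam
b, h = 0.10, 0.15  # cross-section width and depth (m), assumed
I = b * h**3 / 12  # second moment of area (m^4)

E_room = 30e9      # elastic modulus at room temperature (Pa), assumed
# hypothetical retention factors of E at each test temperature
retention = {20: 1.00, 100: 0.95, 200: 0.85, 300: 0.70}

for T, k in retention.items():
    E = k * E_room
    delta = P * L**3 / (48 * E * I)
    print(f"T = {T:3d} °C   E = {E/1e9:5.1f} GPa   deflection = {delta*1e3:.2f} mm")
```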
Parametric Sensitivity Analysis of the Most Recent Computational Models of Rabbit Cardiac Pacemaking
Abstract:
The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties, simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) than in the Severi model (7%), implying greater robustness of the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effect on pacemaking activity.
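A generic sketch of the randomization-plus-regression procedure described above follows; run_san_model is a stand-in black box rather than the Maltsev or Severi formulation, so the resulting coefficients are illustrative only.

```python
# Generic sketch of randomization + regression sensitivity analysis.
# run_san_model() fakes a SAN cell simulation so the script runs
# end to end; a real study would integrate the ODE model instead.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_trials = 6, 500

def run_san_model(scale):
    # placeholder "model": cycle length responds to parameter scale
    # factors through hidden weights, plus a little noise
    true_w = np.array([0.5, -0.8, 0.1, 0.0, 0.3, -0.2])
    return 300.0 * np.exp(np.log(scale) @ true_w) * rng.lognormal(0, 0.02)

# randomize parameters: lognormal scale factors around baseline (= 1)
scales = rng.lognormal(mean=0.0, sigma=0.1, size=(n_trials, n_params))
cl = np.array([run_san_model(s) for s in scales])  # cycle length output

# regress log(output) on log(parameter scales); the coefficients are
# the parameter sensitivities of cycle length
X = np.column_stack([np.ones(n_trials), np.log(scales)])
coef, *_ = np.linalg.lstsq(X, np.log(cl), rcond=None)
print("sensitivities:", np.round(coef[1:], 3))
```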
Abstract:
The mass estimation of galaxy clusters is a crucial point for modern cosmology and can be obtained with several different techniques. In this work we discuss a new method to measure the mass of galaxy clusters, connecting the gravitational potential of the cluster with the kinematical properties of its surroundings. We explore the dynamics of the structures located in the region outside the virialized cluster: we identify groups of galaxies, such as sheets or filaments, in the cluster outer region, and model how the cluster gravitational potential perturbs the motion of these structures away from the Hubble flow. This identification is done in redshift space, where we look for overdensities with a filamentary shape. We then use a mean radial velocity profile, which has been found to be a quite universal trend in simulations, and fit the radial infall velocity profile of the overdensities found. The method has been tested on several cluster-size haloes from cosmological N-body simulations, giving results in very good agreement with the true values of the virial masses of the haloes and of the orientations of the sheets. We then applied the method to the Coma cluster, and also in this case we found a good correspondence with previous estimates. A mass discrepancy can be noticed between sheets with different alignments with respect to the center of the cluster. This difference can be used to reproduce the shape of the cluster, and to demonstrate that spherical symmetry is not always a valid assumption: if the cluster is not spherical, sheets oriented along different axes should feel slightly different gravitational potentials, and so give different masses as a result of the analysis described above. This estimate, too, has been tested on cosmological simulations and then applied to Coma, showing the actual non-sphericity of this cluster.
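The profile-fitting step can be sketched along the following lines; the functional form (Hubble flow minus a power-law infall term) and the data are synthetic stand-ins for the universal profile found in simulations, not the one actually used in the thesis.

```python
# Illustrative sketch: fit an infall velocity profile to recover a
# mass-related amplitude. Profile form and data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

H0 = 70.0  # Hubble constant, km/s/Mpc, assumed

def v_radial(r, A, b):
    # mean radial velocity: Hubble flow minus a power-law infall term
    # whose amplitude A scales with the cluster mass
    return H0 * r - A * r**(-b)

# synthetic "observed" sheet velocities outside the virial radius
r = np.linspace(3.0, 12.0, 20)  # Mpc
v = v_radial(r, A=1500.0, b=1.0) + np.random.default_rng(1).normal(0, 30, r.size)

popt, pcov = curve_fit(v_radial, r, v, p0=(1000.0, 0.8))
print(f"A = {popt[0]:.0f} km/s Mpc^b,  b = {popt[1]:.2f}")
```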
Abstract:
This work presents a numerical validation of a finite element model (FEM) against experimental tests of a new-generation wind turbine blade designed by TPI Composites Inc., called BSDS (Blade System Design Study). The research focuses on the finite element (FE) analysis of the BSDS blade and its comparison with experimental data from static and dynamic investigations. The goal of the research is to create a general procedure, based on a finite element model, that can be used to create an accurate digital copy of any kind of blade. The blade prototype was created in SolidWorks, accurately reproducing the blade of the Sandia National Laboratories Blade System Design Study. At a later stage the SolidWorks model was imported into Ansys Mechanical APDL, where the shell geometry was created and modal, static and fatigue analyses were carried out. The outcomes of the FEM analysis were compared with real tests on the BSDS blade carried out at the Clarkson University laboratory through a new procedure, called Blade Test Facility, which includes different methods for both static and dynamic testing of the wind turbine blade. The outcomes of the FEM analysis reproduce the real behavior of the blade under static loads in a very satisfactory way. A more detailed study of the material properties could further improve the accuracy of the analysis.
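A typical check in such a validation is comparing the FEM natural frequencies with the measured ones; the sketch below does this with invented numbers, not the BSDS blade results.

```python
# Minimal sketch: percent error between FEM modal frequencies and
# experimental ones. All frequencies are invented placeholders.
fem_hz = [4.12, 11.80, 25.40]   # first three FEM natural frequencies
exp_hz = [4.05, 12.10, 24.90]   # hypothetical Blade Test Facility data

for i, (f_fem, f_exp) in enumerate(zip(fem_hz, exp_hz), start=1):
    err = 100.0 * (f_fem - f_exp) / f_exp
    print(f"mode {i}: FEM {f_fem:.2f} Hz, test {f_exp:.2f} Hz, error {err:+.1f}%")
```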
Abstract:
Damage tolerance analysis is a fairly new methodology based on prescribed inspections. The load spectra used to derive the results of these analyses strongly influence the inspection programs ultimately defined, which must therefore be as representative as possible of the loads acting on the structural component considered and, at the same time, be obtained at reduced cost and time. The principal purpose of our work is to improve on the current state of practice by developing a complete numerical Damage Tolerance analysis, able to prescribe inspection programs for typical critical aircraft components in compliance with DT regulations, starting from load spectra far more specific than those actually used today. In particular, these more specific load spectra for fatigue design have been obtained through a purpose-built flight simulator developed in a Matlab/Simulink environment. This dynamic model has been designed so that it can be used to simulate typical missions flown either manually (joystick inputs) or fully automatically (a reference trajectory must be provided). Once these flights have been simulated, the model's outputs are used to generate load spectra, which are then processed to extract information (peaks, valleys) for statistical analysis and/or comparison with other load spectra. Moreover, more detailed information (load amplitudes) has been extracted from these generated load spectra to perform the previously mentioned predictions (the Rainflow counting method is applied for this purpose). The entire methodology works in a fully automatic way: once the specified input parameters have been introduced and different typical flights have been simulated, whether manually or automatically, it is able to relate the effects of these simulated flights to the reduction in residual strength of the component considered.
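The cycle-extraction step can be sketched with the open-source rainflow package applied to a synthetic load history; the signal below merely stands in for a simulator output channel.

```python
# Sketch of the Rainflow counting step on a load history; the signal
# is synthetic, standing in for a flight-simulator output channel.
import numpy as np
import rainflow  # pip install rainflow

t = np.linspace(0.0, 60.0, 3000)
# synthetic load spectrum: slow manoeuvre loads plus gust-like ripple
load = 10.0 * np.sin(0.3 * t) + 2.0 * np.sin(4.0 * t)

# count_cycles returns (load range, cycle count) pairs for fatigue use
for load_range, count in rainflow.count_cycles(load, binsize=5.0):
    print(f"range {load_range:5.1f}  cycles {count:6.2f}")
```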
Abstract:
The goal of my study is to investigate the relationship between selected deictic shields on the pronoun ‘I’ and the involvement/detachment dichotomy in a sample of television news interviews. I focus on the use of personal pronouns in political discourse. Drawing upon Caffi’s (2007) classification of mitigating devices into bushes, hedges and shields, I focus on deictic shields on the pronoun ‘I’: I examine the way a selection of ‘I’-related deictic shields is employed in a collection of news interviews broadcast during the electoral campaign prior to the UK 2015 General Election. My purpose is to uncover the frequency of each of the selected linguistic items and their pragmatic functions within the involvement/detachment dichotomy. The research is structured as follows. Chapter 1 provides an account of previous studies in the three main areas of research: speech event analysis, institutional interaction and the news interview, and the UK 2015 General Election television programmes. Chapter 2 is centred on the involvement/detachment dichotomy: I provide an overview of nonlinguistic and linguistic features of involvement and detachment at all levels of sentence structure. Chapter 3 contains a detailed account of the data collection and data analysis process. Chapter 4 provides an accurate description of the results in three steps: quantitative analysis, qualitative analysis, and discussion of the pragmatic functions of the selected linguistic features of involvement and detachment. Chapter 5 includes a brief summary of the investigation, reviews the main findings, and indicates the limitations of the study and possible directions for further research. The results of the analysis confirm that, while some of the linguistic items examined point toward involvement, others have a detaching effect. I therefore conclude that deictic shields on the pronoun ‘I’ permit the realisation of the involvement/detachment dichotomy in the speech genre of the news interview.
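The quantitative step, counting the frequencies of the selected ‘I’-related items, can be sketched with a simple case-insensitive string count; both the item list and the transcript below are illustrative placeholders, not the study's inventory or corpus.

```python
# Illustrative sketch: frequency counts of selected 'I'-related deictic
# shields in an interview transcript. Items and text are placeholders.
import re
from collections import Counter

shields = ["I think", "I mean", "I believe", "I would say"]
transcript = "I think we made the right call. I mean, I would say voters agree."

counts = Counter()
for item in shields:
    counts[item] = len(re.findall(re.escape(item), transcript, flags=re.IGNORECASE))
print(counts)
```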
Abstract:
The work analyses tourist water demand in Benidorm, a sun-and-sand destination ranked fourth in Spain by number of visitors, where tourism competes with local residents, nature, agriculture and industrial sectors for scarce water resources. In particular, we have studied the correlation between the water consumption of 83 hotels in Benidorm and those of their characteristics and services which can impact water use. For this purpose, we have examined the water consumption billed by the water utility company HIDRAQUA to the tourist facilities in the municipality of Benidorm over the period January 2010 - October 2022, and we have explored the hotels’ features thanks to the collaboration of the tourism and hotels association HOSBEC. To give a better understanding and contextualization of our analysis, we first describe the complex water supply system and the efforts that have been made to reduce the threat posed by the peculiar climate conditions of the region. We saw that water consumption per guest has slightly decreased in recent years while the tourist flow has increased: the global pandemic halted travel for more than a year, but both the tourist flow and tourist water consumption are now approaching pre-pandemic levels. We found that larger hotels, and in particular those open all year round, which probably tend to offer more water-demanding services than seasonal ones, have a higher water consumption per bed. From the analysis of the role of the different hotel characteristics in water demand patterns, we found that water use increases with hotel category and with the ratio between swimming pool surface area and hotel size (number of beds). Other factors impacting consumption are the presence of an on-site laundry for washing the hotel linen, the presence of a garden, and the implementation of environmental policies for water saving.
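A correlation analysis of this kind can be sketched as a least-squares regression of water use per bed on hotel features; the records below are invented placeholders, not HIDRAQUA or HOSBEC data.

```python
# Illustrative sketch: regress water use per bed on hotel features.
# The five records are invented placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "litres_per_bed_day": [320, 410, 290, 500, 450],
    "category_stars":     [3,   4,   3,   5,   4],
    "pool_m2_per_bed":    [0.2, 0.5, 0.1, 0.9, 0.6],
    "onsite_laundry":     [0,   1,   0,   1,   1],   # dummy variable
})

X = np.column_stack([np.ones(len(df)),
                     df[["category_stars", "pool_m2_per_bed", "onsite_laundry"]]])
y = df["litres_per_bed_day"].to_numpy()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "stars", "pool_ratio", "laundry"], np.round(coef, 1))))
```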
Abstract:
Despite the success of the ΛCDM model in describing the Universe, a possible tension between early- and late-Universe cosmological measurements is calling for new independent cosmological probes. Amongst the most promising ones, gravitational waves (GWs) can provide a self-calibrated measurement of the luminosity distance. However, to obtain cosmological constraints, additional information is needed to break the degeneracy between parameters in the gravitational waveform. In this thesis, we exploit the latest LIGO-Virgo-KAGRA Gravitational Wave Transient Catalog (GWTC-3) of GW sources to constrain the background cosmological parameters together with the astrophysical properties of Binary Black Holes (BBHs), using information from their mass distribution. We expand the public code MGCosmoPop, previously used for the application of this technique, by implementing a state-of-the-art model for the mass distribution, needed to account for the presence of non-trivial features, i.e. a truncated power law with two additional Gaussian peaks, referred to as Multipeak. We then analyse GWTC-3 comparing this model with simpler and more commonly adopted ones, both in the case of fixed and varying cosmology, and assess their goodness-of-fit with different model selection criteria, and their constraining power on the cosmological and population parameters. We also start to explore different sampling methods, namely Markov Chain Monte Carlo and Nested Sampling, comparing their performances and evaluating the advantages of both. We find concurring evidence that the Multipeak model is favoured by the data, in line with previous results, and show that this conclusion is robust to the variation of the cosmological parameters. We find a constraint on the Hubble constant of H0 = 61.10 (+38.65/−22.43) km/s/Mpc (68% C.L.), which shows the potential of this method in providing independent constraints on cosmological parameters. The results obtained in this work have been included in [1].
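The Multipeak mass model named above, a truncated power law with two additional Gaussian peaks, can be written down as a simple mixture density; the sketch below uses illustrative parameter values, not the GWTC-3 constraints or MGCosmoPop's exact parameterization.

```python
# Sketch of a "Multipeak" BBH mass density: truncated power law plus
# two Gaussian peaks. Parameter values are illustrative only.
import numpy as np

def multipeak_pdf(m, alpha=3.4, m_min=5.0, m_max=87.0,
                  mu1=10.0, s1=1.5, mu2=34.0, s2=3.0,
                  lam=0.05, lam1=0.5):
    # truncated power law component, normalized on [m_min, m_max]
    pl = np.where((m >= m_min) & (m <= m_max), m**(-alpha), 0.0)
    norm = (m_min**(1 - alpha) - m_max**(1 - alpha)) / (alpha - 1)
    pl = pl / norm
    g = lambda mu, s: np.exp(-0.5 * ((m - mu) / s)**2) / (s * np.sqrt(2 * np.pi))
    # lam = total weight in the peaks, lam1 = share of the first peak
    return (1 - lam) * pl + lam * (lam1 * g(mu1, s1) + (1 - lam1) * g(mu2, s2))

m = np.linspace(5.0, 90.0, 6)
print(np.round(multipeak_pdf(m), 5))
```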
Abstract:
There are many natural events that can negatively affect the urban ecosystem, but weather-climate variations are certainly among the most significant. The history of settlements has been marked by extreme events such as earthquakes and floods, which recur at different times, causing extensive damage to the built heritage at the structural and urban scale. Changes in climate also alter various climatic subsystems, changing rainfall regimes and hydrological cycles and increasing the frequency and intensity of extreme precipitation events (heavy rainfall). From a hydrological risk perspective, it is crucial to understand the future events that could occur, and their magnitude, in order to design safer infrastructures. Unfortunately, it is not easy to anticipate future scenarios, as the complexity of the climate is enormous. For this thesis, precipitation and discharge extremes were primarily used as data sources. It is important to underline that the two data sets are not independent: changes in the rainfall regime due to climate change could significantly affect overflows into receiving water bodies. It is imperative that we understand and model the effects of climate change on water structures to support the development of adaptation strategies. The main purpose of this thesis is to identify suitable water structures for a road located along the Tione River; therefore, through a hydrological analysis of the area, we aim to guarantee the safety of the infrastructure over time. The observations made are intended to underline how models, such as stochastic ones, can improve the quality of an analysis for design purposes and influence design choices.
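The extreme-value step behind sizing such structures can be sketched as a GEV fit to annual maximum rainfall with the corresponding return levels; the series below is synthetic, not the Tione River data.

```python
# Sketch: fit a GEV distribution to annual maximum daily rainfall and
# compute return levels for design. The series is synthetic.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
annual_max = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=50,
                            random_state=rng)  # mm/day, synthetic

c, loc, scale = genextreme.fit(annual_max)
for T in (10, 50, 100):  # return periods in years
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)
    print(f"{T:3d}-year return level: {level:.1f} mm/day")
```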