7 results for Web Mining, Data Mining, User Topic Model, Web User Profiles

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

In the last decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can seriously jeopardize the economy of a region as well as of a country. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, concerns arose about the security performance of civil structures after tragic events such as the 9/11 attacks or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for correct and immediate rehabilitation.

System Identification is a branch of the more general Control Theory. In Civil Engineering, this field addresses the techniques needed to estimate mechanical characteristics, such as stiffness or mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to define, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. The knowledge of these parameters is helpful in the Model Updating procedure, which allows corrected theoretical models to be defined through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data. Therefore, the updated model becomes a very effective control practice when it comes to rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have therefore tried to address this problem, and two main approaches are generally used. In the first case, given the limited number of sensors, time histories are gathered at a few locations, the instruments are then moved to other locations, and the procedure is repeated. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the required functions.

The objective of this work is to present a general methodology to analyze large structures using a limited amount of instrumentation while, at the same time, obtaining the most information about the identified structure without recalling methodologies of difficult interpretation. A general framework of the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a more effective use of substructuring results is developed and implemented.
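To illustrate the realization step of the OKID/ERA procedure that the thesis implements in Matlab, the following is a minimal Python sketch of the Eigensystem Realization Algorithm; the function name, Hankel block sizes and modal post-processing are illustrative assumptions and do not reproduce the thesis code.

```python
import numpy as np

def era(markov, n_modes, dt, rows=20, cols=20):
    """Minimal Eigensystem Realization Algorithm (ERA) sketch.

    markov  : sequence of impulse-response (Markov) parameters Y_k, each of
              shape (n_outputs, n_inputs), with at least rows + cols entries.
    n_modes : model order, i.e. number of retained singular values.
    dt      : sampling interval of the measured time histories.
    """
    p, m = markov[0].shape
    # Block-Hankel matrix H0 and its one-step shifted counterpart H1.
    H0 = np.block([[markov[i + j]     for j in range(cols)] for i in range(rows)])
    H1 = np.block([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])

    # Truncated SVD of H0 gives a balanced, minimum-order realization.
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un = U[:, :n_modes]
    Vn = Vt[:n_modes, :].T
    Sn = np.diag(np.sqrt(s[:n_modes]))
    Sinv = np.linalg.inv(Sn)

    # Discrete-time state-space matrices (A, B, C).
    A = Sinv @ Un.T @ H1 @ Vn @ Sinv
    B = (Sn @ Vn.T)[:, :m]
    C = (Un @ Sn)[:p, :]

    # Modal parameters from the eigenvalues of A.
    lam = np.linalg.eigvals(A).astype(complex)
    s_cont = np.log(lam) / dt                     # continuous-time poles
    freqs = np.abs(s_cont) / (2 * np.pi)          # natural frequencies [Hz]
    damping = -np.real(s_cont) / np.abs(s_cont)   # damping ratios
    return A, B, C, freqs, damping
```

The identified (A, B, C) triple can then feed a Model Updating step, in which the parameters of a theoretical FEM model are tuned so that its modal frequencies and shapes match those extracted from the measurements.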

Relevance:

100.00%

Publisher:

Abstract:

Currently, making digital 3D models and replicas of cultural heritage assets plays an important role in preservation and provides a highly detailed source for future research and intervention. This dissertation tries to assess different methods for digitally surveying and making 3D replicas of cultural heritage assets at different scales of size. The methodologies vary in devices, software, workflow, and the amount of skill that is required. The three phases of the 3D modelling process are data acquisition, modelling, and model presentation. Each of these phases is divided into sub-sections, and there are several approaches, methods, devices, and software that may be employed; the selection should be based on the operation's goal, the available facilities, the scale and properties of the object or structure to be modelled, as well as the operator's expertise and experience. The key point to remember is that the 3D modelling operation should be suitably accurate, precise, and reliable; for this reason, many instructions and pieces of advice exist on how to perform 3D modelling effectively. This work attempts to compare and evaluate the various options for each phase, explaining and demonstrating their differences, benefits, and drawbacks, so as to serve as a simple guide for new and/or inexperienced users.

Relevance:

100.00%

Publisher:

Abstract:

A recent integral-field spectroscopic (IFS) survey, the MASSIVE survey (Ma et al. 2014), observed the 116 most massive (MK < −25.3 mag, stellar mass M∗ > 10^11.6 M⊙) early-type galaxies (ETGs) within 108 Mpc, out to radii as large as 40 kpc, corresponding to ∼ 2−3 effective radii (Re). One of the major findings of the MASSIVE survey is that the galaxy sample is split nearly equally among three groups showing three different velocity dispersion profiles σ(R) outside a radius of ∼ 5 kpc (falling, flat and rising with radius). The purpose of this thesis is to model the kinematic profiles of six ETGs included in the MASSIVE survey and representative of the three observed σ(R) shapes, with the aim of investigating their dynamical structure. Models for the chosen galaxies are built using the numerical code JASMINE (Posacki, Pellegrini, and Ciotti 2013). The code produces models of axisymmetric galaxies, based on the solution of the Jeans equations for a multicomponent gravitational potential (supermassive black hole, stars and dark matter halo). With the aim of obtaining a good agreement between the kinematics derived from the Jeans equations and the observed σ and rotation velocity V of MASSIVE (Veale et al. 2016, 2018), I derived constraints on the dark matter distribution and orbital anisotropy. This work suggests a trend of the dark matter amount and distribution with the shape of the velocity dispersion profile in the outer regions: the models of galaxies with flat or rising velocity dispersion profiles show higher dark matter fractions fDM both within 1 Re and within 5 Re. Orbital anisotropy alone cannot account for the different observed trends of σ(R) and has a minor effect compared to variations of the mass profile. Galaxies with similar stellar mass M∗ that show different velocity dispersion profiles (from falling to rising) are successfully modelled with a variation of the halo mass Mh.
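For context, the spherical, anisotropic Jeans equation below shows schematically how the total mass profile and the anisotropy parameter β enter the predicted velocity dispersion; JASMINE actually solves the axisymmetric Jeans equations for the multicomponent potential, so this spherical form is only an indicative simplification, not the equation set used in the thesis.

```latex
\frac{\mathrm{d}\,(\nu \sigma_r^2)}{\mathrm{d}r} + \frac{2\,\beta(r)}{r}\,\nu \sigma_r^2
  = -\,\nu\,\frac{\mathrm{d}\Phi}{\mathrm{d}r}
  = -\,\nu\,\frac{G\,\bigl[M_*(r) + M_{\rm DM}(r) + M_{\rm BH}\bigr]}{r^2},
\qquad
\beta(r) = 1 - \frac{\sigma_\theta^2 + \sigma_\phi^2}{2\,\sigma_r^2}
```

In this picture, increasing the halo mass raises M_DM(r) at large radii and hence supports a flat or rising σ(R), whereas changing β mainly redistributes the dispersion between radial and tangential directions, which is consistent with the conclusion above that anisotropy alone cannot reproduce the observed trends.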

Relevance:

100.00%

Publisher:

Abstract:

Since the majority of the world's population lives in cities, and this number is expected to increase in the coming years, one of the biggest research challenges is the determination of the risk deriving from the high temperatures experienced in urban areas, together with improving the responses to climate-related disasters, for example by introducing into the urban context vegetation or built infrastructures that can improve air quality. In this work, we investigate how different setups of the boundary and initial conditions imposed on an urban canyon generate different patterns of pollutant dispersion. To do so, we exploit the low computational cost of Reynolds-Averaged Navier-Stokes (RANS) simulations to reproduce the dynamics of an infinite array of two-dimensional square urban canyons. A pollutant is released at street level to mimic the presence of traffic. RANS simulations are run using the k-ε closure model, and vertical profiles of the significant variables of the urban canyon, namely the velocity, the turbulent kinetic energy, and the concentration, are presented. This is done using the open-source software OpenFOAM, modifying the standard solver simpleFoam to include the concentration equation and the temperature, the latter by introducing a buoyancy term in the governing equations. The results of the simulations are validated against experimental results and Large-Eddy Simulation (LES) products from previous works, showing that the simulation is able to reproduce all the quantities under examination with satisfactory accuracy. Moreover, this comparison shows that, although LES is known to be more accurate, albeit more expensive, RANS simulations represent a reliable tool when a smaller computational cost is needed. Overall, this work exploits the low computational cost of RANS simulations to produce multiple scenarios useful to evaluate how the dispersion of a pollutant changes with a modification of key variables, such as the temperature.
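As a sketch of the extra physics added to the standard solver, the Reynolds-averaged transport equation for the mean concentration with a gradient-diffusion closure, together with the standard k-ε eddy viscosity, can be written as below; the turbulent Schmidt number Sc_t and the form of the source term S_C are generic assumptions for illustration, not values taken from the thesis.

```latex
\frac{\partial \overline{C}}{\partial t}
 + \overline{U}_j\,\frac{\partial \overline{C}}{\partial x_j}
 = \frac{\partial}{\partial x_j}\!\left[\left(D + \frac{\nu_t}{Sc_t}\right)\frac{\partial \overline{C}}{\partial x_j}\right] + S_C,
\qquad
\nu_t = C_\mu\,\frac{k^2}{\varepsilon},\quad C_\mu = 0.09
```

In the Boussinesq approximation, the buoyancy contribution added to the vertical momentum equation is proportional to g β (T̄ − T_ref), which is what couples the temperature field to the velocity field in the modified governing equations.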

Relevance:

80.00%

Publisher:

Abstract:

The aim of this paper is to evaluate the efficacy of the application WebBootCaT in creating specialised corpora automatically, investigating the translation of articles of association from Italian into English. The first section reflects on the relevant literature and argues for the utility of corpora for translators. The second section discusses the methodology employed, and the third section analyses the results obtained and comments on how language professionals could exploit the application to its full potential. The fourth section provides a few concrete usage examples of the corpora thus built, and then concludes that WebBootCaT is a genuinely powerful tool that could be adopted by professional translators in order to save time and improve their translations in the long term.

Relevance:

70.00%

Publisher:

Abstract:

The prediction problem, that is, the search for predictive patterns within data, has been studied extensively, and many robust and efficient methodologies have been developed, all based on the analysis of structured numerical information. Textual information, on the other hand, is strongly unstructured. An immediate conclusion would therefore be that predictive analysis of textual data requires methods completely different from the well-known data mining techniques. A prediction problem can instead be solved with the same methods: textual data and documents can be transformed into numerical values, for example by considering the absence or presence of terms, making it possible to use the already developed techniques efficiently. Text mining enables the combination of concepts from extremely heterogeneous fields of application. With the immense and continuously growing amount of textual data available, for instance on the World Wide Web, driven by the pervasive use of smartphones and computers, the fields of application of textual analysis become countless. The advent and spread of social networks and of the practice of microblogging enable people to share opinions and moods, creating a textual corpus of enormous size that is updated daily. The new techniques of Sentiment Analysis, or Opinion Mining, deal with analysing the emotional state or the type of opinion expressed within a textual document. They are disciplines through which, for example, one can extract indicators of the mood of an individual, or of a group of individuals, creating a representation of the social emotional state. Can the evolution of the social emotional state macroscopically influence the course of global events? Studies in Behavioral Economics and Finance point to a link between emotional state, decision-making ability and economic indicators. Thanks to the available techniques and to the continuously updated mass of textual data concerning the mood of millions of individuals, it becomes possible to analyse such correlations. In this study, a system for predicting the variations of stock market indices is built, based on textual data extracted from the microblogging platform Twitter in the form of public tweets; the system includes techniques for improving the prediction based on the study of text similarity, categorising the actual contribution of each text to the prediction.
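As a minimal illustration of the presence/absence encoding described above, the sketch below turns a handful of tweets into a binary bag-of-words matrix and feeds it to a standard classifier to predict the next-day direction of a stock index; the example texts, labels and choice of scikit-learn components are hypothetical and do not reproduce the system built in the thesis.

```python
# Bag-of-words sketch: tweets become numeric presence/absence features so that
# standard data-mining models can be applied to textual data.
# The tweets and labels below are placeholders, not data from the thesis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets_per_day = [
    "markets look great, feeling optimistic about tech stocks",
    "terrible news today, worried about the economy",
    "calm day, nothing special happening",
]
index_went_up = [1, 0, 1]  # hypothetical next-day index movement (1 = up, 0 = down)

model = make_pipeline(
    CountVectorizer(binary=True),  # presence/absence of terms, as in the abstract
    LogisticRegression(),
)
model.fit(tweets_per_day, index_went_up)
print(model.predict(["investors are optimistic about the markets"]))
```

A production system would aggregate many tweets per day, weight them (for example by text similarity, as the thesis proposes) and evaluate the predictions against actual index movements, but the core idea of mapping unstructured text to a numeric feature matrix is the one shown here.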