10 results for Comparison between methods of analysis

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

With the aim of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey r.c. large-panel buildings in the elastic regime is studied. The four buildings were built during the Soviet era within a serial production system. Since they all belong to the same series, they have very similar geometries both in plan and in elevation. Firstly, ambient vibration measurements are performed in the four buildings. The data analysis, composed of discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated considering the first three global modes, and their results match the experimental ones with an error of less than 20%.
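As an illustration of the processing chain described in the abstract, the following minimal sketch (not the thesis code; the sampling rate, array shapes and the Welch/peak-picking approach are assumptions) extracts the first few natural frequencies from multi-channel ambient vibration records, a simplified stand-in for the frequency domain decomposition step.

```python
# Minimal sketch: peak-picking of natural frequencies from ambient vibration
# records via averaged Welch power spectral densities (illustrative only).
import numpy as np
from scipy.signal import welch

def natural_frequencies(acc, fs, n_peaks=3):
    """acc: (n_samples, n_channels) acceleration records; fs: sampling rate [Hz]."""
    psd_sum = None
    for ch in range(acc.shape[1]):
        f, pxx = welch(acc[:, ch], fs=fs, nperseg=4096)
        psd_sum = pxx if psd_sum is None else psd_sum + pxx
    # Simple peak picking: local maxima of the channel-averaged spectrum.
    peaks = [i for i in range(1, len(f) - 1)
             if psd_sum[i] > psd_sum[i - 1] and psd_sum[i] > psd_sum[i + 1]]
    peaks.sort(key=lambda i: psd_sum[i], reverse=True)
    return sorted(f[i] for i in peaks[:n_peaks])

# Hypothetical usage: 10 minutes of 8-channel data sampled at 100 Hz.
# acc = np.random.randn(60000, 8)
# print(natural_frequencies(acc, fs=100.0))
```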

Relevance:

100.00%

Publisher:

Abstract:

Sustainable development is one of the biggest challenges of the twenty-first century. Various universities have begun the debate about the content of this concept and the ways in which to integrate it into their policies, organization and activities. Universities have a special responsibility to take a leading position by demonstrating best practices that sustain and educate towards a sustainable society. For that reason, universities have the opportunity to create a culture of sustainability for today's students, and to set their expectations for how the world should be. This thesis aims at analyzing how Delft University of Technology and the University of Bologna face the challenge of becoming a sustainable campus. In this context, both universities have been studied and analyzed following the International Sustainable Campus Network (ISCN) methodology, which provides a common framework to formalize commitments and goals at campus level. In particular, this work aims to highlight which key performance indicators are essential to reach sustainability; consequently, the following aspects have been taken into consideration: energy use, water use, solid waste and recycling, and carbon emissions. Subsequently, in order to provide a better understanding of the current state of sustainability at the University of Bologna and Delft University of Technology, and of potential strategies to achieve the stated objective, a SWOT analysis has been undertaken. Strengths, weaknesses, opportunities and threats have been identified to understand how the two universities can create a synergy to improve each other. In order to frame a "Sustainable SWOT", the model proposed by People and Planet has been considered, so it has been necessary to evaluate matters such as policy, investment, management, education and engagement. In this regard, it has been fundamental to involve the main sustainability coordinators of the two universities; this has been achieved through a brainstorming session. Partnerships are key to the achievement of sustainability. The creation of a bridge between the two universities aims to join forces and to create a new generation of talent. As a result, people become able to support universities in the exchange of information, ideas, and best practices for achieving sustainable campus operations and integrating sustainability in research and teaching. For this purpose the project "SUCCESS" has been presented; the project aims to create an interactive European campus network that can be considered a strategic key player for sustainable campus innovation in Europe. Specifically, the main key performance indicators have been analyzed, and the importance they have for the two universities and their strategic impact have been highlighted. For this reason, a survey was conducted among people who play crucial roles for sustainability within the two universities, who were asked to evaluate the KPIs of the project. This assessment has been relevant because it has represented the foundation for developing a strategy to create a true collaboration.

Relevance:

100.00%

Publisher:

Abstract:

This thesis aims to assess similarities and mismatches between the outputs from two independent methods for cloud cover quantification and classification, based on quite different physical bases. One of them is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centered in the MW water vapour absorption band. In a first stage, their cloud detection capability has been tested by comparing the Cloud Masks they produced. These showed a good agreement between the two methods, although some critical situations stand out. The MWCC, in fact, fails to reveal clouds which according to SAFNWC are fractional, cirrus, very low, and high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy according to both software packages have been compared in terms of the assigned cloud classes. The overall observed tendency of the MWCC method is an overestimation of the lower cloud classes. Conversely, the more the cloud top height grows, the more the MWCC fails to reveal a certain cloud portion that is instead detected by means of the SAFNWC tool. This is also what emerges from a series of tests carried out using the cloud top height information in order to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods intend to provide the same kind of information, in reality they return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, returns the actual level reached by the cloud. The MWCC, by exploiting the penetration capability of microwaves, is able to give information about levels located more deeply within the atmospheric column.
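As a sketch of the first-stage comparison, the snippet below (illustrative only, not the thesis code; mask shapes and variable names are assumptions) computes pixel-wise agreement statistics between two collocated binary cloud masks.

```python
# Minimal sketch: pixel-wise agreement between two collocated binary cloud masks,
# e.g. one from SAFNWC and one from MWCC (illustrative only).
import numpy as np

def cloud_mask_agreement(mask_a, mask_b):
    """mask_a, mask_b: boolean arrays of the same shape (True = cloudy)."""
    both_cloudy = np.logical_and(mask_a, mask_b).sum()
    both_clear = np.logical_and(~mask_a, ~mask_b).sum()
    only_a = np.logical_and(mask_a, ~mask_b).sum()
    only_b = np.logical_and(~mask_a, mask_b).sum()
    total = mask_a.size
    return {
        "agreement": (both_cloudy + both_clear) / total,
        "cloudy_only_in_a": only_a / total,
        "cloudy_only_in_b": only_b / total,
    }

# Hypothetical usage with masks regridded onto a common grid:
# stats = cloud_mask_agreement(safnwc_mask, mwcc_mask)
```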

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and in business, deep learning has its own limitations. It is often questioned whether such techniques are only some kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is known as a new, emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
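For concreteness, the following is a minimal PyTorch sketch of the kind of supervised CNN classifier referred to above; it is an illustrative example, not the architecture used in the thesis, and the input size and class count are assumptions.

```python
# Minimal CNN image classifier in PyTorch (illustrative only).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)  # assumes 32x32 RGB inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# model = SmallCNN(n_classes=10)
# logits = model(torch.randn(4, 3, 32, 32))  # batch of four 32x32 RGB images
```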

Relevance:

100.00%

Publisher:

Abstract:

Industry 4.0 refers to the 4th industrial revolution and, at its base, lies the digitalization and automation of the assembly line. The whole production process has improved and evolved thanks to advances in networking and AI, which of course include machine learning, cloud computing, IoT, and other technologies that are finally being implemented in the industrial scenario. All these technologies have in common a need for faster, more secure, robust, and reliable communication. One of the many solutions for these demands is the use of mobile communication technologies in the industrial environment, but which technology is better suited for these demands? Of course, the answer isn't as simple as it seems. The 4th industrial revolution has an unprecedented potential compared with the previous ones; every factory, enterprise, or company has different network demands, and even within each of these infrastructures the demands may vary by sector or by application. For example, in the health care industry there may be a need for increased bandwidth for the analysis of high-definition videos, or for faster speeds in order to have analytics occur in real time, while yet another application might require higher security and reliability to protect patients' data. As seen above, choosing the right technology for a given environment and application involves many considerations, and the ones just stated are only a small part of the overall picture. In this thesis, we investigate a comparison between two of the technologies available for the industrial environment, Wi-Fi 6 and 5G Private Networks, in the specific case of a steel factory.
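As a rough illustration of how such network demands can be quantified, the sketch below estimates the aggregate uplink bandwidth required by a set of high-definition camera streams; all figures and the overhead factor are assumptions, not measurements from the thesis.

```python
# Back-of-the-envelope sketch: aggregate uplink bandwidth for real-time video
# analytics on a plant floor (all numbers are illustrative assumptions).
def aggregate_bandwidth_mbps(n_cameras, per_stream_mbps, overhead_factor=1.2):
    """Total bandwidth in Mbit/s, including a protocol/retransmission margin."""
    return n_cameras * per_stream_mbps * overhead_factor

# Hypothetical case: 40 cameras at 15 Mbit/s each.
# print(aggregate_bandwidth_mbps(40, 15))  # -> 720.0 Mbit/s
```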

Relevance:

100.00%

Publisher:

Abstract:

Dementia involves the deterioration, often progressive, of a person's cognitive status. Those who suffer from dementia present alterations at the cognitive, behavioural and motor level, for example performing obsessive, repetitive gestures without a precise purpose. The condition of patients suffering from dementia is clinically assessed by means of specific scales, and information relating to behaviour is collected by interviewing caregivers, such as family members, nursing staff, or the attending physician. These assessments often turn out to be inaccurate, may be heavily influenced by subjective considerations, and are time-consuming. There is therefore a need for objective methods to assess patients' motor behaviour and its pathological alterations; wearable inertial sensors may represent a viable solution for this purpose. The main objective of this thesis project was to define and implement software for an objective, sensor-based assessment of the circadian motor pattern in patients suffering from dementia, hospitalized in a long-term care unit, which could highlight differences in the disease symptoms affecting motor behaviour, as described in the clinical setting. The secondary objective was to verify pre- and post-intervention changes in the motor patterns of a subgroup of patients, following the administration of an experimental intervention program based on physical exercises.
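As an illustration of a sensor-based circadian motor profile, the sketch below (not the thesis software; the sampling rate and data layout are assumptions) derives one activity value per hour from a tri-axial accelerometer recording.

```python
# Minimal sketch: hourly activity level from a wearable tri-axial accelerometer,
# a crude proxy for the circadian motor pattern (illustrative only).
import numpy as np

def hourly_activity(acc, fs):
    """acc: (n_samples, 3) accelerations in g; fs: sampling rate [Hz].
    Returns one mean dynamic-acceleration value per full hour of recording."""
    magnitude = np.linalg.norm(acc, axis=1)
    dynamic = np.abs(magnitude - 1.0)  # remove the static gravity component
    samples_per_hour = int(fs * 3600)
    n_hours = len(dynamic) // samples_per_hour
    return [dynamic[h * samples_per_hour:(h + 1) * samples_per_hour].mean()
            for h in range(n_hours)]

# Hypothetical 24 h recording at 50 Hz:
# profile = hourly_activity(acc_data, fs=50.0)  # 24 values, one per hour
```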

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis was the study of an optimal vertical mixing parameterization scheme in a mesoscale-dominated field characterized by strong vorticity and by the presence of a layer of colder, less saline water at about 100 m depth (Atlantic Waters). In these conditions we compared six different experiments, which differ in the turbulent closure scheme, in the presence or absence of an enhanced diffusion parameterization, and in the presence or absence of a double diffusion mixing parameterization. To evaluate the performance of the experiments and of the model, we compared the simulations with the ARGO observations of temperature and salinity available in our domain and period of interest. The conclusions were the following:
• The increase in resolution gives better results in terms of temperature, in all the considered cases, and in terms of salinity.
• The comparisons between the Pacanowski-Philander and the TKE turbulent closure schemes do not show significant differences when the simulations are compared with the observations.
• Removing the enhanced diffusion parameterization in the presence of the TKE turbulent closure submodel does not give positive results, and shows limitations in resolving gravitational instabilities near the surface.
• The k-ϵ turbulent closure model, utilized in all the GLS experiments, is the best performing closure model among the three considered, with positive results in all the salinity comparisons with the in situ observations and in most of the temperature comparisons.
• The double diffusion mixing parameterization utilized with the k-ϵ closure submodel further improves the results of the experiments, in terms of both temperature and salinity, in comparison with the ARGO data.
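As a sketch of the model-observation comparison, the snippet below (illustrative only, not the thesis code; variable names are assumptions) computes the root-mean-square error of a model profile against a collocated ARGO profile after interpolating the model onto the observation depths.

```python
# Minimal sketch: RMSE of a model temperature or salinity profile against a
# collocated ARGO profile (illustrative only).
import numpy as np

def profile_rmse(model_depth, model_value, argo_depth, argo_value):
    """1-D arrays; depths in metres (increasing), values in degC or PSU."""
    model_on_obs = np.interp(argo_depth, model_depth, model_value)
    return float(np.sqrt(np.mean((model_on_obs - argo_value) ** 2)))

# Hypothetical usage, one score per experiment and variable:
# rmse_T = profile_rmse(z_model, temp_model, z_argo, temp_argo)
```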

Relevance:

100.00%

Publisher:

Abstract:

Over the last few years, the massive popularity of video streaming platforms has impacted our daily habits by making watching movies and TV shows one of the main activities of our free time. By providing a wide range of foreign-language audiovisual content, these entertainment services may represent a powerful resource for language learners, as they offer the possibility of being exposed to authentic input. Moreover, research has shown the beneficial role of audiovisual textual aids, such as native-language subtitles and target-language captions, in enhancing language skills such as vocabulary and listening comprehension. The aim of this thesis is to analyze the existing literature on subtitled and captioned audiovisual materials used as a pedagogical tool for informal language learning.

Relevance:

100.00%

Publisher:

Abstract:

In the food and beverage industry, packaging plays a crucial role in protecting food and beverages and in maintaining their organoleptic properties. Its disposal, unfortunately, is still difficult, mainly because there is a lack of economically viable systems for separating composite and multilayer materials. It is therefore necessary not only to increase research in this area, but also to set up pilot plants and implement these technologies on an industrial scale. LCA (Life Cycle Assessment) can support these purposes: it allows an assessment of the potential environmental impacts associated with a product, service or process. The objective of this thesis work is to analyze the environmental performance of six separation methods designed to separate the polymeric fraction from the aluminum fraction in multilayer packaging. The first four methods utilize a chemical dissolution technique, using Biodiesel, Cyclohexane, 2-Methyltetrahydrofuran (2-MeTHF) and Cyclopentyl methyl ether (CPME) as solvents. The last two apply a mechanical delamination technique with surfactant-activated water, using Ammonium laurate and Triethanolamine laurate as surfactants, respectively. For all six methods, the LCA methodology was applied and the corresponding models were built with the GaBi software, version 10.6.2.9, specific to LCA analyses. Unfortunately, due to a lack of data, it was not possible to obtain results for the dissolution methods with the solvents 2-MeTHF and CPME; for the other methods, however, the individual environmental performances were calculated. Results revealed that the methods with the best environmental performance are method 2, among the dissolution methods, and method 5, among the delamination methods. This result is confirmed both by the analysis of normalized and weighted results and by the analysis of the 'original' results. A hotspot analysis was also conducted.
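As an illustration of the normalization and weighting step mentioned above, the sketch below aggregates impact-category scores into a single weighted index; all category names, normalization factors and weights are placeholders, not thesis data.

```python
# Minimal sketch: normalization and weighting of LCA impact-category scores
# into a single comparable index (all numbers are illustrative placeholders).
def weighted_score(impacts, reference, weights):
    """impacts, reference, weights: dicts keyed by impact category."""
    return sum(weights[c] * impacts[c] / reference[c] for c in impacts)

# Hypothetical comparison over two categories:
# ref = {"climate_change": 8.1e3, "water_use": 1.2e4}   # normalization factors
# w = {"climate_change": 0.6, "water_use": 0.4}         # weighting factors
# print(weighted_score({"climate_change": 120.0, "water_use": 45.0}, ref, w))
```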