893 results for data analysis: algorithms and implementation
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people, closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a manifold set of excitation and degradation sources for civil structures. In this context, operators should consider different factors in a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-support tool for scheduling maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods typically rest on strict assumptions of input stationarity. Latest-generation low-cost sensors also suffer from modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have devoted considerable effort to mitigating the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors, operating individually or in a distributed computing framework, to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and human activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take on the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful to support the damage assessment of complex structures.
To this end, smart coatings are studied to endow ordinary structural elements with self-sensing properties. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by an electrically interrogated nanocomposite paint.
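The simplest feature the abstract alludes to, a modal (natural) frequency estimated from a short vibration record, can be sketched by naive spectral peak picking. This is only an illustration under assumed names and a synthetic signal; the thesis's actual identification algorithms are far more elaborate.

```python
# Hedged sketch: estimate the dominant vibration frequency of a short
# acceleration record by picking the largest DFT peak. The function name,
# the sampling rate, and the synthetic 5 Hz signal are all illustrative.
import math

def dominant_frequency(signal, fs):
    """Naive O(n^2) DFT peak picking; returns the dominant frequency in Hz."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n  # bin index converted to Hz

fs = 100.0  # sampling rate in Hz (illustrative)
sig = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(200)]  # 5 Hz tone, 2 s
```

A real pipeline would use an FFT and operational modal analysis rather than this brute-force DFT, but the recovered peak for the synthetic tone above is exactly 5 Hz.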
Abstract:
Today’s data are increasingly complex, and classical statistical techniques need ever more refined mathematical tools to model and investigate them. Paradigmatic situations are represented by data which must be considered up to some kind of transformation, and by all those circumstances in which the analyst must define a general concept of shape. Topological Data Analysis (TDA) is a field which is fundamentally contributing to such challenges by extracting topological information from data with a plethora of interpretable and computationally accessible pipelines. We contribute to this field by developing a series of novel tools, techniques, and applications for working with a particular topological summary called the merge tree. To analyze sets of merge trees, we introduce a novel metric structure along with an algorithm to compute it, define a framework to compare different functions defined on merge trees, and investigate the metric space obtained with the aforementioned metric. Different geometric and topological properties of the space of merge trees are established, with the aim of obtaining a deeper understanding of such trees. To showcase the effectiveness of the proposed metric, we develop an application in the field of Functional Data Analysis, working with functions up to homeomorphic reparametrization, and in the field of radiomics, where each patient is represented via a clustering dendrogram.
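To make the notion of a merge tree concrete: for a sampled 1-D function, the branches of the merge tree of the sublevel-set filtration can be extracted with a union-find sweep over the values in increasing order. The sketch below is a textbook construction for illustration only; it is not the metric or algorithm proposed in the thesis.

```python
# Minimal sketch: 0-dimensional persistence pairs (merge-tree branches) of the
# sublevel-set filtration of a sampled 1-D function, via union-find with the
# elder rule. Function and variable names are illustrative.

def merge_pairs(values):
    """Return (branch birth, merge height) pairs; the global minimum never dies."""
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(len(values)), key=values.__getitem__):
        parent[i] = i
        for j in (i - 1, i + 1):  # merge with already-processed neighbours
            if j not in parent:
                continue
            ri, rj = find(i), find(j)
            if ri == rj:
                continue
            # elder rule: the component with the higher minimum dies at the merge
            young, old = (ri, rj) if values[ri] > values[rj] else (rj, ri)
            if young != i:  # a genuine branch dies (not the point just added)
                pairs.append((values[young], values[i]))
            parent[young] = old
    return pairs

branches = merge_pairs([3, 1, 4, 0, 2])  # → [(1, 4)]
```

The example function has two valleys (minima at heights 0 and 1) separated by a peak at height 4; the shallower branch, born at 1, merges into the deeper one at 4, which is exactly the single pair returned.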
Abstract:
LHC experiments produce an enormous amount of data, estimated at a few petabytes per year. Data management takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure, both for storage and for processing. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally have many computing nodes with a high number of processors. Large collaborations are working to use these resources as efficiently as possible, compatibly with the constraints imposed by their computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to run a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework shall be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer, foreseen in 2023.
Abstract:
In recent decades, robotics has become firmly embedded in areas such as education, teaching, medicine, psychology, and many others. We focus here on social robotics: social robots are designed to interact with people in a natural and interpersonal way, often to achieve positive results in different applications. To interact and cooperate with humans in their daily-life activities, robots should exhibit human-like intelligence. The rapid expansion of social robotics and the availability of various kinds of robots on the market have allowed research groups to carry out multiple experiments. These experiments have led to the collection of various kinds of data, which can be used or processed for psychological studies and for studies in other fields. However, no tools are currently available in which such data can be stored, processed, and shared with other research groups. This thesis proposes the design and implementation of a visual tool for organizing dataflows in Human-Robot Interaction (HRI).
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to make the simulated drive feel as real as possible, deceiving the driver's senses into believing they are in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator, with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and to compare the results obtained with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used in simulator motion control. It is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track those that would be perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's impressions after the test drive. First, an offline test with a step signal as input acceleration was carried out to verify the behaviour of the simulator. Second, the algorithms were executed in the simulator during test drives on several tracks.
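The "wash out" in a classical washout algorithm comes from high-pass filtering: a sustained acceleration command decays, so the platform drifts back to its neutral position. The following is a minimal sketch of that core idea with a first-order high-pass filter; the cut-off frequency, sampling rate, and function name are illustrative and not the Stuttgart simulator's actual tuning or filter order.

```python
# Hedged sketch: first-order discrete high-pass filter, the core building block
# of a classical washout channel. Obtained by bilinear transform of
# H(s) = s / (s + wc). Parameters are illustrative, not the simulator's.
import math

def highpass_washout(acc, fs=100.0, fc=0.5):
    """High-pass filter an acceleration sequence (fs in Hz, cut-off fc in Hz)."""
    wc = 2 * math.pi * fc          # cut-off in rad/s
    k = 2 * fs                     # bilinear-transform constant
    b0 = k / (k + wc)              # feed-forward gain
    a1 = (wc - k) / (k + wc)       # feedback coefficient
    y, x_prev, y_prev = [], 0.0, 0.0
    for x in acc:
        yn = b0 * (x - x_prev) - a1 * y_prev
        y.append(yn)
        x_prev, y_prev = x, yn
    return y

# A sustained (step) acceleration command: the cue appears, then washes out.
step_response = highpass_washout([1.0] * 300)
```

For the step input, the first output sample is close to 1 (the onset cue is passed through), while after three seconds the command has decayed essentially to zero, which is exactly the washout behaviour the chapter describes.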
Abstract:
Inter-vehicular communications have been gaining momentum in recent years and now occupy a prominent position among the objectives of car manufacturers. Motorcycle manufacturers want to keep pace with the four-wheel world in order to make Powered Two-Wheelers (PTW) an integral part of future connected mobility. The requirements for implementing inter-vehicular communication systems for motorcycles are the subject of this thesis. Its first purpose is to introduce the reader to the world of vehicle-to-everything (V2X) communications, focusing on Cooperative Intelligent Transport Systems (C-ITS) and the two main current technologies: ITS-G5, which is based on IEEE 802.11p, and cellular vehicle-to-everything (C-V2X). The evolution of these technologies is also treated. Afterwards, the core of this work is presented: the analysis of the system architecture, including hardware, security, HMI, and the peculiar challenges of implementing V2X systems on motorcycles.
Abstract:
This paper investigates the influence of some dissolved air flotation (DAF) process variables (specifically, the hydraulic detention time in the contact zone and the supplied dissolved air concentration) and of pH, as a pretreatment chemical variable, on the micro-bubble size distribution (BSD) in a DAF contact zone. The work was carried out in a pilot plant where bubbles were measured by a non-intrusive image acquisition system. The results show that the obtained diameter ranges were in agreement with values reported in the literature (10-100 μm), largely independently of the investigated conditions. The linear (number) average diameter varied from 20 to 30 μm, while the Sauter diameter (d(3,2)) varied from 40 to 50 μm. In all investigated conditions, D(50) was between 75% and 95%. However, when analyzing the volumetric frequency distribution, the BSD may present a different profile (with a bimodal trend), in some cases with the appearance of peaks at diameters of 90-100 μm. Regarding the volumetric frequency analysis, all the investigated parameters can modify the BSD in the DAF contact zone after the release point, thus potentially causing changes in DAF kinetics. This finding prompts further research to verify the effect of these BSD changes on solid particle removal efficiency by DAF.
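The two diameter statistics quoted above are standard moment-ratio means: the linear (number) mean d(1,0) and the Sauter mean d(3,2) = Σd³/Σd², which weights larger bubbles more heavily and is therefore always at least as large as d(1,0), consistent with the 20-30 μm versus 40-50 μm ranges reported. A sketch with invented measurements:

```python
# Sketch of the two diameter statistics used in the abstract: the linear
# (number) mean d(1,0) and the Sauter mean d(3,2), from a list of measured
# bubble diameters in micrometres. The sample values are made up.

def mean_diameters(d_um):
    """Return (d(1,0), d(3,2)) for a list of bubble diameters."""
    d10 = sum(d_um) / len(d_um)                               # number mean
    d32 = sum(d ** 3 for d in d_um) / sum(d ** 2 for d in d_um)  # Sauter mean
    return d10, d32

d10, d32 = mean_diameters([20, 25, 30, 60, 80])
```

For this invented sample, d(1,0) = 43 μm while d(3,2) ≈ 65 μm: a few large bubbles pull the Sauter mean well above the number mean, mirroring the gap between the two ranges measured in the pilot plant.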
Abstract:
In order to examine whether different populations show the same pattern of onset in the Southern Hemisphere, we examined the age-at-first-admission distribution for schizophrenia based on mental health registers from Australia and Brazil. Data on age at first admission for individuals with schizophrenia were extracted from two name-linked registers: (1) the Queensland Mental Health Statistics System, Australia (N=7651, F=3293, M=4358), and (2) a psychiatric hospital register in Pelotas, Brazil (N=4428, F=2220, M=2208). Age distributions were derived for males and females in both datasets. The general population structure for both countries was also obtained. There were significantly more males in the Queensland dataset (χ² = 56.9, df = 3, p < 0.0001). Both dataset distributions were skewed to the right. Onset rose steeply after puberty to reach a modal age group of 20-29 for men and women, with a more gradual tail toward the older age groups. In Queensland, 68% of women with schizophrenia had their first admission after age 30, while the proportion in Brazil was 58%. Compared to the Australian dataset, the Brazilian dataset had a slightly greater proportion of first admissions under age 30 and a slightly smaller proportion over age 60. This reflects the underlying age distributions of the two populations. This study confirms the wide age range and gender differences in age-at-first-admission distributions for schizophrenia and identified a significant difference in the gender ratio between the two datasets. Given widely differing health services, cultural practices, ethnic variability, and the different underlying population distributions, the age-at-first-admission distributions in Queensland and Brazil showed more similarities than differences. Acknowledgments: The Stanley Foundation supported this project.
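The reported gender-ratio test can be checked directly from the counts in the abstract. A plain 2×2 Pearson chi-square on males/females by register (no continuity correction) reproduces a statistic of about 56.9; note the abstract quotes df = 3, so the original analysis may have used a larger table, and this recomputation is only illustrative.

```python
# Sketch: Pearson chi-square for a 2x2 contingency table [[a, b], [c, d]],
# applied to the gender counts reported in the abstract
# (Queensland: M=4358, F=3293; Pelotas: M=2208, F=2220).

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

chi2 = chi_square_2x2(4358, 3293, 2208, 2220)  # rows: register; cols: M, F
```

The result is close to the published 56.9, confirming the markedly higher male proportion in the Queensland register (57%) compared with the near-even split in Pelotas (50%).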
Abstract:
OBJECTIVE: To describe variation in all-cause and selected cause-specific mortality rates across Australia. METHODS: Mortality and population data for 1997 were obtained from the Australian Bureau of Statistics. All-cause and selected cause-specific mortality rates were calculated and directly standardised to the 1997 Australian population in 5-year age groups. Selected major causes of death included cancer, coronary artery disease, cerebrovascular disease, diabetes, accidents, and suicide. Rates are reported by statistical division and by State and Territory. RESULTS: All-cause age-standardised mortality was 6.98 per 1000 in 1997, and this varied 2-fold from a low in the statistical division of Pilbara, Western Australia (5.78, 95% confidence interval 5.06-6.56), to a high in the Northern Territory excluding Darwin (11.30, 10.67-11.98). Similar mortality variation (all p<0.0001) exists for cancer (1.01-2.23 per 1000) and coronary artery disease (0.99-2.23 per 1000), the two biggest killers. Larger variation (all p<0.0001) exists for cerebrovascular disease (0.7-11.8 per 10,000), diabetes (0.7-6.9 per 10,000), accidents (1.7-7.2 per 10,000), and suicide (0.6-3.8 per 10,000). Less marked variation was observed when analysed by State and Territory, but the Northern Territory consistently has the highest age-standardised mortality rates. CONCLUSIONS: Analysed by statistical division, substantial mortality gradients exist across Australia, suggesting an inequitable distribution of the determinants of health. Further research is required to better understand this heterogeneity.
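Direct standardisation, as used in the methods above, weights each region's age-specific death rates by a standard population's age structure so that regions with different age profiles become comparable. A minimal sketch, with invented age groups and counts (the real analysis used 5-year age groups and the 1997 Australian population as the standard):

```python
# Sketch of direct age standardisation: weight age-specific rates by a
# standard population's age structure. All numbers below are invented
# for illustration; the study used 5-year age groups.

def direct_standardised_rate(deaths, population, std_population):
    """Directly standardised mortality rate per 1000 (parallel age-group lists)."""
    weighted = sum(d / p * s for d, p, s in zip(deaths, population, std_population))
    return 1000 * weighted / sum(std_population)

rate = direct_standardised_rate(
    deaths=[2, 10, 80],                 # deaths per age group in the region
    population=[5000, 4000, 2000],      # region population per age group
    std_population=[6000, 5000, 3000],  # standard population (e.g. 1997 Australia)
)
```

The crude rate of this invented region would be 92/11000 ≈ 8.4 per 1000; standardising to the older standard population raises it to about 9.6 per 1000, illustrating how the adjustment removes differences in age structure before regions are compared.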