913 results for Data-driven analysis
Abstract:
Long-term monitoring of acoustic environments is gaining popularity thanks to the wealth of scientific and engineering insight it provides. The growing interest is driven by the steady increase in storage capacity and in the computational power available to process large amounts of data. From this perspective, machine learning (ML) offers a broad family of data-driven statistical techniques for dealing with large databases. Today, the conventional practice of sound level meter measurements limits the overall description of a sound scene to an energetic point of view: the equivalent continuous level Leq remains the main metric used to characterize an acoustic environment. Finer analyses rely on statistical levels, but acoustic percentiles rest on temporal assumptions that are not always reliable. A statistical approach based on the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements, and the statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate, and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians analyzing environmental noise in a variety of contexts. The method relies on cluster analysis: two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a procedure for investigating different active spaces monitored with sound level meters. The procedure has been applied in two contexts, university lecture halls and offices. It yields robust and reliable descriptions of the acoustic scenario and could become a valuable analytical tool for acousticians.
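As a rough illustration of the clustering step described above (not the authors' exact pipeline), the sketch below fits a Gaussian Mixture Model and K-means to synthetic one-second sound pressure levels; the simulated data, the number of clusters, and all variable names are illustrative assumptions.

```python
# Minimal sketch: cluster synthetic sound pressure levels with a Gaussian
# Mixture Model and K-means to separate two coexisting sources. The levels
# (background vs. speech) and n_components=2 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Background noise around 35 dB(A) and speech activity around 60 dB(A),
# mimicking a monitored lecture hall.
spl = np.concatenate([rng.normal(35, 2, 5000), rng.normal(60, 4, 3000)])
X = spl.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The GMM component means approximate the most probable level of each
# source, i.e. the statistical mode advocated in the abstract.
print("GMM component means [dB]:", np.sort(gmm.means_.ravel()))
print("K-means centroids   [dB]:", np.sort(kmeans.cluster_centers_.ravel()))
```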
Abstract:
In this thesis we focus on the analysis and interpretation of time-dependent deformations recorded by different geodetic methods. First, we apply a variational Bayesian Independent Component Analysis (vbICA) technique to GPS daily displacement solutions to separate the postseismic deformation that followed the mainshocks of the 2016-2017 Central Italy seismic sequence from other, hydrological, deformation sources. By interpreting the signal associated with the postseismic relaxation, we model an afterslip distribution on the faults involved in the mainshocks that is consistent with the coseismic models available in the literature. We find evidence of aseismic slip on the Paganica fault, responsible for the Mw 6.1 2009 L'Aquila earthquake, highlighting the importance of aseismic slip and static stress transfer for properly modeling the recurrence of earthquakes on nearby fault segments. We infer a possible viscoelastic relaxation of the lower crust as a contributing mechanism to the postseismic displacements. We stress the importance of properly separating the hydrological signals for an accurate assessment of the tectonic processes, especially for mm-scale deformations, and we provide a physical explanation for the independent components associated with the observed hydrological processes. In the second part of the thesis, we focus on strain data from Gladwin Tensor Strainmeters, working on the instruments deployed in Taiwan. We develop a novel, completely data-driven approach to calibrate these strainmeters, and we carry out a joint analysis of geodetic (strainmeter, GPS, and GRACE products) and hydrological (rain gauge and piezometer) data sets to characterize the hydrological signals in Southern Taiwan. Lastly, we apply the proposed calibration approach to the strainmeters recently installed in Central Italy. As an example, we show the detection of a storm that hit the Umbria-Marche regions (Italy), demonstrating the potential of strainmeters for following the dynamics of deformation processes with a limited spatio-temporal signature.
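The following sketch uses scikit-learn's FastICA as a stand-in for the vbICA technique mentioned above, separating synthetic GPS displacement time series into a postseismic-like decay and a seasonal, hydrology-like oscillation; the station count, signal shapes, and noise level are illustrative assumptions.

```python
# Minimal sketch: FastICA (stand-in for vbICA) applied to synthetic GPS
# daily displacements. Shapes, signals and noise level are assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_days, n_stations = 1500, 20
t = np.arange(n_days)

postseismic = np.log1p(t / 30.0)                  # logarithmic relaxation
seasonal = np.sin(2 * np.pi * t / 365.25)         # annual hydrological cycle
S = np.column_stack([postseismic, seasonal])
A = rng.normal(size=(n_stations, 2))              # unknown station responses
X = S @ A.T + 0.05 * rng.normal(size=(n_days, n_stations))

ica = FastICA(n_components=2, random_state=0)
temporal_ics = ica.fit_transform(X)               # (n_days, 2) temporal components
spatial_response = ica.mixing_                    # (n_stations, 2) spatial responses
print(temporal_ics.shape, spatial_response.shape)
```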
Abstract:
In this thesis, the viability of Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is presented. This method derives, directly from data, computationally efficient reduced-order models (ROMs) that can replace high-fidelity physics-based models which are too costly or simply unavailable. Optimizations and extensions to the standard implementation of the methodology are proposed and investigated on diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations as well as to time series of observations of real systems. The resulting ROMs are tested against two tasks: (i) reducing the storage requirements of high-fidelity simulations or observations; (ii) interpolating and extrapolating missing data. The capabilities of DMD can also be exploited to alleviate the cost of studies that require many simulations, such as uncertainty quantification analyses, especially when dealing with complex high-dimensional systems. In this context, a novel approach is proposed to address parameter variability when modeling systems with space- and time-varying responses: DMD is merged with another model-reduction technique, the Polynomial Chaos Expansion, for uncertainty quantification purposes. The study yields useful guidelines for DMD deployment, together with a demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
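For readers unfamiliar with the method, the following minimal sketch implements the standard (exact) DMD algorithm on a toy oscillatory data set; the snapshot matrix and truncation rank are illustrative assumptions, not the thesis' case studies.

```python
# Minimal sketch of exact DMD: build a reduced linear operator from snapshot
# data and extract its eigenvalues and spatial modes.
import numpy as np

def dmd(X, rank):
    """Exact DMD of a snapshot matrix X with shape (n_space, n_time)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, Sr_inv, Vr = U[:, :rank], np.diag(1.0 / s[:rank]), Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ X2 @ Vr @ Sr_inv      # reduced-order operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vr @ Sr_inv @ W                  # exact DMD modes
    return eigvals, modes

# Toy data: two spatial patterns oscillating at one frequency (rank-2 dynamics).
x = np.linspace(0, 10, 200)
t = np.linspace(0, 4 * np.pi, 100)
X = np.outer(np.sin(x), np.cos(2 * t)) + np.outer(np.cos(0.5 * x), np.sin(2 * t))

eigvals, modes = dmd(X, rank=2)
print("DMD eigenvalues:", np.round(eigvals, 4))   # expected on the unit circle
```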
Abstract:
In today's business landscape, the ability of a company or service firm to steer its innovation programmatically, so as to remain competitive in the market, is of fundamental importance. In many cases this means investing a substantial amount of money in projects that improve essential aspects of the product or service and have a significant impact on the company's digital transformation. The study proposed here concerns, in particular, two approaches that are typically in antithesis precisely because they rest on two different kinds of data, Big Data and Thick Data: respectively, Data Science and Design Thinking. In the following chapters, after defining the Design Thinking and Data Science approaches, the concept of blending is introduced, together with the issues surrounding the intersection of the two innovation methods. To highlight the different aspects of the topic, cases of companies that have integrated the two approaches into their innovation processes, achieving important results, are also reported. In particular, the author's research work on examining, classifying, and analyzing the existing literature at the intersection of data-driven innovation and design thinking is presented. Finally, a business case conducted at the hospital and healthcare organization of Parma is reported, in which, faced with a problem concerning the relationship between hospital clinicians and community clinicians, an innovative system was designed using Design Thinking. In addition, a critical "what-if" analysis is developed in order to outline a possible scenario in which methods or techniques from the Data Science world are integrated and applied to the case study.
Abstract:
The enamel microabrasion technique consists of selectively abrading discolored areas or inducing superficial structural changes. Because the technique uses abrasive products combined with acids, the evaluation of enamel roughness after this treatment, as well as after surface polishing, is necessary. This in-vitro study evaluated enamel roughness after microabrasion followed by different polishing techniques. Roughness analyses were performed before microabrasion (L1), after microabrasion (L2), and after polishing (L3). Sixty bovine incisor teeth were selected and divided into two groups (n=30): G1, 37% phosphoric acid (Dentsply) and pumice; G2, 6.6% hydrochloric acid associated with silicon carbide (Opalustre, Ultradent). The groups were then divided into three subgroups (n=10) according to the polishing system: A, fine and superfine aluminum oxide discs (SofLex, 3M); B, diamond paste (FGM) associated with felt discs (FGM); C, silicone tips (Enhance, Dentsply). A PROC MIXED procedure was applied after exploratory data analysis, along with the Tukey-Kramer test (5%). No statistical differences were found between groups G1 and G2. L2 differed statistically from L1, showing greater roughness. Specific subgroups (1A, 2B, and 1C) showed lower post-polishing roughness at L3 and differed statistically from L2. All products increased enamel roughness, and the effectiveness of the polishing systems depended on the abrasive used.
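As an illustration only (the study used SAS PROC MIXED with the Tukey-Kramer adjustment), the sketch below runs a Tukey HSD post-hoc comparison of simulated roughness values across polishing subgroups with statsmodels; the group labels and roughness values are invented for the example.

```python
# Minimal sketch (not the study's SAS analysis): Tukey HSD comparison of
# simulated post-polishing roughness across subgroups. Values are invented.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = np.repeat(["1A", "1B", "1C", "2A", "2B", "2C"], 10)
roughness = np.concatenate(
    [rng.normal(mu, 0.05, 10) for mu in (0.20, 0.30, 0.22, 0.28, 0.21, 0.29)]
)

result = pairwise_tukeyhsd(roughness, groups, alpha=0.05)
print(result.summary())
```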
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Eight different models for representing the effect of friction in control valves are presented: four based on physical principles and four empirical. The physical models, both static and dynamic, share the same structure. The models are implemented in Simulink/Matlab® and compared using different friction coefficients and input signals. Three of the models were able to reproduce the stick-slip phenomenon and passed all the tests, which were applied following ISA standards. © 2008 Elsevier Ltd. All rights reserved.
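As a hedged illustration of the kind of physics-based friction model compared in the paper, the following sketch simulates a Karnopp-type static-plus-Coulomb friction model for a valve stem; all parameter values and the input signal are illustrative assumptions, not the paper's models.

```python
# Minimal sketch of a Karnopp-type static + Coulomb friction model for a
# control valve stem (mass-spring system with friction).
import numpy as np

m, k = 1.6, 2.0e5          # stem mass [kg], spring constant [N/m]
F_s, F_c = 800.0, 600.0    # static and Coulomb friction forces [N]
dv, dt = 1e-4, 1e-4        # zero-velocity band [m/s], time step [s]

def simulate(actuator_force, x0=0.0):
    """Integrate the stem position for a given actuator force signal."""
    x, v, positions = x0, 0.0, []
    for F_a in actuator_force:
        F_net = F_a - k * x
        if abs(v) < dv:                       # stuck: stiction balances the load
            F_fric = np.clip(F_net, -F_s, F_s)
            v = 0.0
        else:                                 # sliding: Coulomb friction opposes motion
            F_fric = F_c * np.sign(v)
        v += (F_net - F_fric) / m * dt
        x += v * dt
        positions.append(x)
    return np.array(positions)

t = np.arange(0.0, 2.0, dt)
# Slow sinusoidal load: the stem stays stuck until the net force exceeds the
# static friction, then jumps, which is the stick-slip behaviour of interest.
stem = simulate(3000.0 * np.sin(2 * np.pi * 0.5 * t))
print("stem travel range [m]:", stem.min(), stem.max())
```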
Abstract:
Diketopiperazine (DKP) derivatives, named colletopiperazine, fusaperazine C and E, as well as four known DKPs, were isolated from cultures of Colletotrichum gloeosporioides and Penicillium crustosum, both endophytic fungi isolated from Viguiera robusta, and from a Fusarium sp., an endophyte of Viguiera arenaria, respectively. Their structures were established on the basis of their spectroscopic data. Conformational analysis of two known DKPs showed that folded conformations were as energetically stable as the extended one. © 2010 Elsevier Ltd. All rights reserved.
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross entropy method. The proposed density estimation procedure has numerous advantages over the traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
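For context, the sketch below shows a conventional data-driven bandwidth selection by cross-validated likelihood with scikit-learn; it illustrates the problem the paper addresses, not the proposed generalized cross entropy method, and the bandwidth grid is an illustrative assumption.

```python
# Minimal sketch: bandwidth selection for a Gaussian kernel density
# estimator via cross-validated log-likelihood (not the paper's criterion).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
X = data.reshape(-1, 1)

grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-1.5, 0.5, 25)},
    cv=5,                                 # 5-fold cross-validated log-likelihood
)
grid.fit(X)
print("selected bandwidth:", grid.best_params_["bandwidth"])
```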
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated-data approach. As such, they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory, data-driven parallelisation scheme is presented. © 2000 Published by Elsevier Science B.V. All rights reserved.
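As a minimal illustration of the replicated-data parallelisation pattern mentioned above, the sketch below uses mpi4py: every rank holds the full data set, computes its slice of the pairwise work, and the partial sums are combined; the data and the pair-energy kernel are illustrative assumptions, not code from the packages discussed.

```python
# Minimal sketch of a replicated-data parallel calculation with mpi4py.
# Run with, e.g., "mpiexec -n 4 python replicated_data_demo.py".
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1000
rng = np.random.default_rng(42)            # same seed: data replicated on every rank
coords = rng.random((n, 3))

energy_local = 0.0
for i in range(rank, n, size):             # round-robin distribution of particles
    r = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
    energy_local += np.sum(1.0 / r)

energy = comm.allreduce(energy_local, op=MPI.SUM)
if rank == 0:
    print("total pair energy:", energy)
```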
Abstract:
Background: The importance of serum triglyceride levels as a risk factor for cardiovascular diseases is uncertain. Methods and Results: We performed an individual participant data meta-analysis of prospective studies conducted in the Asia-Pacific region. Cox models were applied to the combined data from 26 studies to estimate the overall and region-, sex-, and age-specific hazard ratios for major cardiovascular diseases by fifths of triglyceride values. During 796 671 person-years of follow-up among 96 224 individuals, 670 and 667 deaths as a result of coronary heart disease (CHD) and stroke, respectively, were recorded. After adjustment for major cardiovascular risk factors, participants in the highest fifth of triglyceride levels had a 70% (95% CI, 47 to 96) greater risk of CHD death, an 80% (95% CI, 49 to 119) higher risk of fatal or nonfatal CHD, and a 50% (95% CI, 29 to 76) increased risk of fatal or nonfatal stroke compared with those in the lowest fifth. The association between triglycerides and CHD death was similar across subgroups defined by ethnicity, age, and sex. Conclusions: Serum triglycerides are an important and independent predictor of CHD and stroke risk in the Asia-Pacific region. These results may have clinical implications for cardiovascular risk prediction and the use of lipid-lowering therapy.
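As an illustration of the modelling approach (not the meta-analysis itself), the sketch below fits a Cox proportional-hazards model with triglyceride fifths as the exposure using the lifelines package; the simulated cohort, follow-up scheme, and column names are illustrative assumptions.

```python
# Minimal sketch: Cox proportional-hazards regression with triglyceride
# fifths as the exposure, on a simulated cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 5000
tg = rng.lognormal(mean=0.3, sigma=0.5, size=n)      # serum triglycerides
age = rng.normal(55, 10, n)
tg_fifth = pd.qcut(tg, 5, labels=False)              # fifths coded 0..4

# Simulated survival with higher hazard in the upper triglyceride fifths.
hazard = 0.01 * np.exp(0.15 * tg_fifth + 0.03 * (age - 55))
time = rng.exponential(1.0 / hazard)
event = (time < 10).astype(int)                      # events within 10 years
time = np.minimum(time, 10.0)                        # administrative censoring

df = pd.DataFrame({"time": time, "event": event,
                   "age": age, "tg_fifth": tg_fifth.astype(float)})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])            # exp(coef) = hazard ratio
```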
Abstract:
PURPOSE: Many guidelines advocate measurement of total or low-density lipoprotein cholesterol (LDL), high-density lipoprotein cholesterol (HDL), and triglycerides (TG) to determine treatment recommendations for preventing coronary heart disease (CHD) and cardiovascular disease (CVD). This analysis is a comparison of lipid variables as predictors of cardiovascular disease. METHODS: Hazard ratios for coronary and cardiovascular deaths by fourths of total cholesterol (TC), LDL, HDL, TG, non-HDL, TC/HDL, and TG/HDL values, and for a one standard deviation change in these variables, were derived in an individual participant data meta-analysis of 32 cohort studies conducted in the Asia-Pacific region. The predictive value of each lipid variable was assessed using the likelihood ratio statistic. RESULTS: Adjusting for confounders and regression dilution, each lipid variable had a positive (negative for HDL) log-linear association with fatal CHD and CVD. Individuals in the highest fourth of each lipid variable had approximately twice the risk of CHD compared with those with lowest levels. TG and HDL were each better predictors of CHD and CVD risk compared with TC alone, with test statistics similar to TC/HDL and TG/HDL ratios. Calculated LDL was a relatively poor predictor. CONCLUSIONS: While LDL reduction remains the main target of intervention for lipid-lowering, these data support the potential use of TG or lipid ratios for CHD risk prediction. © 2005 Elsevier Inc. All rights reserved.
Abstract:
Objective: High levels of domestic violence, mental illness, and alienation from authorities are associated with a high incidence of children and adolescents living on the streets in low- and middle-income countries. The Equilibrium Project (Programa Equilibrio) was created to facilitate social reintegration through a virtual partnership between an academic psychiatric institute and highly vulnerable children and adolescents living on the streets, in supervised group shelters, and in other high-risk situations. Methods: Descriptive presentation of qualitative data and analysis of preliminary empirical data collected over a 24-month period. Results: Dialogue between academic professionals, street children, and city officials shaped The Equilibrium Project over the last 2 years. The program has progressively moved from a professional clinic setting to a community-based but protected activity center with recreational and professional services and an emphasis on linkage with social service agencies, city government, and law enforcement officials in an academic research context. A total of 351 patients have been served, virtually all of whom were neglected by their parents; 58.4% reported physical or sexual abuse, 88.89% have been diagnosed with a psychiatric disorder, and 40.4% reported drug use. After 2 years of operation, 63.5% (n = 223) successfully completed or continue in treatment and 34.8% (n = 122) were reunited with their families. Conclusions and practice implications: Program development guided by consumer input led to a successful program offering professional services in a protected community setting that facilitates social reintegration by providing "go-between" services integrating relationships between alienated consumers and formal psychiatric, pediatric, social service, and criminal justice systems. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
In this study we use region-level panel data on rice production in Vietnam to investigate total factor productivity (TFP) growth in the period since reunification in 1975. Two significant reforms were introduced during this period, one in 1981 allowing farmers to keep part of their produce, and another in 1987 providing improved land tenure. We measure TFP growth using two modified forms of the standard Malmquist data envelopment analysis (DEA) method, which we have named the Three-year-window (TYW) and the Full Cumulative (FC) methods. We have developed these methods to deal with degrees of freedom limitations. Our empirical results indicate strong average TFP growth of between 3.3 and 3.5 per cent per annum, with the fastest growth observed in the period following the first reform. Our results support the assertion that incentive related issues have played a large role in the decline and subsequent resurgence of Vietnamese agriculture.
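As a hedged illustration of the DEA building block behind a Malmquist TFP index (not the paper's Three-year-window or Full Cumulative variants), the sketch below computes output-oriented, constant-returns-to-scale efficiency scores with scipy's linear programming solver; the toy regional input-output data are illustrative assumptions.

```python
# Minimal sketch: output-oriented, constant-returns-to-scale DEA efficiency
# scores via linear programming, the building block of a Malmquist index.
import numpy as np
from scipy.optimize import linprog

# rows = regions (DMUs); X = inputs (e.g. land, labour), Y = outputs (rice)
X = np.array([[10.0, 20.0], [8.0, 15.0], [12.0, 30.0], [9.0, 18.0]])
Y = np.array([[100.0], [90.0], [110.0], [105.0]])

def output_expansion(o, X, Y):
    """Return phi >= 1 for DMU o; 1/phi is its technical efficiency."""
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                       # maximise phi
    A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]       # sum_j lam_j x_ij <= x_io
    A_out = np.c_[Y[o].reshape(-1, 1), -Y.T]           # phi*y_ro <= sum_j lam_j y_rj
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[X[o], np.zeros(Y.shape[1])],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"region {o}: technical efficiency = {1.0 / output_expansion(o, X, Y):.3f}")
```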