928 results for Data-driven energy efficiency
Abstract:
The Exhibitium Project, awarded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base for producing a platform for the recording and exploitation of data about art exhibitions available on the Internet. Our proposal therefore aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as the development framework.
Abstract:
Our proposal aims to present the analysis techniques and methodologies, as well as the most relevant results expected, within the framework of the Exhibitium project (http://www.exhibitium.com). Awarded by the BBVA Foundation, the Exhibitium project is being developed by an international consortium of several research groups. Its main purpose is to build a comprehensive and structured data repository about temporary art exhibitions, captured from the web, to make them useful and reusable in various domains through open and interoperable data systems.
Abstract:
This thesis explores organizations' attitudes toward the business processes that sustain them: from a near-absence of structure, to functional organization, up to the advent of Business Process Reengineering and Business Process Management, which emerged to overcome the limits and problems of the preceding model. Within the BPM life cycle, process mining provides a level of process analysis based on event data logs, i.e., the records of events relating to all activities supported by an enterprise information system. Process mining can be seen as a natural bridge connecting process-based management disciplines (which are not data-driven) with new developments in business intelligence, capable of handling and manipulating the enormous volume of data available to companies (but which are not process-driven). The thesis describes the requirements and technologies that enable the discipline, as well as the three techniques it supports: process discovery, conformance checking and process enhancement. Process mining was used as the main tool in a consulting project carried out by HSPI S.p.A. for a major Italian client, a provider of IT platforms and solutions. The project in which I took part, described in the thesis, aims to support the organization in its plan to improve internal performance, and made it possible to verify the applicability and limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases and direct channels. For its validity and completeness, this document has been published on the website of the IEEE Task Force on Process Mining.
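As a minimal illustration of the process-discovery technique mentioned above, the directly-follows relation can be counted straight from an event log; the toy log and activity names below are hypothetical, and real discovery algorithms (e.g. the alpha miner) build on exactly this statistic.

```python
from collections import Counter

# Toy event log: one trace (sequence of activities) per case.
# Activity names are invented for illustration only.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "archive"],
]

# Directly-follows graph: count how often activity b immediately
# follows activity a across all traces.
dfg = Counter()
for trace in log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

# dfg now encodes the observed control flow, e.g. that "check"
# always follows "register" and branches into approve/reject.
```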
Abstract:
Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity it introduces into the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment as well as the interaction of occupants on the thermal behaviour of the building, in this work, techniques have been developed for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to determine estimates of the disturbances from measured data, which were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required for use within the MPC strategy itself. The disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allowed the tuning parameters typically associated with an MPC cost function to be omitted from the formulation, by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework.
The improved ability of the formulation to be set up and reconfigured under faulted conditions was shown.
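The prioritised-objective (lexicographic) idea can be sketched as two linear programs solved in sequence: first minimise the comfort violation, then minimise energy with the achieved violation fixed as a constraint. The one-state thermal model and every parameter value below are hypothetical illustrations, not the thesis's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 1-state thermal model x[k+1] = a*x[k] + b*u[k] + d[k]
a, b = 0.9, 1.0
N = 8                          # prediction horizon
x0 = 18.0                      # initial room temperature (degC)
d = np.full(N, -0.3)           # unmeasured heat-loss disturbance
T_low, T_high = 20.0, 24.0     # comfort band
u_max = 3.0                    # heating power limit

# Prediction matrices: x = F*x0 + G@u + H@d, with x = (x1..xN)
F = np.array([a ** (k + 1) for k in range(N)])
G = np.zeros((N, N)); H = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = a ** (k - j) * b
        H[k, j] = a ** (k - j)
c0 = F * x0 + H @ d            # free response (no control)

# Decision vector z = [u (N), s (N comfort slacks)]
# Comfort with slack: T_low - s <= x <= T_high + s
A_ub = np.block([[-G, -np.eye(N)],    # -G u - s <= c0 - T_low
                 [ G, -np.eye(N)]])   #  G u - s <= T_high - c0
b_ub = np.concatenate([c0 - T_low, T_high - c0])
bounds = [(0.0, u_max)] * N + [(0.0, None)] * N

# Priority 1: minimise total comfort violation
c1 = np.concatenate([np.zeros(N), np.ones(N)])
r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

# Priority 2: minimise energy, holding the violation at its optimum
c2 = np.concatenate([np.ones(N), np.zeros(N)])
A2 = np.vstack([A_ub, c1])            # append sum(s) <= s* + tol
b2 = np.concatenate([b_ub, [r1.fun + 1e-6]])
r2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=bounds)
u_opt = r2.x[:N]                      # energy-minimal heating profile
```

No trade-off weight between comfort and energy appears anywhere: the ordering of the two solves replaces the cost-function tuning parameters.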
Abstract:
The internet and digital technologies have revolutionized the economy. Regulating the digital market has become a priority for the European Union. While promoting innovation and development, EU institutions must ensure that the digital market maintains a competitive structure. Among the numerous elements characterizing the digital sector, users' data are particularly important. Digital services are centered around personal data, the accumulation of which has contributed to the centralization of market power in the hands of a few large providers. As a result, data-driven mergers and data-related abuses have gained a central role for the purposes of EU antitrust enforcement. In light of these considerations, this work aims at assessing whether EU competition law is well-suited to address data-driven mergers and data-related abuses of dominance. These conducts are of crucial importance to the maintenance of competition in the digital sector, insofar as the accumulation of users' data constitutes a fundamental competitive advantage. To begin with, Part 1 addresses the specific features of the digital market and their impact on the definition of the relevant market and the assessment of dominance by antitrust authorities. Part 2 analyzes the EU's case law on data-driven mergers to verify whether merger control is well-suited to address these concentrations. Part 3 discusses abuses of dominance in the phase of data collection and the legal frameworks applicable to these conducts. Part 4 focuses on access to "essential" datasets and the indirect effects of anticompetitive conducts on rivals' ability to access users' information. Finally, Part 5 discusses differential pricing practices implemented online and based on personal data. As the analysis will show, the combination of efficient competition law enforcement and the hoped-for adoption of specific regulation seems to be the best solution to face the challenges raised by "data-related dominance".
Abstract:
Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, the current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mainly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times, promoting the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved to better highlight and enhance relevant neural features related to P300 and motor states than canonical EEG analyses.
Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
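The temporal-then-spatial separable convolution typical of light, compact EEG CNNs can be sketched as a single NumPy forward pass. All shapes, kernel sizes and random weights below are illustrative assumptions, not the architectures developed in this research.

```python
import numpy as np

rng = np.random.default_rng(3)
C, T = 8, 128                       # hypothetical EEG channels x time samples
x = rng.standard_normal((C, T))     # one synthetic trial

# Stage 1 - temporal convolution: a few short kernels applied per channel
# (acts like a learned band-pass filter bank in compact EEG CNNs).
k_t = 15
temporal_kernels = rng.standard_normal((4, k_t)) * 0.1
feat = np.stack([[np.convolve(x[c], w, mode="valid") for c in range(C)]
                 for w in temporal_kernels])          # (4, C, T - k_t + 1)

# Stage 2 - spatial stage: one weighting across channels per feature map
# (depthwise, keeping the model light and the weights interpretable).
spatial_w = rng.standard_normal((4, C)) * 0.1
feat = np.einsum("fct,fc->ft", feat, spatial_w)       # (4, 114)

# Average pooling and a linear softmax readout to two classes
pooled = feat.reshape(4, -1, 2).mean(axis=2)          # (4, 57)
W = rng.standard_normal((2, pooled.size)) * 0.01
scores = W @ pooled.ravel()
probs = np.exp(scores - scores.max()); probs /= probs.sum()
```

Because each stage is factored (temporal, then spatial), the learned kernels can be inspected directly as frequency and topography signatures, which is the interpretability property the thesis exploits.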
Abstract:
Long-term monitoring of acoustic environments is gaining popularity thanks to the relevant scientific and engineering insights that it provides. The increasing interest is due to the constant growth of the storage capacity and computational power needed to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view. The equivalent continuous level Leq indeed represents the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach, based on the study of the occurrences of sound pressure levels, brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements. The statistical mode of the occurrences can capture typical behaviours of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The presented method is based on clustering analysis. Two algorithms, Gaussian Mixture Model and K-means clustering, form the core of a process for investigating different active spaces monitored through sound level meters. The procedure has been applied in two different contexts: university lecture halls and offices.
The proposed method shows robust and reliable results in describing the acoustic scenario and it could represent an important analytical tool for acousticians.
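A minimal sketch of the clustering idea, assuming sound pressure level occurrences from two coexisting sources: a plain 1-D K-means separates the level distribution into per-source modes. The data, level values and source labels below are synthetic illustrations, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic SPL samples (dB): two hypothetical coexisting sources,
# e.g. background ventilation (~45 dB) and speech activity (~65 dB).
spl = np.concatenate([rng.normal(45, 2, 500), rng.normal(65, 3, 300)])

def kmeans_1d(x, k=2, iters=50):
    """Plain 1-D K-means: assign each level to the nearest centroid,
    then move each centroid to its cluster mean."""
    centroids = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([x[labels == j].mean() for j in range(k)])
    return centroids, labels

centers, labels = kmeans_1d(spl)
# Each cluster mean approximates the most probable level (statistical
# mode) of one source, separating the coexisting contributions.
```

A Gaussian Mixture Model plays the same role in the paper's pipeline but additionally yields per-source variances and soft membership probabilities.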
Abstract:
In today's business landscape, the ability of a company or service firm to steer its innovation programmatically is of fundamental importance in order to remain competitive in the market. In many cases, this means investing a considerable amount of money in projects that will improve essential aspects of the product or service and that will have an important impact on the company's digital transformation. The study proposed here concerns, in particular, two approaches that are typically in antithesis to each other, precisely because they are based on two different types of data: Big Data and Thick Data. The two approaches are, respectively, Data Science and Design Thinking. In the following chapters, after defining the Design Thinking and Data Science approaches, the concept of blending is introduced, together with the problems surrounding the intersection of the two innovation methods. To highlight the various aspects of the topic, cases of companies that have integrated the two approaches into their innovation processes, obtaining important results, are also reported. In particular, the author's research on the review, classification and analysis of the existing literature at the intersection of data-driven and design-thinking-driven innovation is presented. Finally, a business case is reported, conducted at the hospital and healthcare organization of Parma, in which, faced with a problem concerning the relationship between hospital clinicians and community clinicians, an innovative system was designed using Design Thinking. Furthermore, a critical "what-if" analysis is developed in order to elaborate a possible scenario that integrates methods or techniques from the Data Science world and to apply it to the case study in question.
Abstract:
Context. Observations in the cosmological domain are heavily dependent on the validity of the cosmic distance-duality (DD) relation, eta = D_L(z)(1+z)^(-2)/D_A(z) = 1, an exact result required by the Etherington reciprocity theorem, where D_L(z) and D_A(z) are, respectively, the luminosity and angular diameter distances. In the limit of very small redshifts, D_A(z) = D_L(z) and this ratio is trivially satisfied. Measurements of the Sunyaev-Zeldovich effect (SZE) and X-rays combined with the DD relation have been used to determine D_A(z) from galaxy clusters. This combination offers the possibility of testing the validity of the DD relation, as well as determining which physical processes occur in galaxy clusters via their shapes. Aims. We use the WMAP (7-year) results, fixing the conventional Lambda CDM model, to verify the consistency between the validity of the DD relation and different assumptions about galaxy cluster geometries usually adopted in the literature. Methods. We assume that eta is a function of the redshift, parametrized by two different relations: eta(z) = 1 + eta_0 z, and eta(z) = 1 + eta_0 z/(1+z), where eta_0 is a constant parameter quantifying the possible departure from the strict validity of the DD relation. In order to determine the probability density function (PDF) of eta_0, we consider the angular diameter distances from galaxy clusters recently studied by two different groups, assuming elliptical (isothermal) and spherical (non-isothermal) beta models. The strict validity of the DD relation will occur only if the maximum of the eta_0 PDF is centered on eta_0 = 0. Results. We found that the elliptical beta model is in good agreement with the data, showing no violation of the DD relation (PDF peaked close to eta_0 = 0 at 1 sigma), while the spherical (non-isothermal) one is only marginally compatible at 3 sigma. Conclusions. The present results, derived by combining the SZE and X-ray surface brightness data from galaxy clusters with the latest WMAP (7-year) results, favor the elliptical geometry for galaxy clusters. It is remarkable that a local property like the geometry of galaxy clusters might be constrained by a global argument provided by the cosmic DD relation.
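In LaTeX form, the distance-duality relation and the two parametrizations tested in the abstract above read:

```latex
\eta(z) \equiv \frac{D_L(z)}{(1+z)^2\, D_A(z)} = 1,
\qquad
\eta(z) = 1 + \eta_0\, z,
\qquad
\eta(z) = 1 + \eta_0\,\frac{z}{1+z},
```

so that strict validity of the DD relation corresponds to the PDF of the constant $\eta_0$ peaking at $\eta_0 = 0$.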
Abstract:
Early reports stated that Au was a catalyst of choice for the BOR because it would yield near-complete faradaic efficiency. However, it has recently been suggested that gold could yield, to some extent, the heterogeneous hydrolysis of BH4^-, therefore lowering the electron count per BH4^-, especially at low potential. In fact, the BOR mechanism on Au will remain unclear as long as no physical proof regarding the reaction intermediates is put forward. In that context, in situ physical techniques like FTIR are of interest for studying the BOR. Consequently, in situ infrared reflectance spectroscopy (SPAIRS) measurements have been performed in 1 M NaOH / 1 M NaBH4 on a gold electrode with the aim of detecting the intermediate species. We monitored several bands in the B-H (ν̄ ≈ 1180, 1080 and 972 cm⁻¹) and B-O (ν̄ = 1325 and ≈ 1425 cm⁻¹) bond regions, which appear sequentially as a function of the electrode polarization. These absorption bands are assigned to BH3, BH2 and BO2^- species. In light of the experimental results, possible initial elementary steps of the BOR on a gold electrode have been proposed and discussed according to the relevant literature data.
Abstract:
Eight different models representing the effect of friction in control valves are presented: four based on physical principles and four empirical. The physical models, both static and dynamic, share the same structure. The models are implemented in Simulink/MATLAB® and compared using different friction coefficients and input signals. Three of the models were able to reproduce the stick-slip phenomenon and passed all the tests, which were applied following ISA standards. © 2008 Elsevier Ltd. All rights reserved.
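One family of physics-based static friction models can be sketched as a Karnopp-style law, which reproduces sticking whenever the applied force stays below the stiction level, and slipping against Coulomb friction otherwise. All parameter values below are illustrative assumptions, not coefficients from the paper.

```python
import numpy as np

# Hypothetical parameters for a sliding valve stem
m, dt = 1.0, 1e-3          # mass and integration step
F_c, F_s = 0.8, 1.2        # Coulomb and static (stiction) friction levels
dv = 1e-4                  # Karnopp zero-velocity band

def step(x, v, F_ext):
    """One integration step: stick if inside the velocity band and the
    applied force cannot break static friction; otherwise slip."""
    if abs(v) < dv and abs(F_ext) <= F_s:
        return x, 0.0                      # stuck: velocity clamped to zero
    F_f = np.sign(v if abs(v) >= dv else F_ext) * F_c
    a = (F_ext - F_f) / m
    v_new = v + a * dt
    return x + v_new * dt, v_new

x, v, x_hist = 0.0, 0.0, []
for k in range(2000):
    F = 1.5 * np.sin(2 * np.pi * 0.5 * k * dt)   # slow sinusoidal input
    x, v = step(x, v, F)
    x_hist.append(x)
# The stem stays put until the input exceeds F_s, then jumps: the
# stick-slip behaviour the compared models must reproduce.
```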
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross-entropy method. The proposed density estimation procedure has numerous advantages over traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
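For contrast with the proposed estimator, the traditional baseline it improves upon can be sketched: a Gaussian kernel density estimator with a simple data-driven bandwidth (Silverman's rule of thumb). The paper's generalized-cross-entropy selector replaces this rule-of-thumb choice; the data below are synthetic.

```python
import numpy as np

def kde(x_grid, data, h=None):
    """Gaussian kernel density estimate. If h is None, use Silverman's
    rule-of-thumb bandwidth, a classical data-driven choice."""
    data = np.asarray(data, float)
    if h is None:
        h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 400)      # synthetic data
grid = np.linspace(-4, 4, 81)
dens = kde(grid, sample)                # estimated density on the grid
```

Rule-of-thumb bandwidths can be far from optimal for multimodal data, which is precisely the weakness that data-driven selectors such as the one proposed in the abstract address.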
Abstract:
Functional magnetic resonance imaging (FMRI) analysis methods can, quite generally, be divided into hypothesis-driven and data-driven approaches. The former are utilised in the majority of FMRI studies, where a specific haemodynamic response is modelled utilising knowledge of event timing during the scan and is tested against the data using a t test or a correlation analysis. These approaches often lack the flexibility to account for variability in haemodynamic response across subjects and brain regions, which is of specific interest in high-temporal-resolution event-related studies. Current data-driven approaches attempt to identify components of interest in the data, but currently do not utilise any physiological information for the discrimination of these components. Here we present a hypothesis-driven approach that is an extension of Friman's maximum correlation modelling method (NeuroImage 16, 454-464, 2002), specifically focused on discriminating the temporal characteristics of event-related haemodynamic activity. Test analyses, on both simulated and real event-related FMRI data, will be presented.
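The hypothesis-driven scheme described above, a modelled haemodynamic regressor tested against a voxel time series with a correlation and t statistic, can be sketched on synthetic data. The event train, the gamma-like response shape and all sizes below are simulated assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
onsets = np.zeros(n); onsets[::40] = 1.0        # hypothetical event timing
t = np.arange(20)
hrf = (t / 5.0) ** 2 * np.exp(-t / 3.0)         # gamma-like response shape
hrf /= hrf.max()
model = np.convolve(onsets, hrf)[:n]            # modelled haemodynamic regressor
voxel = model + 0.5 * rng.standard_normal(n)    # synthetic active voxel

# Hypothesis-driven test: correlate the model with the data ...
r = np.corrcoef(model, voxel)[0, 1]
# ... and convert to a t statistic for H0: no correlation
t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))
```

The inflexibility noted in the abstract is visible here: a fixed `hrf` shape is assumed for every subject and region, which is what the maximum-correlation extension relaxes.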
Abstract:
Simultaneous acquisition of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) aims to disentangle the description of brain processes by exploiting the advantages of each technique. Most studies in this field focus on exploring the relationships between fMRI signals and the power spectrum at specific frequency bands (alpha, beta, etc.). On the other hand, brain mapping of EEG signals (e.g., interictal spikes in epileptic patients) usually assumes a haemodynamic response function for a parametric analysis applying the GLM, as a rough approximation. The integration of the information provided by the high spatial resolution of MR images and the high temporal resolution of EEG may be improved by relating them through transfer functions, which allow the identification of neurally driven areas without strong assumptions about haemodynamic response shapes or the homogeneity of brain haemodynamics. The difference in sampling rates is the first obstacle to a full integration of EEG and fMRI information. Moreover, a parametric specification of a function representing the commonalities of both signals is not established. In this study, we introduce a new data-driven method for estimating the transfer function from the EEG signal to the fMRI signal at the EEG sampling rate. This approach avoids subsampling the EEG to the fMRI time resolution and naturally provides a test of the EEG's predictive power over BOLD signal fluctuations, within a well-established statistical framework. We illustrate this concept in resting state (eyes closed) and visual simultaneous fMRI-EEG experiments. The results show that it is possible to predict the BOLD fluctuations in the occipital cortex using EEG measurements. © 2010 Elsevier Inc. All rights reserved.
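The core idea of estimating a transfer function from an EEG-derived signal to BOLD can be sketched as FIR-kernel estimation by least squares on lagged regressors; the signals, kernel shape and sizes below are synthetic assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 600, 20
eeg = rng.standard_normal(n)               # surrogate EEG-derived regressor
true_h = np.exp(-np.arange(L) / 4.0)       # hypothetical haemodynamic-like kernel
bold = np.convolve(eeg, true_h)[:n] + 0.1 * rng.standard_normal(n)

# Lagged design matrix X[t, k] = eeg[t - k]; the least-squares solution
# is the estimated FIR transfer function from EEG to BOLD.
X = np.column_stack([np.concatenate([np.zeros(k), eeg[:n - k]])
                     for k in range(L)])
h_hat, *_ = np.linalg.lstsq(X, bold, rcond=None)
```

Because the kernel is estimated rather than assumed, no canonical haemodynamic response shape or spatial homogeneity is imposed, which is the advantage the abstract emphasises; the residual variance of the fit provides a natural test of the EEG's predictive power.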
Abstract:
Resting state functional magnetic resonance imaging (fMRI) reveals a distinct network of correlated brain function representing a default mode state of the human brain. The underlying structural basis of this functional connectivity pattern is still widely unexplored. We combined fractional anisotropy measures of fiber tract integrity derived from diffusion tensor imaging (DTI) and resting state fMRI data obtained at 3 Tesla from 20 healthy elderly subjects (56 to 83 years of age) to determine the white matter microstructure underlying default mode connectivity. We hypothesized that the functional connectivity between the posterior cingulate and hippocampus from resting state fMRI data would be associated with the white matter microstructure in the cingulate bundle and in fiber tracts connecting the posterior cingulate gyrus with the lateral temporal lobes, medial temporal lobes, and precuneus. This was demonstrated at the p < 0.001 level using a voxel-based multivariate analysis of covariance (MANCOVA) approach. In addition, we used a data-driven technique of joint independent component analysis (ICA) that uncovers spatial patterns that are linked across modalities. It revealed a pattern of white matter tracts, including the cingulate bundle and associated fiber tracts, resembling the findings from the hypothesis-driven analysis, and this pattern was linked to the pattern of default mode network (DMN) connectivity in the resting state fMRI data. Our findings support the notion that the functional connectivity between the posterior cingulate and hippocampus, and the functional connectivity across the entire DMN, is based on distinct patterns of anatomical connectivity within the cerebral white matter. © 2009 Elsevier Inc. All rights reserved.