21 results for Bayesian Normal Mixture Model, Data Binning, Data Analysis


Relevance: 100.00%

Abstract:

The influence of applying European default traffic values to the production of a noise map was evaluated in a typical environment, Palma de Mallorca. To assess these default traffic values, a first model was created and compared with measured noise levels. Subsequently, a second traffic model, improving the input data used for the first one, was created and validated against the observed deviations. Different methodologies for collecting higher-quality model input data were also examined, by analysing how much each reduced the uncertainty that road traffic noise emission introduces into the noise map.
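
The validation step described here, comparing modelled against measured noise levels at control points, reduces to simple deviation statistics. A minimal sketch (all values are hypothetical, not taken from the study):

```python
import numpy as np

# Hypothetical measured vs. modelled noise levels (dB(A)) at control points.
measured = np.array([62.1, 58.4, 65.0, 60.2, 63.7])
modelled = np.array([63.0, 57.1, 66.2, 61.5, 62.9])

deviations = modelled - measured
print(f"mean deviation:    {deviations.mean():+.2f} dB")
print(f"std of deviations: {deviations.std(ddof=1):.2f} dB")
print(f"RMSE:              {np.sqrt(np.mean(deviations**2)):.2f} dB")
```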

Relevance: 100.00%

Abstract:

In recent years significant efforts have been devoted to the development of advanced data analysis tools, both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goals of advancing the understanding of the physics of these events and of preparing for ITER. On JET, the latest generation of the disruption predictor, called APODIS, has been deployed in the real-time network during the last campaigns with the new metallic wall. Although it was trained only on discharges with the carbon wall, it has reached very good performance, with both missed alarms and false alarms of the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for optimising the mitigation measures, a new clustering method, based on the geodesic distance on a probabilistic manifold, has been developed. This technique allows automatic classification of an incoming disruption with a success rate better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organising Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.
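
Of the manifold learning tools mentioned, Principal Component Analysis is the simplest to illustrate: the operational space is projected onto a few directions of maximal variance so that devices can be compared visually. A minimal sketch on hypothetical data, not the actual JET/AUG analysis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical operational-space data: rows are discharges, columns are
# plasma parameters (e.g. density, current, input power, ...).
X = rng.normal(size=(200, 8))

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)  # 2-D projection for visual comparison of devices
print("explained variance ratio:", pca.explained_variance_ratio_)
```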

Relevance: 100.00%

Abstract:

Tolls have increasingly become a common mechanism to fund road projects in recent decades. Improving knowledge of demand behavior therefore constitutes a key aspect for stakeholders dealing with the management of toll roads. However, the literature on demand elasticity estimates for interurban toll roads is still limited, owing to their relatively scarce number in the international context. Furthermore, existing research has left some aspects open to investigation, among them the choice of GDP as the most common socioeconomic variable to explain traffic growth over time. This paper aims to determine the variables that best explain the evolution of light vehicle demand on toll roads over the years. To that end, we establish a dynamic panel data methodology aimed at identifying the key socioeconomic variables explaining changes in light vehicle demand over time. The results show that, despite some usefulness, GDP does not constitute the most appropriate explanatory variable, while other parameters such as employment or GDP per capita lead to more stable and consistent results. The methodology is applied to Spanish toll roads for the 1990–2011 period, which constitutes a very interesting case study of variations in toll road use, as road demand has decreased significantly since the beginning of the economic crisis in 2008.
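
A dynamic panel specification of this kind, demand explained by its own lag plus socioeconomic variables with road fixed effects, can be sketched as follows. All variable names and data are hypothetical, and a production study would use a GMM estimator such as Arellano-Bond rather than the crude dummy-variable OLS shown here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical panel: annual traffic per toll road, plus covariates.
df = pd.DataFrame({
    "road": np.repeat(["A", "B", "C"], 10),
    "year": np.tile(range(2002, 2012), 3),
    "traffic": rng.lognormal(9, 0.1, 30),
    "employment": rng.normal(100, 5, 30),
    "gdp_pc": rng.normal(25_000, 1_000, 30),
})
df["lag_traffic"] = df.groupby("road")["traffic"].shift(1)
df = df.dropna()

# Dynamic fixed-effects specification via road dummies (a rough sketch).
model = smf.ols(
    "np.log(traffic) ~ np.log(lag_traffic) + employment + gdp_pc + C(road)",
    data=df,
)
print(model.fit().summary())
```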

Relevance: 100.00%

Abstract:

Contents:
- Center for Open Middleware
- POSDATA project
- User modeling
- Some early results
- @posdata service

Relevance: 100.00%

Abstract:

In recent years, society has been undergoing a series of changes. One of these changes is datafication. This term can be defined as the systematic transformation of aspects of people's everyday lives into computer-processed data. Every day, every minute and every second, whenever someone uses a digital device, data are being stored somewhere. It may be the content of an e-mail, but it may also be the number of steps that person has walked, or their medical history. Merely storing data provides no added value by itself. Extracting knowledge from data, and thereby giving it value, requires data analysis. Data science, together with data analysis, is becoming increasingly popular. Today, millions of statistical web APIs can be found; these APIs offer the possibility of analysing trends or sentiments present in social networks or on the internet in general. One of the most popular social networks, Twitter, is public: every message, or tweet, published can be seen by anyone in the world with an internet connection. This makes Twitter an interesting medium for analysing social habits or consumption profiles. It is in this context that this project is framed. Combining statistical data analysis and content analysis, this work tries to extract knowledge from public Twitter tweets. In particular, it attempts to establish whether gender is an influential factor in the relationships between Twitter users. To this end, a database containing almost 2,000 tweets is analysed. First, the gender of the users is determined by means of web APIs. Second, hypothesis testing is used to establish whether gender influences how users relate to other users. Finally, a statistical model is built to predict the behaviour of Twitter users in relation to their gender.
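
The hypothesis-testing step can be illustrated with a chi-square test of independence on a gender-by-interaction contingency table. A minimal sketch with made-up counts; the project's actual test and data may differ:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = gender of the tweet author,
# columns = gender of the user they interact with (reply/mention).
table = np.array([
    [420, 310],   # male authors:   interactions with male, female users
    [280, 390],   # female authors: interactions with male, female users
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
# A small p-value would suggest gender and interaction pattern
# are not independent.
```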

Relevance: 100.00%

Abstract:

Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
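
The essence of such a model, analysing each module once and composing cached per-module summaries instead of re-analysing the whole program, can be caricatured in a few lines. This is a toy illustration under invented names, not the authors' actual analysis framework:

```python
# Toy sketch: per-module analysis summaries cached to support separate
# compilation. A real system would store abstract call patterns and
# success substitutions, not just the exported procedure signatures.

analysis_cache: dict[str, dict] = {}

def analyse_module(name: str, exports: dict[str, str]) -> dict:
    """Analyse one module in isolation and cache its interface summary."""
    summary = {"exports": exports}  # e.g. {"append/3": "ground-ground-var"}
    analysis_cache[name] = summary
    return summary

def analyse_program(modules: dict[str, dict[str, str]]) -> None:
    """Re-analyse only the modules whose summary is not cached."""
    for name, exports in modules.items():
        if name not in analysis_cache:
            analyse_module(name, exports)

analyse_program({"lists": {"append/3": "ground-ground-var"}})
print(analysis_cache)
```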

Relevance: 100.00%

Abstract:

In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, with the certainty of these beliefs updated based on the MSBN. The classical BDI agent architecture has been extended to manage uncertainty using Bayesian reasoning. The resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario, where it has been compared with two previously developed agent models. The evaluation has been carried out on a real testbed diagnosis scenario using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
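
The belief-update mechanism rests on ordinary Bayesian conditioning of a belief's certainty on new evidence. A hand-rolled two-node toy for a fault-diagnosis belief (the probabilities are invented; the paper's agents perform this kind of update through MSBN inference, not this simplification):

```python
# Toy Bayesian update for a diagnosis belief.
# Hypothesis F: "link is faulty". Observation O: "ping test failed".

p_fault = 0.10                 # prior belief P(F)
p_obs_given_fault = 0.90       # P(O | F)
p_obs_given_ok = 0.05          # P(O | not F)

# Bayes' rule: P(F | O) = P(O | F) P(F) / P(O)
p_obs = p_obs_given_fault * p_fault + p_obs_given_ok * (1 - p_fault)
p_fault_given_obs = p_obs_given_fault * p_fault / p_obs
print(f"updated belief P(fault | ping failed) = {p_fault_given_obs:.3f}")  # ~0.667
```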

Relevance: 100.00%

Abstract:

This paper presents a numerical implementation of the cohesive crack model for the analysis of quasibrittle materials, based on the strong discontinuity approach in the framework of the finite element method. A simple central force model is used for the stress versus crack opening curve. The additional degrees of freedom defining the crack opening are determined at the crack level, thus avoiding the need for a static condensation at the element level. The need for a tracking algorithm is avoided by using a consistent procedure for the selection of the separated nodes. The model, which takes into account the anisotropy of the material, is implemented in a commercial program by means of a user subroutine and contrasted with experimental results. Numerical simulations of well-known experiments are presented to show the ability of the proposed model to simulate the fracture of quasibrittle materials such as mortar, concrete and masonry.
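
The stress versus crack-opening curve at the heart of any cohesive crack model is just a softening function. The sketch below uses generic linear softening with invented material parameters for illustration; the paper's central force model relates the traction vector to the opening in its own specific way:

```python
def cohesive_stress(w: float, f_t: float = 3.0, w_c: float = 0.1) -> float:
    """Linear softening cohesive law (illustrative, not the paper's curve).

    w   : crack opening [mm]
    f_t : tensile strength [MPa]
    w_c : critical opening beyond which stress transfer vanishes [mm]
    """
    if w <= 0.0:
        return f_t            # crack not yet open: stress at tensile strength
    if w >= w_c:
        return 0.0            # fully open crack: no stress transfer
    return f_t * (1.0 - w / w_c)

for w in (0.0, 0.02, 0.05, 0.1):
    print(f"w = {w:.2f} mm -> sigma = {cohesive_stress(w):.2f} MPa")
```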

Relevance: 100.00%

Abstract:

Using the Bayesian approach as the model selection criterion, the main purpose of this study is to establish a practical road accident model that provides better interpretation and prediction performance. For this purpose we use a structural explanatory model with an autoregressive error term. The model estimation is carried out through Bayesian inference, and the best model is selected based on goodness-of-fit measures. To cross-validate the model estimation, a further prediction analysis was carried out. The number of fatal accidents in Spain during 2000–2011 was employed as the road safety measure. The results of the variable selection process show that the factors explaining fatal road accidents are mainly exposure, economic factors, and surveillance and legislative measures. The model selection shows that the impact of economic factors on fatal accidents during the period under study has been greater than that of surveillance and legislative measures.
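
The model structure, a regression with an autoregressive error term, can be sketched with statsmodels' GLSAR. The paper estimates its model by Bayesian inference; this frequentist sketch on simulated series only shows the specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 12  # annual data, e.g. 2000-2011

# Simulated explanatory variables: exposure and an economic indicator.
exposure = rng.normal(100, 5, n)
economy = rng.normal(0, 1, n)
X = sm.add_constant(np.column_stack([exposure, economy]))

# Simulated fatality counts with an AR(1) error term.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 2)
y = 500 - 1.5 * exposure + 10 * economy + e

model = sm.GLSAR(y, X, rho=1)          # regression with AR(1) errors
results = model.iterative_fit(maxiter=10)
print("coefficients:", results.params, "rho:", model.rho)
```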

Relevance: 100.00%

Abstract:

We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multi-channel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
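
The core of MBIS, clustering multi-channel intensities with a mixture of multivariate normals, can be illustrated with scikit-learn's GaussianMixture. This toy sketch on synthetic two-channel "voxels" omits everything that makes MBIS distinctive (bias-field correction, the hidden Markov random field, graph-cuts optimization):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-channel voxel intensities drawn from three tissue classes.
tissues = [
    rng.multivariate_normal([1.0, 0.2], 0.01 * np.eye(2), 500),
    rng.multivariate_normal([0.5, 0.6], 0.01 * np.eye(2), 500),
    rng.multivariate_normal([0.2, 1.0], 0.01 * np.eye(2), 500),
]
X = np.vstack(tissues)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)   # hard segmentation: one tissue label per voxel
print("estimated class means:\n", gmm.means_)
```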

Relevance: 100.00%

Abstract:

Due to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross multidimensional model data. Using these, we propose diagrams and descriptions of use cases, which can be considered patterns representing DSS functionality with regard to processing the DW data on which DSS are based. We highlight the reusability and automation benefits that can thus be achieved, and we believe this study can serve as a guide in the development of DSS.
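
Enumerating all the ways of crossing multidimensional data amounts to generating every subset of the cube's dimensions, each subset corresponding to one possible aggregation level over the fact table. A minimal sketch with hypothetical dimension names (the paper's algorithm may enumerate crossings differently):

```python
from itertools import chain, combinations

dimensions = ["time", "product", "region"]  # hypothetical DW dimensions

# All non-empty dimension subsets: each one is a possible way of
# aggregating (grouping) the fact table.
crossings = list(chain.from_iterable(
    combinations(dimensions, r) for r in range(1, len(dimensions) + 1)
))
for c in crossings:
    print(c)   # e.g. ('time',), ('time', 'product'), ...
```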

Relevance: 100.00%

Abstract:

We present a methodology for reducing a straight line fitting regression problem to a Least Squares minimization one. This is accomplished through the definition of a measure on the data space that takes into account directional dependences of errors, and the use of polar descriptors for straight lines. This strategy improves the robustness by avoiding singularities and non-describable lines. The methodology is powerful enough to deal with non-normal bivariate heteroscedastic data error models, but can also supersede classical regression methods by making some particular assumptions. An implementation of the methodology for the normal bivariate case is developed and evaluated.
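
For the normal bivariate case with isotropic errors, the polar (Hessian normal form) least-squares fit has a closed form: the line's normal is the eigenvector of the data covariance with the smallest eigenvalue. A minimal sketch under that simplifying assumption (the paper's measure additionally encodes directional error dependences):

```python
import numpy as np

def fit_line_polar(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Fit x*cos(theta) + y*sin(theta) = rho by orthogonal least squares."""
    pts = np.column_stack([x, y])
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]          # eigenvector of the smallest eigenvalue
    theta = np.arctan2(normal[1], normal[0])
    rho = float(normal @ mean)
    if rho < 0:                     # keep rho >= 0 by flipping the normal
        rho, theta = -rho, theta + np.pi
    return rho, theta

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.default_rng(0).normal(0, 0.1, 50)
rho, theta = fit_line_polar(x, y)
print(f"rho = {rho:.3f}, theta = {theta:.3f} rad")  # handles vertical lines too
```

Note how the polar descriptors avoid the singularity of the slope-intercept form: a vertical line is simply theta = 0 with rho equal to its distance from the origin.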

Relevance: 100.00%

Abstract:

An important competence of human data analysts is to interpret and explain the meaning of the results of data analysis to end-users. However, existing automatic solutions for intelligent data analysis provide limited help in interpreting and communicating information to non-expert users. In this paper we present a general approach to generating explanatory descriptions of the meaning of quantitative sensor data. We propose a type of web application: a virtual newspaper with automatically generated news stories that describe the meaning of sensor data. This solution integrates a variety of techniques from intelligent data analysis into a web-based multimedia presentation system. We validated our approach on a real-world problem and demonstrate its generality using data sets from several domains. Our experience shows that this solution can facilitate the use of sensor data by general users and can therefore increase the utility of sensor network infrastructures.
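
At its simplest, turning quantitative sensor data into an explanatory description means mapping computed features to text templates. A toy sketch with an invented sensor and readings (the paper's system composes full multimedia news stories; this only shows the flavour):

```python
import statistics

def describe(sensor: str, unit: str, readings: list[float]) -> str:
    """Turn raw readings into a short explanatory sentence."""
    mean = statistics.mean(readings)
    trend = "rising" if readings[-1] > readings[0] else "falling or stable"
    return (f"The {sensor} averaged {mean:.1f} {unit} over the period, "
            f"with a {trend} trend (last reading: {readings[-1]:.1f} {unit}).")

print(describe("river level", "m", [2.1, 2.3, 2.6, 2.9]))
```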

Relevance: 100.00%

Abstract:

Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays, and the results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing-value imputation, probe quality, data smoothing and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes; two of them are directly involved in cancer progression, and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values between 0.955 and 0.997).
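
The final step, inducing a supervised classifier on the selected probes and scoring it by AUC, can be sketched with scikit-learn on synthetic expression data. Only the 31/33 sample split is taken from the abstract; everything else below is invented, and the study's actual pipeline (Bayesian network induction, bootstrap resampling) is far richer:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Synthetic "expression" matrix: 64 samples x 10 selected probes,
# with a shift in the first two probes for tumoral samples.
y = np.array([0] * 33 + [1] * 31)   # 33 non-tumoral, 31 tumoral samples
X = rng.normal(size=(64, 10))
X[y == 1, :2] += 1.5

scores = cross_val_score(GaussianNB(), X, y, cv=5, scoring="roc_auc")
print("AUC per fold:", np.round(scores, 3), "mean:", scores.mean().round(3))
```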

Relevance: 100.00%

Abstract:

The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of each mode. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contributions. The order obtained with this method is compared to those obtained with other well-known methods, such as the Akaike criterion for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Finally, both simulated and measured vibration data are used to show the practicability of the derived technique. It is important to remark that the method can be used with any identification method working in the state space form.
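
The singular-value criterion mentioned at the end can be illustrated on a block Hankel matrix of output data: in Stochastic Subspace Identification, the number of "large" singular values suggests the state space model order. A minimal sketch on a simulated single-channel signal (the real method works on the weighted projection matrix of multi-channel data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
# Simulated vibration: two decaying modes plus measurement noise.
y = (np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.5 * t)
     + 0.5 * np.exp(-0.08 * t) * np.sin(2 * np.pi * 4.0 * t)
     + 0.05 * rng.normal(size=t.size))

rows = 40
hankel = np.array([y[i:i + len(y) - rows] for i in range(rows)])
s = np.linalg.svd(hankel, compute_uv=False)
print("leading singular values:", np.round(s[:8], 2))
# A clear gap after the 4th value (two modes, two states each)
# would suggest a state space model of order 4.
```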