30 results for Dynamic data analysis

at Universidad Politécnica de Madrid


Relevance: 100.00%

Publisher:

Abstract:

Tolls have increasingly become a common mechanism to fund road projects in recent decades. Improving knowledge of demand behavior therefore constitutes a key aspect for stakeholders dealing with the management of toll roads. However, the literature on demand elasticity estimates for interurban toll roads is still limited, due to their relatively scarce number in the international context. Furthermore, existing research has left some aspects open to investigation, among them the choice of GDP as the most common socioeconomic variable to explain traffic growth over time. This paper intends to determine the variables that best explain the evolution of light vehicle demand on toll roads over the years. To that end, we establish a dynamic panel data methodology aimed at identifying the key socioeconomic variables explaining changes in light vehicle demand over time. The results show that, despite some usefulness, GDP does not constitute the most appropriate explanatory variable, while other parameters such as employment or GDP per capita lead to more stable and consistent results. The methodology is applied to Spanish toll roads for the 1990-2011 period, which constitutes a particularly interesting case of variation in toll road use, as road demand has decreased significantly since the beginning of the economic crisis in 2008.
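
The paper's dynamic panel data model is not reproduced in this abstract; as a minimal sketch of the underlying idea, a constant elasticity of demand with respect to a socioeconomic driver can be estimated as the slope of a log-log regression. All names and figures below are illustrative assumptions, not data from the paper:

```python
import math

def elasticity(traffic, driver):
    """Constant demand elasticity as the OLS slope of the log-log model
    log(traffic) = a + e * log(driver)."""
    xs = [math.log(d) for d in driver]
    ys = [math.log(t) for t in traffic]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic check: traffic built with a known elasticity of 0.8 w.r.t. employment
employment = [100.0, 105.0, 110.0, 108.0, 95.0, 90.0]
traffic = [2.0 * e ** 0.8 for e in employment]
print(round(elasticity(traffic, employment), 3))  # 0.8
```

The paper's actual model also includes lagged demand and the panel structure across roads; the sketch above only shows how a single elasticity falls out of a log-log fit.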

Relevance: 100.00%

Publisher:

Abstract:

In this work, we propose Seasonal Dynamic Factor Analysis (SeaDFA), an extension of Nonstationary Dynamic Factor Analysis through which one can deal with dimensionality reduction in vectors of time series in such a way that both common and specific components are extracted. Furthermore, the common factors are able to capture not only regular dynamics (stationary or not) but also seasonal ones, by following a multiplicative seasonal VARIMA(p, d, q) × (P, D, Q)s model. Additionally, a bootstrap procedure that does not need a backward representation of the model is proposed, making it possible to draw inference for all the parameters in the model. A bootstrap scheme developed for forecasting includes uncertainty due to parameter estimation, allowing enhanced coverage of forecasting intervals. A challenging application is provided: the new model and bootstrap scheme are applied to an innovative subject in electricity markets, the computation of long-term point forecasts and prediction intervals of electricity prices. Several appendices with technical details, an illustrative example, and an additional table are available online as Supplementary Materials.
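
The full SeaDFA bootstrap also propagates parameter-estimation uncertainty; the simpler residual-resampling idea behind bootstrap prediction intervals can be sketched as follows, assuming a naive random-walk forecaster purely for illustration (the price history and all names are invented):

```python
import random

def bootstrap_interval(history, point_forecast, n_boot=2000, level=0.95, seed=0):
    """Percentile prediction interval around a point forecast, built by
    resampling the one-step residuals of a naive random-walk forecaster."""
    rng = random.Random(seed)
    residuals = [history[i] - history[i - 1] for i in range(1, len(history))]
    sims = sorted(point_forecast + rng.choice(residuals) for _ in range(n_boot))
    lo = sims[int((1 - level) / 2 * n_boot)]
    hi = sims[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

prices = [50, 52, 51, 55, 54, 56, 58, 57, 60, 59]  # toy price history
lo, hi = bootstrap_interval(prices, point_forecast=61.0)
print(lo <= 61.0 <= hi)  # True: the interval brackets the point forecast
```

SeaDFA's scheme additionally re-estimates the model on each bootstrap replicate, which is what widens the intervals to cover parameter uncertainty.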

Relevance: 100.00%

Publisher:

Abstract:

Strong motion records obtained in instrumented short-span bridges show the importance of the abutments in the dynamic response of the structure. Existing models study the influence of the pier foundation but not the performance of the abutments. This work proposes two- and three-dimensional boundary element models in the frequency domain and studies the dimensionless dynamic stiffness of standard bridge abutments.
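
The boundary element formulation is beyond a short sketch, but the quantity being studied can be illustrated on the simplest possible case: the frequency-dependent dynamic stiffness of a single-degree-of-freedom system, made dimensionless by the static stiffness. This is a stand-in for illustration only, not the paper's model:

```python
def dynamic_stiffness(k, m, c, omega):
    """Dimensionless complex dynamic stiffness K(w)/k of a single-DOF
    system, with K(w) = k - m*w^2 + i*c*w (illustrative only; the paper
    computes abutment stiffness with boundary element models)."""
    return (k - m * omega ** 2 + 1j * c * omega) / k

# At zero frequency the dimensionless stiffness equals the static value, 1
print(dynamic_stiffness(100.0, 2.0, 1.0, 0.0))  # (1+0j)
```

At the undamped natural frequency the real part vanishes and only the damping term remains, which is why such stiffness functions are reported as complex-valued curves over frequency.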

Relevance: 100.00%

Publisher:

Abstract:

In recent years significant efforts have been devoted to the development of advanced data analysis tools, both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goals of advancing the understanding of the physics of these events and of preparing for ITER. On JET, the latest generation of the disruption predictor, called APODIS, has been deployed in the real-time network during the last campaigns with the new metallic wall. Even though it was trained only on discharges with the carbon wall, it has achieved very good performance, with both missed alarms and false alarms on the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for optimising the mitigation measures, a new clustering method, based on the geodesic distance on a probabilistic manifold, has been developed. This technique allows automatic classification of an incoming disruption with a success rate better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organised Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.
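
The geodesic distance on a probabilistic manifold used by the classifier is not given in the abstract; as a stand-in, the closed-form Hellinger distance between one-dimensional Gaussians shows how a probabilistic dissimilarity can drive nearest-class assignment of an incoming event. The class names and parameters below are invented for illustration:

```python
import math

def hellinger_gauss(mu1, s1, mu2, s2):
    """Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2) (closed form)."""
    var_sum = s1 ** 2 + s2 ** 2
    bc = math.sqrt(2 * s1 * s2 / var_sum) * math.exp(-((mu1 - mu2) ** 2) / (4 * var_sum))
    return math.sqrt(max(0.0, 1.0 - bc))

def classify(sample_mu, sample_s, classes):
    """Assign an incoming (mean, std) signature to the nearest class."""
    return min(classes, key=lambda c: hellinger_gauss(sample_mu, sample_s, *classes[c]))

# Hypothetical disruption-class signatures of some precursor signal
classes = {"density-limit": (1.0, 0.2), "vertical-instability": (3.0, 0.5)}
print(classify(1.1, 0.25, classes))  # density-limit
```

The actual JET work uses a geodesic distance on the statistical manifold of the signal distributions, but the nearest-class decision rule has the same shape as above.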

Relevance: 100.00%

Publisher:

Abstract:

The phenomenon of self-induced vibrations of prismatic beams in a cross-flow has been studied for decades, but it is still of great interest due to its important effects in many different industrial applications. This paper presents an experimental study of a prismatic beam with an H-section. The aim of this analysis is to add some insight into the behaviour of the flow around this type of body, in order to reduce galloping and even to avoid it. The influence of some relevant geometrical parameters that define the H-section on the translational galloping behaviour of these beams has been analysed. Wind load coefficients have been measured through static wind tunnel tests, and the Den Hartog criterion has been applied to elucidate the influence of the geometrical parameters on the galloping properties of the bodies under consideration. These results have been completed with surface pressure distribution measurements and, in addition, dynamic tests have also been performed to verify the static criterion. Finally, the morphology of the flow past the tested bodies has been visualised using smoke visualization techniques. Since the rectangular section beam is a limiting case of the H-section configuration, the results obtained here are compared with those published in the literature concerning rectangular configurations; the agreement is satisfactory.
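
The Den Hartog criterion applied in the static tests states that transverse galloping is possible only when the slope of the lift coefficient plus the drag coefficient is negative. A minimal check, with hypothetical coefficient values:

```python
def den_hartog_unstable(dcl_dalpha, cd):
    """Glauert-Den Hartog criterion: transverse galloping is possible
    when d(Cl)/d(alpha) + Cd < 0 at the equilibrium angle of attack."""
    return dcl_dalpha + cd < 0

# Hypothetical static coefficients for two H-section geometries
print(den_hartog_unstable(-1.8, 1.2))  # True  -> prone to galloping
print(den_hartog_unstable(-0.5, 1.2))  # False -> stable by this criterion
```

This is why the static wind tunnel measurements of Cl(alpha) and Cd suffice for a first stability assessment, which the paper then verifies with dynamic tests.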

Relevance: 100.00%

Publisher:

Abstract:

Contents:
- Center for Open Middleware
- POSDATA project
- User modeling
- Some early results
- @posdata service

Relevance: 100.00%

Publisher:

Abstract:

In recent years, society has been undergoing a series of changes. One of these changes is datafication. This term can be defined as the systematic transformation of aspects of people's everyday lives into computer-processed data. Every day, every minute, and every second, whenever someone uses a digital device, data are being stored somewhere. It may be the content of an e-mail, but it may also be the number of steps that person has walked, or their medical history. Merely storing data provides no added value by itself. Extracting knowledge from the data, and thereby giving them value, requires data analysis. Data science, together with data analysis, is becoming increasingly popular. Today, millions of statistical web APIs can be found; these APIs offer the possibility of analysing trends or sentiments present in social networks or on the internet in general. One of the most popular social networks, Twitter, is public: every message, or tweet, published can be seen by anyone in the world with an internet connection. This makes Twitter an interesting medium for analysing social habits or consumption profiles. This project falls within that context. Combining statistical data analysis and content analysis, this work tries to extract knowledge from public Twitter tweets. In particular, it attempts to establish whether gender is an influential factor in the relationships between Twitter users. To that end, a database containing almost 2,000 tweets is analysed. First, the gender of the users is determined by means of web APIs. Second, hypothesis testing is used to determine whether gender influences how users relate to other users.
Finally, a statistical model is built to predict the behaviour of Twitter users with respect to their gender.
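
The abstract does not name the specific hypothesis test; a two-proportion z-test is one common choice for asking whether two groups behave at different rates. The counts below are invented for illustration:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-statistic for H0: the two groups have equal rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: replies directed at same-gender users, per gender group
z = two_proportion_z(620, 1000, 540, 1000)
print(abs(z) > 1.96)  # True -> reject equal rates at the 5% level
```

With real data the group sizes would come from the gender-labelling step, and a chi-square test of independence would be an equivalent alternative for the same question.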

Relevance: 100.00%

Publisher:

Abstract:

In this work, a methodology is proposed to find the dynamic poles of a capacitive pressure transmitter, in order to enhance and extend the online surveillance of this type of sensor based on response time measurement, by applying noise analysis techniques and the dynamic data system procedure. Several measurements taken from a pressurized water reactor have been analyzed. The methodology proposes an autoregressive fit whose order is determined by the sensor's dynamic poles. However, the analyzed signals could not be filtered well enough to remove the plant noise, so the noise was modeled as an additional pair of complex conjugate poles. With this methodology we obtained the numerical value of the sensor's second real pole, despite its low influence on the sensor's dynamic response. This enables more accurate online sensor surveillance, since previous methods considered only one real pole.
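
As a minimal sketch of recovering dynamic poles from noise signals, the following fits an AR(2) model by the Yule-Walker equations and returns the roots of its characteristic polynomial; the actual methodology uses a higher-order fit that also accounts for the plant-noise pole pair, and the synthetic signal here is an assumption:

```python
import cmath
import random

def autocov(x, lag):
    """Biased sample autocovariance of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i - lag] - m) for i in range(lag, n)) / n

def ar2_poles(x):
    """Fit an AR(2) model by the Yule-Walker equations and return the
    discrete-time poles: the roots of z^2 - a1*z - a2 = 0."""
    r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r2 * r1) / det
    a2 = (r0 * r2 - r1 * r1) / det
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    return (a1 + disc) / 2, (a1 - disc) / 2

# Synthetic second-order, sensor-like signal driven by white noise
rng = random.Random(1)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] - 0.3 * x[-2] + rng.gauss(0.0, 1.0))
p1, p2 = ar2_poles(x)
print(all(abs(p) < 1 for p in (p1, p2)))  # True: a stable (physical) fit
```

The discrete-time poles map to continuous-time poles (and hence to a response time) through the sampling period, which is the step the surveillance method exploits.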

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we address the problem of dynamic pricing to optimize the revenue from the sales of a limited inventory over a finite time horizon. The demand is assumed to be unknown a priori, so the seller must learn on the fly. We first deal with the simplest case, involving a single class of product for sale, and then consider the general situation with a finite number of product classes. A case in point is the sale of tickets for cultural and leisure events: tickets are typically sold months before the event, so uncertainty over actual demand levels is a very common occurrence. We propose a heuristic strategy of adaptive dynamic pricing, based on experience gained from the past, taking into account, for each time period, the available inventory, the time remaining to reach the horizon, and the profit made in previous periods. In the computational simulations performed, the demand is updated dynamically based on the prices being offered, as well as on the remaining time and inventory. The simulations show a significant profit over the fixed-price strategy, confirming the practical usefulness of the proposed strategy. We develop a tool that allows us to test different dynamic pricing strategies designed to fit market conditions and the seller's objectives, which will facilitate data analysis and decision-making when facing the dynamic pricing problem.
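
The proposed heuristic itself is not specified in the abstract; the kind of simulation described, comparing a fixed price against a rule that reacts to remaining inventory and time, can be sketched as follows. The demand curve and every parameter are toy assumptions:

```python
import random

def simulate(price_rule, inventory=100, periods=20, seed=0):
    """Sell a fixed inventory over a finite horizon; demand each period is
    random and decreases with price (a toy linear demand curve)."""
    rng = random.Random(seed)
    revenue = 0.0
    for t in range(periods):
        if inventory <= 0:
            break
        price = price_rule(inventory, periods - t)
        demand = max(0, int(rng.gauss(12.0 - 0.8 * price, 2.0)))
        sold = min(demand, inventory)
        inventory -= sold
        revenue += sold * price
    return revenue

def fixed_price(inventory, periods_left):
    return 8.0

def adaptive_price(inventory, periods_left):
    # raise the price when stock is scarce for the time left, lower it when ample
    pace = inventory / periods_left
    return max(1.0, 8.0 + 0.5 * (5.0 - pace))

print(simulate(fixed_price) > 0 and simulate(adaptive_price) > 0)  # True
```

Averaging each strategy's revenue over many seeds is the kind of comparison the paper's simulations report; the sketch above only sets up the mechanics.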

Relevance: 100.00%

Publisher:

Abstract:

The main objective of this paper is the development and application of multivariate time series models for forecasting aggregated wind power production in a country or region. Nowadays, in Spain, Denmark, or Germany there is an increasing penetration of this kind of renewable energy, partly to reduce energy dependence on the exterior, but always linked with the increase and uncertainty in the prices of fossil fuels. The availability of accurate predictions of wind power generation is crucial both for the System Operator and for all the agents in the market. However, the vast majority of works rarely consider forecasting horizons longer than 48 hours, although such horizons are of interest for system planning and operation. In this paper we use Dynamic Factor Analysis, adapting and modifying it conveniently, to reach our aim: the computation of accurate forecasts of the aggregated wind power production in a country for a forecasting horizon as long as possible, in particular up to 60 days (2 months). We illustrate this methodology and the results obtained on real data from the leading country in wind power production: Denmark.
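
Dynamic Factor Analysis extracts a few common factors from a vector of series. As a static, simplified illustration (the paper's method additionally models the factors' dynamics), the first principal component of several centred series can be computed by power iteration; the toy series are assumptions:

```python
def common_factor(series, iters=200):
    """First principal component of several centred series via power
    iteration on their sample covariance matrix."""
    n = len(series[0])
    centred = [[v - sum(s) / n for v in s] for s in series]
    k = len(centred)
    cov = [[sum(a * b for a, b in zip(centred[i], centred[j])) / n
            for j in range(k)] for i in range(k)]
    w = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(k)) for i in range(k)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]
    factor = [sum(w[i] * centred[i][t] for i in range(k)) for t in range(n)]
    return w, factor

# Three toy series sharing one upward-trending common component
series = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [1.5, 3, 4.5, 6, 7.5]]
w, f = common_factor(series)
print(f[-1] > f[0])  # True: the extracted factor tracks the shared trend
```

In proper DFA the factor itself follows a time series model, which is what makes multi-week forecasts of the aggregate possible.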

Relevance: 100.00%

Publisher:

Abstract:

This thesis is the result of a project whose objective has been to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To do so, a visualisation server has been developed to present the data obtained from Twitter and analysed with Senpy. This visualisation server has been built with Polymer web components and D3.js. Data mining has been done with a pipeline between Twitter, Senpy, and ElasticSearch. Luigi has been used in this process because it helps build complex pipelines of batch jobs; it has analysed all tweets and stored them in ElasticSearch. Next, D3.js has been used to create interactive widgets that make the data easily accessible; these widgets allow users to interact with them and filter the data most interesting to them. Polymer web components have been used to build the dashboard according to Google's Material Design and to show dynamic data in the widgets. As a result, this project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.

Relevance: 90.00%

Publisher:

Abstract:

Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.

Relevance: 90.00%

Publisher:

Abstract:

An important competence of human data analysts is to interpret and explain the meaning of the results of data analysis to end-users. However, existing automatic solutions for intelligent data analysis provide limited help in interpreting and communicating information to non-expert users. In this paper we present a general approach to generating explanatory descriptions of the meaning of quantitative sensor data. We propose a type of web application: a virtual newspaper with automatically generated news stories that describe the meaning of sensor data. This solution integrates a variety of techniques from intelligent data analysis into a web-based multimedia presentation system. We validated our approach on a real-world problem and demonstrated its generality using data sets from several domains. Our experience shows that this solution can facilitate the use of sensor data by general users and, therefore, can increase the utility of sensor network infrastructures.
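
A minimal illustration of the data-to-text idea, rendering a one-line news item from raw readings with a fixed template; the actual system integrates much richer analysis and multimedia presentation, and the sensor name and readings below are invented:

```python
def describe(sensor, readings, unit):
    """Render a one-line news item from raw sensor readings using a
    fixed template (a minimal data-to-text generator)."""
    lo, hi = min(readings), max(readings)
    avg = sum(readings) / len(readings)
    trend = "rising" if readings[-1] > readings[0] else "falling or flat"
    return (f"{sensor}: average {avg:.2f} {unit} "
            f"(range {lo:.1f}-{hi:.1f} {unit}), trend {trend}.")

out = describe("River level", [2.1, 2.3, 2.8, 3.0], "m")
print(out)
```

Template selection driven by the analysis results (trend, anomaly, threshold crossing) is what turns such one-liners into the "news stories" the paper describes.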

Relevance: 90.00%

Publisher:

Abstract:

Laminated glass is composed of two glass layers and a thin intermediate PVB layer, whose viscoelastic behaviour strongly influences the dynamic response. While natural frequencies are relatively easily identified even with simplified FE models, damping ratios are not identified with such ease. In order to determine to what extent external factors influence damping identification, different tests have been carried out. The external factors considered, apart from temperature, are the accelerometers, their connection cables, and the effect of the glass layers. To analyse the influence of the accelerometers and their connection cables, a laser measuring device was employed considering three configurations: sample without instrumentation, sample with the accelerometers fixed, and sample completely instrumented. When the sample is completely instrumented, accelerometer readings are also analysed. To take into consideration the effect of the glass layers, tests were carried out both on laminated glass and on monolithic samples. This paper presents an in-depth data analysis of the different configurations and establishes criteria for data acquisition when testing laminated glass.
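
One standard way to identify a damping ratio from a free-decay record, offered here only as background and not necessarily the identification method used in these tests, is the logarithmic decrement computed from two successive peaks:

```python
import math

def damping_ratio(peak1, peak2):
    """Damping ratio from two successive free-decay peaks via the
    logarithmic decrement: delta = ln(x1/x2), zeta = delta / sqrt(4*pi^2 + delta^2)."""
    delta = math.log(peak1 / peak2)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Hypothetical accelerometer peaks from a free-vibration record
print(round(damping_ratio(1.00, 0.85), 4))  # 0.0259
```

Because the estimate depends directly on measured peak amplitudes, any mass or stiffness added by accelerometers and cables feeds straight into the identified ratio, which is the sensitivity the paper's test configurations are designed to quantify.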

Relevance: 90.00%

Publisher:

Abstract:

Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers).
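
Task granularity control, one of the techniques listed above, amounts to running a task in parallel only when its estimated cost justifies the spawning overhead. A minimal thread-based sketch (real systems rely on compile-time cost analysis and work-stealing schedulers; the threshold and names here are illustrative):

```python
import threading

def maybe_parallel(task, cost_estimate, threshold=1000):
    """Run `task` in a new thread only if its estimated cost exceeds
    `threshold`; cheap tasks run sequentially to avoid spawning overhead."""
    if cost_estimate > threshold:
        worker = threading.Thread(target=task)
        worker.start()
        return worker  # caller may join() later
    task()
    return None

results = []
maybe_parallel(lambda: results.append("cheap"), cost_estimate=10)
handle = maybe_parallel(lambda: results.append("costly"), cost_estimate=5000)
handle.join()
print(sorted(results))  # ['cheap', 'costly']
```

In the compilers surveyed, the cost estimate comes from static cost analysis of each goal, so the threshold test can be compiled into the parallelized program itself.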