939 results for data-driven simulation


Relevance: 80.00%

Abstract:

Modern IDEs provide limited support for developers starting a new data-driven mobile app. Developers are currently required to write copious amounts of boilerplate code and scripts, organise complex directory structures, and only then author the actual functionality. Although this scenario is ripe for automation, current tools have yet to address it adequately. In this paper we present RAPPT, a tool that generates the scaffolding of a mobile app from a high-level description specified in a Domain Specific Language (DSL). We demonstrate the feasibility of our approach through an example case study and feedback from a professional development team. Demo at: https://www.youtube.com/watch?v=ffquVgBYpLM.
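As a purely illustrative Python sketch of the generate-from-description idea (the spec format below is hypothetical and is not RAPPT's actual DSL, whose syntax the abstract does not give):

    # Illustrative only: a minimal scaffold generator driven by a declarative
    # app description. NOT RAPPT's DSL; it merely shows generation from a spec.
    from pathlib import Path

    app_spec = {
        "name": "NewsReader",                # hypothetical example app
        "entities": ["Article", "Author"],   # data model to scaffold
        "screens": ["list", "detail"],       # UI skeletons to emit
    }

    def generate_scaffold(spec, root="."):
        base = Path(root) / spec["name"]
        for entity in spec["entities"]:
            model = base / "models" / f"{entity.lower()}.py"
            model.parent.mkdir(parents=True, exist_ok=True)
            model.write_text(f"class {entity}:\n    pass  # TODO: fields\n")
        for screen in spec["screens"]:
            view = base / "screens" / f"{screen}_screen.py"
            view.parent.mkdir(parents=True, exist_ok=True)
            view.write_text(f"# Boilerplate for the {screen} screen\n")

    generate_scaffold(app_spec)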

Relevance: 80.00%

Abstract:

Reliable flood forecasting is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed, or lumped) have been proposed to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty in the correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed, uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented to estimate the outflow hydrograph. Then, synthetic uncertain discharge observations were generated as a function of the observed and simulated flow values at the basin outlet, and assimilated into the semi-distributed models using a Kalman Filter. Finally, different spatial patterns of sensor locations were assumed for updating the model state in response to the uncertain discharge observations. The results of this work pointed out that, overall, the assimilation of uncertain observations can improve hydrologic model performance. In particular, it was found that the model structure is an important factor, difficult to characterize, since it can induce different forecasts in terms of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
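For reference, a minimal sketch of the standard linear Kalman filter update used to assimilate one uncertain discharge observation into a model state; all matrices and values below are illustrative and do not reproduce the paper's Bacchiglione setup:

    # Minimal linear Kalman filter update for a single uncertain discharge
    # observation. Values are illustrative, not the paper's configuration.
    import numpy as np

    x = np.array([12.0, 3.5])     # model state (e.g. storages per sub-basin)
    P = np.diag([4.0, 1.0])       # state error covariance
    H = np.array([[0.8, 0.2]])    # maps state to discharge at the outlet
    R = np.array([[2.5]])         # observation error (the "uncertain" part)
    z = np.array([11.2])          # synthetic discharge observation

    # Kalman gain: K = P H^T (H P H^T + R)^-1
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)

    # Updated state and covariance
    x_upd = x + (K @ (z - H @ x)).ravel()
    P_upd = (np.eye(2) - K @ H) @ P

    print(x_upd, np.diag(P_upd))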

Relevance: 80.00%

Abstract:

As water supply networks (WSNs) age over their service life, the deterioration of pipe networks has become exceedingly serious. Because an urban water supply network is a hidden underground asset, it is difficult for monitoring staff to directly classify pipe-network faults, even with modern detection technology. In this paper, based on the basic property data of the water supply network (e.g. diameter, material, pressure, distance to pump, distance to tank, load, etc.), the C4.5 decision tree algorithm was applied to classify the specific condition of water supply pipelines. Part of the historical data was used to build a decision tree classification model, and the remaining historical data was used to validate the established model. Statistical methods were used to assess the decision tree model, including basic statistics, Receiver Operating Characteristic (ROC) curves, and Recall-Precision Curves (RPC). These methods were successfully used to assess the accuracy of the established classification model of the water pipe network. The purpose of the classification model was to classify the specific condition of the water pipe network, so that pipelines can be maintained according to the classification results: asset unserviceable (AU), near perfect condition (NPC), and serious deterioration (SD). This research thus focused on pipe classification, which will play a significant role in maintaining water supply networks in the future.
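A minimal sketch of the train/validate workflow described above, using scikit-learn. Note that sklearn's DecisionTreeClassifier implements CART rather than C4.5, so this is an approximation; the feature values and labels are synthetic, and per-class precision/recall stands in for the full ROC/RPC analysis:

    # Sketch of the pipe-condition classification workflow; data are synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    n = 300
    X = np.column_stack([
        rng.uniform(50, 600, n),    # diameter (mm)
        rng.uniform(1, 10, n),      # pressure (bar)
        rng.uniform(0, 5000, n),    # distance to pump (m)
        rng.uniform(0, 60, n),      # pipe age (years)
    ])
    # Classes: AU = asset unserviceable, NPC = near perfect, SD = serious deterioration
    y = rng.choice(["AU", "NPC", "SD"], size=n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))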

Relevance: 80.00%

Abstract:

An underwater gas pipeline is the portion of a pipeline that crosses a river beneath its bed. Underwater gas pipelines are subject to increasing dangers as time goes by, and an accident at an underwater gas pipeline can lead to a technological and environmental disaster on the scale of an entire region. Therefore, timely troubleshooting of all underwater gas pipelines, in order to prevent potential accidents, remains a pressing task for the industry. The most important aspect of resolving this challenge is the quality of the automated system in question, and the industry currently lacks an automated system that fully meets the needs of the experts who maintain underwater gas pipelines. Principal aim of this research: this work aims to develop a new automated monitoring system that simplifies the evaluation of technical condition and the decision making on planning, preventive maintenance, and repair work on underwater gas pipelines. Objectives: creation of a shared model for the new automated system via IDEF3; development of a new database system to store all information about underwater gas pipelines; development of a new application that works with database servers and explains the results obtained from the server; calculation of MTBF values for specified pipelines based on quantitative data obtained from tests of the system. Conclusion: the new automated system, PodvodGazExpert, has been developed for the timely and qualitative determination of the physical condition of underwater gas pipelines; the mathematical analysis at the core of the system uses the principal component analysis method; determining the physical condition of an underwater gas pipeline with the new system increases MTBF by a factor of 8.18 over the existing system used in the industry today.
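The abstract names principal component analysis as the system's mathematical core; a minimal sketch of PCA-based condition scoring on inspection records follows. The data are synthetic and this is not the PodvodGazExpert implementation:

    # PCA-based anomaly scoring sketch: segments with a large reconstruction
    # residual deviate from the dominant pattern and merit inspection.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    records = rng.normal(size=(200, 6))   # 6 inspection measurements per segment

    pca = PCA(n_components=2).fit(records)
    scores = pca.transform(records)       # low-dimensional condition summary
    recon = pca.inverse_transform(scores)
    residual = np.linalg.norm(records - recon, axis=1)

    print("segments flagged:",
          np.sum(residual > residual.mean() + 2 * residual.std()))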

Relevance: 80.00%

Abstract:

When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization in a large class of water distribution networks. In general, such model-based techniques can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, a single-leak scenario with a certain leak size pattern is usually assumed, which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the above-mentioned direct modeling approach, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the sizes of the leaks. It is an inverse/direct modeling method that tries to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and leak sizes; on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method in a DMA (district metered area) of the Barcelona water distribution network is presented and discussed. The results show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following an easy procedure.
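A sketch of the inverse/direct idea: a genetic algorithm proposes candidate (node, leak size) pairs and a hydraulic model scores them against sensor readings. The "hydraulic model" below is a stub standing in for a calibrated network simulator, and all numbers are illustrative:

    # GA loop for leak candidate search; simulate_residuals is a stub for a
    # real hydraulic simulation of a leak of `size` at `node`.
    import random

    N_NODES, POP, GENS = 50, 40, 30
    observed = [0.0] * 8                 # placeholder sensor residuals

    def simulate_residuals(node, size):
        return [abs(node - 25) * size * 0.01] * 8

    def fitness(ind):
        node, size = ind
        sim = simulate_residuals(node, size)
        return -sum((s - o) ** 2 for s, o in zip(sim, observed))

    pop = [(random.randrange(N_NODES), random.uniform(0.1, 5.0)) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]
        children = [(random.choice(parents)[0],                              # crossover
                     max(0.1, random.choice(parents)[1] + random.gauss(0, 0.2)))  # mutation
                    for _ in range(POP - len(parents))]
        pop = parents + children

    print("best candidate (node, leak size):", max(pop, key=fitness))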

Relevance: 80.00%

Abstract:

Distributed energy and water balance models require time series of surfaces of the meteorological variables involved in hydrological processes. Most GIS-based hydrological models apply simple interpolation techniques to extend the point-scale values recorded at weather stations to the watershed scale. In mountainous areas, where the monitoring network covers the complex terrain heterogeneity only poorly, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become an independent software package. The interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected, taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007) and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware, available upon registration. Input data include weather station records and topographic data, and the output consists of tables and maps of the meteorological variables at hourly, daily, predefined rainfall-event, or annual scales. It offers its own pre- and post-processing tools, including video output, map printing, and the possibility of exporting the maps to image or ASCII ArcGIS formats. This study presents the software's user-friendly interface and shows some case studies with applications to hydrological modeling.
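A minimal sketch of an elevation-corrected interpolation in the spirit of the height corrections cited above: stations are reduced to sea level with a constant lapse rate, interpolated by inverse-distance weighting, then mapped back to the terrain elevation. This is illustrative only, not the WiMMed/MeteoMap algorithm, and the constant lapse rate is an assumption:

    # Elevation-corrected temperature interpolation (illustrative).
    import numpy as np

    LAPSE = -0.0065  # K per metre, standard-atmosphere assumption

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # x, y (km)
    elev = np.array([200.0, 1400.0, 800.0])                      # station elevation (m)
    temp = np.array([18.0, 10.5, 14.2])                          # observed temperature (C)

    def interp_temperature(x, y, z):
        t_sl = temp - LAPSE * elev               # reduce observations to sea level
        d = np.hypot(stations[:, 0] - x, stations[:, 1] - y) + 1e-6
        w = 1.0 / d**2                           # inverse-distance weights
        t0 = np.sum(w * t_sl) / np.sum(w)
        return t0 + LAPSE * z                    # re-apply lapse rate at target elevation

    print(interp_temperature(5.0, 5.0, 1000.0))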

Relevance: 80.00%

Abstract:

Mineral exploration guides are normally based on conceptual deposit models. These guides typically draw on geologists' experience, on descriptive data, and on genetic data. Numerical modelling, both probabilistic and non-probabilistic, to estimate the occurrence of mineral deposits is a newer procedure whose use and acceptance in the geological community grows by the day. This thesis applies recent methodologies for generating mineral favorability maps. The so-called Rivera Crystalline Island, an erosional window of the Paraná Basin located in northern Uruguay, was chosen as the case study for applying the methodologies. The mineral favorability maps were built from the following types of data, information, and prospecting results: 1) orbital imagery; 2) geochemical prospecting; 3) airborne geophysical prospecting; 4) geo-structural mapping; and 5) altimetry. This information was selected and processed on the basis of a mineral deposit model (conceptual model) developed from the San Gregorio Gold Mine. The conceptual model (San Gregorio model) includes descriptive and genetic characteristics of the San Gregorio Mine, which encompasses the significant characteristic elements of the other known mineral occurrences in the Rivera Crystalline Island. Generating the mineral favorability maps involved building a database, processing the data, and integrating the data. The construction and processing stages comprised collecting, selecting, and treating the data to form so-called Information Layers. These Information Layers were generated and processed in organized groupings to form the Integration Factors for favorability mapping in the Rivera Crystalline Island. The data were integrated using two different methodologies: 1) Weights of Evidence (data-driven) and 2) Fuzzy Logic (knowledge-driven). The mineral favorability maps resulting from the two integration methodologies were first analyzed and interpreted individually, and then compared. Both methodologies succeeded in identifying the known mineralized areas, as well as other as-yet unworked areas, as areas of high favorability. The maps produced by the two methodologies coincided with respect to the areas of highest favorability. The Weights of Evidence methodology produced the more conservative favorability map in terms of areal extent, but the more optimistic one in terms of favorability values, compared with the maps produced by the Fuzzy Logic methodology. New targets for mineral exploration were identified and should be investigated in detail.
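To illustrate the data-driven side, a minimal weights-of-evidence calculation for one binary evidence layer (e.g. a geochemical anomaly) against known deposit cells; the cell counts are hypothetical, and a real study would derive them from the gridded prospecting data:

    # Weights of evidence: W+ = ln(P(B|D)/P(B|~D)), W- = ln(P(~B|D)/P(~B|~D)),
    # contrast C = W+ - W-. Counts below are synthetic.
    import math

    n_B_D   = 18   # evidence present, deposit present
    n_B_nD  = 120  # evidence present, no deposit
    n_nB_D  = 4    # evidence absent, deposit present
    n_nB_nD = 858  # evidence absent, no deposit

    D  = n_B_D + n_nB_D
    nD = n_B_nD + n_nB_nD

    w_plus  = math.log((n_B_D / D) / (n_B_nD / nD))    # W+: evidence present
    w_minus = math.log((n_nB_D / D) / (n_nB_nD / nD))  # W-: evidence absent
    contrast = w_plus - w_minus                        # strength of association

    print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, C = {contrast:.2f}")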

Relevance: 80.00%

Abstract:

This dissertation conducts a Monte Carlo experiment to reveal some characteristics of the finite-sample distributions of the Backfitting (B) and Marginal Integration (MI) estimators for a bivariate additive regression. We are particularly interested in providing evidence of how different methods for selecting the bandwidth hn, such as plug-in methods, affect the small-sample properties of the estimators. We are also interested in providing evidence on the behaviour of different estimators of hn relative to the optimal sequence of hn that minimizes a chosen loss function. The impact of ignoring dependence between the regressors when estimating the bandwidth is also investigated; this is a common practice and should affect the estimators' performance. Moreover, no routine is currently available in statistical/econometric packages for estimating additive regressions via the Backfitting and Marginal Integration methods, so one of our goals is to write Gauss routines that implement these estimators in practice. Finally, unlike current practice, in which the B and MI estimators are used in an entirely ad hoc fashion, we aim to give users information that allows a more objective choice of which estimator to use when working with a finite sample.
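A minimal backfitting loop for a bivariate additive model y = f1(x1) + f2(x2) + noise, using a Nadaraya-Watson smoother; the fixed bandwidth below is arbitrary, and bandwidth selection is exactly the issue the study investigates:

    # Backfitting sketch with a Gaussian-kernel Nadaraya-Watson smoother.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x1, x2 = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
    y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.3, n)

    def nw_smooth(x, r, h):
        # Nadaraya-Watson estimate of E[r | x] at each sample point
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return (K @ r) / K.sum(axis=1)

    h = 0.3                       # arbitrary fixed bandwidth
    f1, f2 = np.zeros(n), np.zeros(n)
    alpha = y.mean()
    for _ in range(20):           # backfitting: cycle over the components
        f1 = nw_smooth(x1, y - alpha - f2, h)
        f1 -= f1.mean()           # identifiability: centre each component
        f2 = nw_smooth(x2, y - alpha - f1, h)
        f2 -= f2.mean()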

Relevance: 80.00%

Abstract:

This research analyzes the effects of open government data on the administration and practice of the educational process by comparing the contexts of Brazil and England. The findings illustrate two principal dynamics: control and collaboration. Under control, or what is called the "data-driven" paradigm, data help advance the cause of political accountability through the disclosure of school performance. Under collaboration, or what is referred to as the "data-informed" paradigm, data are intended to support administrators' decision-making through dialogical processes with other social actors.

Relevance: 80.00%

Abstract:

The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of predictor variables used to estimate the SC weights. We show that this lack of specific guidance gives the researcher significant opportunities to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods, and that it decays slowly with the number of pre-intervention periods: with 230 pre-intervention periods, it is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that the problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). To mitigate this problem, we propose a criterion for selecting among different SC specifications based on the prediction error of each specification in placebo estimations.
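A sketch of the core SC weight estimation: find non-negative donor weights summing to one that best reproduce the treated unit's pre-treatment predictors. The data are synthetic; real applications differ precisely in which predictors enter X, which is the specification choice at issue:

    # Constrained least-squares estimation of synthetic control weights.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(5, 10))   # predictors: 5 variables x 10 donor units
    x1 = rng.normal(size=5)         # the same predictors for the treated unit

    def loss(w):
        return np.sum((x1 - X0 @ w) ** 2)

    n = X0.shape[1]
    res = minimize(
        loss,
        x0=np.full(n, 1.0 / n),
        bounds=[(0, 1)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
        method="SLSQP",
    )
    print("SC weights:", np.round(res.x, 3))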

Relevance: 80.00%

Abstract:

The use of composite materials has increased in recent decades, mainly in the aeronautics and automotive industries. In the present study, a computational simulation of the bending test is developed using the finite element method in the commercial software ANSYS. The simulation aims to analyze the mechanical behavior in bending of two polymer-matrix composites reinforced with carbon fibers. Three-point bending tests are also performed to obtain the strengths of the materials. Data from the simulation and the tests are used to compare two failure criteria, the Tsai-Wu and Hashin criteria. Copyright © 2009 SAE International.
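For illustration, a plane-stress Tsai-Wu failure check of the kind compared in the study; the strength values are generic carbon/epoxy magnitudes, not the paper's measured data:

    # Plane-stress Tsai-Wu failure index; >= 1 indicates predicted failure.
    import math

    # Strengths (MPa), all as positive magnitudes (illustrative values)
    Xt, Xc = 1500.0, 1200.0   # fibre-direction tension / compression
    Yt, Yc = 50.0, 200.0      # transverse tension / compression
    S = 70.0                  # in-plane shear

    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common approximation for the interaction term

    def tsai_wu_index(s1, s2, t12):
        return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                + F66*t12**2 + 2*F12*s1*s2)

    print(tsai_wu_index(800.0, 20.0, 30.0))  # ply stresses in MPa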

Relevance: 80.00%

Abstract:

Graduate Program in Cartographic Sciences - FCT

Relevance: 80.00%

Abstract:

Speech processing has become a technology increasingly based on the automatic modelling of vast amounts of data. The success of research in this area is therefore directly tied to the existence of public-domain corpora and other specific resources, such as a phonetic dictionary. In Brazil, unlike what happens for English, for example, there is currently no public-domain large-vocabulary Automatic Speech Recognition (ASR) system for Brazilian Portuguese. Against this backdrop, the main goal of this work is to discuss efforts within the FalaBrasil initiative [1], created by the Signal Processing Laboratory (LaPS) at UFPA, presenting research and software in the area of ASR for Brazilian Portuguese. More specifically, this work discusses the implementation of a large-vocabulary speech recognition system for Brazilian Portuguese using the HTK toolkit, based on hidden Markov models (HMMs), and the creation of a grapheme-to-phoneme conversion module using machine learning techniques.
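A sketch of a machine-learned grapheme-to-phoneme mapper, where each letter is classified from a window of surrounding letters. The training pairs and phone labels are toy examples, not the FalaBrasil lexicon, and a real G2P pipeline would first align graphemes with phonemes:

    # Toy windowed G2P classifier; labels and training words are hypothetical.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline

    def windows(word):
        padded = f"##{word}##"
        return [{"l2": padded[i-2], "l1": padded[i-1], "c": padded[i],
                 "r1": padded[i+1], "r2": padded[i+2]}
                for i in range(2, len(padded) - 2)]

    # Toy aligned training data: one phone per letter (hypothetical labels)
    train = [("casa", ["k", "a", "z", "a"]), ("gato", ["g", "a", "t", "u"])]
    X = [w for word, _ in train for w in windows(word)]
    y = [p for _, phones in train for p in phones]

    g2p = make_pipeline(DictVectorizer(), DecisionTreeClassifier()).fit(X, y)
    print(g2p.predict(windows("casa")))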

Relevance: 80.00%

Abstract:

The Common Reflection Surface (CRS) stacking method was originally introduced as a data-driven method for simulating zero-offset sections from 2-D prestack seismic reflection data acquired along a straight acquisition line. The method is based on a second-order hyperbolic traveltime approximation parameterized by three kinematic wavefield attributes. In land data, topographic effects play an important role in seismic processing and imaging, and this feature has recently been incorporated into the CRS method. In this work we review the CRS traveltime approximations that account for smooth and rugged topography. In addition, we also review the Multifocus traveltime approximation for the rugged-topography case. By means of a simple synthetic example, we finally provide the first comparisons between the different traveltime expressions.
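For reference, the classical second-order hyperbolic CRS traveltime for a planar measurement surface, which the topography-aware approximations reviewed here generalize. In the usual notation, t_0 is the zero-offset traveltime, beta the emergence angle, v_0 the near-surface velocity, x_m the midpoint, h the half-offset, and R_N, R_NIP the radii of curvature of the normal and NIP waves:

    t^2(x_m, h) = \left[ t_0 + \frac{2\sin\beta}{v_0}\,(x_m - x_0) \right]^2
                + \frac{2\, t_0 \cos^2\beta}{v_0}
                  \left[ \frac{(x_m - x_0)^2}{R_N} + \frac{h^2}{R_\mathrm{NIP}} \right]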

Relevance: 80.00%

Abstract:

Graduate Program in Mechanical Engineering - FEG