940 results for hybrid models
Abstract:
This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine-learning-based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to the new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
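The committee idea described above can be sketched as a dynamically weighted merge of the three models' quantile predictions. The inverse-error weighting scheme, the model names, and all numbers below are illustrative assumptions, not the paper's actual method:

```python
# Sketch of a dynamic committee merging quantile predictions from three
# uncertainty models (e.g. ANN, model tree, locally weighted regression).
# The weighting is a hypothetical choice: inverse of each model's recent
# absolute error, renormalised at every time step.

def committee_merge(predictions, recent_errors, eps=1e-9):
    """Merge per-model quantile predictions with inverse-error weights.

    predictions   -- dict model_name -> predicted quantile value
    recent_errors -- dict model_name -> recent mean absolute error
    """
    weights = {m: 1.0 / (recent_errors[m] + eps) for m in predictions}
    total = sum(weights.values())
    return sum(weights[m] * predictions[m] for m in predictions) / total

# Hypothetical 90% quantile predictions of streamflow (m^3/s):
preds = {"ann": 12.4, "model_tree": 11.8, "lwr": 13.1}
errs = {"ann": 0.8, "model_tree": 1.2, "lwr": 0.5}
merged = committee_merge(preds, errs)
```

A model that has recently tracked the observed streamflow closely thus dominates the merged quantile, which is one simple way to realise the "dynamic" merging the abstract refers to.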
Abstract:
The data of four networks that can be used in comparative studies of methods for transmission network expansion planning are given. These networks are of various types and different levels of complexity. The main mathematical formulations used in transmission expansion studies (transportation models, hybrid models, DC power flow models, and disjunctive models) are also summarised and compared. The main algorithm families are reviewed: analytical, combinatorial, and heuristic approaches. Optimal solutions are not yet known for some of the four networks when more accurate models (e.g., the DC model) are used to represent the power flow equations; the state of the art in this regard is also summarised. This should serve as a challenge to authors searching for new, more efficient methods.
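As a small illustration of the DC power flow formulation mentioned above: under the DC approximation, the flow on a line is proportional to the voltage angle difference across it, f_ij = (theta_i - theta_j) / x_ij. The three-bus network and all values below are invented for illustration only:

```python
# Minimal sketch of the DC power flow approximation: line flow equals the
# bus angle difference divided by the line reactance. Bus angles and line
# data are made-up example values, not taken from the four test networks.

def dc_line_flows(theta, lines):
    """Compute per-line flows (p.u.) from bus voltage angles (radians).

    lines -- list of (i, j, x) tuples: endpoint buses and line reactance
    """
    return [(i, j, (theta[i] - theta[j]) / x) for i, j, x in lines]

theta = {0: 0.0, 1: -0.05, 2: -0.10}            # bus angles, slack at bus 0
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]  # (from, to, reactance)
flows = dc_line_flows(theta, lines)
```

In expansion planning, this linear relation is what makes the DC model more accurate than the transportation model (which ignores angles entirely) while remaining tractable for optimisation.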
Abstract:
This paper addresses the problem of water-demand forecasting for real-time operation of water supply systems. The present study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their enhanced capability to match or even improve on regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecasting as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The inputs tested for the neural networks were selected from the literature and from correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced an MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, again about 12% of the average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177. (C) 2012 American Society of Civil Engineers.
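The hybrid idea above, feeding the Fourier series forecasting error to the network as an input, can be sketched as a feature-construction step. The single-harmonic daily cycle, the coefficient values, and the feature names below are simplifying assumptions; the paper's actual Fourier model and network inputs may differ:

```python
import math

# Sketch of the ANN-H / DAN2-H input construction: a Fourier series models
# the daily demand cycle, and its residual (forecast error) becomes an
# extra feature for the neural network. One harmonic only, for brevity.

def fourier_daily(t_hours, a0, a1, b1, period=24.0):
    """One-harmonic Fourier model of hourly water demand (L/s)."""
    w = 2.0 * math.pi / period
    return a0 + a1 * math.cos(w * t_hours) + b1 * math.sin(w * t_hours)

def hybrid_features(t_hours, observed, a0, a1, b1):
    """Build the network input: lagged demand plus the Fourier residual."""
    forecast = fourier_daily(t_hours, a0, a1, b1)
    residual = observed - forecast   # the 'error' input of the hybrid models
    return {"lagged_demand": observed, "fourier_residual": residual}

# Hypothetical coefficients and an 8:00 observation of 27.5 L/s:
feats = hybrid_features(t_hours=8, observed=27.5, a0=25.0, a1=-3.0, b1=2.0)
```

The network then only needs to learn the structure the periodic component misses, which is one plausible reading of why the hybrid variants outperformed the plain networks.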
Abstract:
In recent years, cities around the world have invested substantial amounts of money in measures to reduce congestion and car trips. These investments are potential solutions to the well-known urban sprawl phenomenon, also called the "development trap," which leads to further congestion and a higher proportion of our time spent in slow-moving cars. In this search for solutions, the complex relationship between the urban environment and travel behaviour has been studied in a number of cases. The main question under discussion is how to encourage multi-stop tours. Thus, the objective of this paper is to verify whether unobserved factors influence tour complexity. For this purpose, we use a database from a survey conducted in 2006-2007 in Madrid, a suitable case study for analysing urban sprawl due to new urban developments and substantial changes in mobility patterns in recent years. A total of 943 individuals were interviewed from 3 selected neighbourhoods (CBD, urban, and suburban). We study the effect of unobserved factors on trip frequency. This paper presents the estimation of a hybrid model in which the latent variable is called propensity to travel and the discrete choice model comprises 5 alternatives of tour type. The results show that the characteristics of the neighbourhoods in Madrid are important for explaining trip frequency. The influence of land-use variables on trip generation is clear, in particular the presence of commercial retail. Through the estimation of elasticities and forecasting, we determine to what extent land-use policy measures modify travel demand. Comparing aggregate elasticities with percentage variations shows that percentage variations could lead to inconsistent results. The results show that hybrid models explain travel behaviour better than traditional discrete choice models.
Abstract:
Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference and parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach to time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models, and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference is unfortunately computationally intractable, and we show how to make use of variational techniques for approximating the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
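The switching logic of the hybrid estimators described above can be sketched as follows. The specific threshold-free switch on a congestion flag, the Mid-Point averaging, and the fundamental-relation flow method below are illustrative simplifications; the study itself identifies congestion state by clustering analysis, which is not reproduced here:

```python
# Sketch of a hybrid travel time estimator that switches between a
# speed-based Mid-Point method and a traffic-flow-based method depending
# on the congestion state. All numeric values are illustrative.

def midpoint_travel_time(length_km, speed_up_kmh, speed_down_kmh):
    """Speed-based Mid-Point method: average upstream/downstream speeds."""
    avg_speed = (speed_up_kmh + speed_down_kmh) / 2.0
    return 3600.0 * length_km / avg_speed           # seconds

def flow_based_travel_time(length_km, density_veh_km, flow_veh_h):
    """Flow-based estimate via the fundamental relation v = q / k."""
    speed = flow_veh_h / density_veh_km             # km/h
    return 3600.0 * length_km / speed

def hybrid_travel_time(length_km, speed_up, speed_down,
                       density, flow, congested):
    # Assumed switch rule: flow-based under congestion, Mid-Point otherwise.
    if congested:
        return flow_based_travel_time(length_km, density, flow)
    return midpoint_travel_time(length_km, speed_up, speed_down)

# Free-flow example: 1 km link, detector speeds 100 and 90 km/h.
t_free = hybrid_travel_time(1.0, 100.0, 90.0, 20.0, 1800.0, congested=False)
```

During incidents, the study further refines the on-line estimate with shock wave analysis to track rapidly moving queue fronts, a step omitted from this sketch.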
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. 
In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which take advantage of only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires components of slip and bolt hole ovalization, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically-generated and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide an insight into the underlying mechanics of components. In this study, a new hybrid modeling framework is proposed. In the hybrid framework, a conventional mathematical model is complemented by the informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence considering informational alternatives. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of informational methods is to model aspects that the mathematical model leaves out. 
Autoprogressive algorithm and self-learning simulation extract the missing aspects from a system response. In a hybrid framework, experimental data is an integral part of modeling, rather than being used strictly for validation processes. The potential of the hybrid methodology is illustrated through modeling complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation such as angles, flange-plates, and column panel zone, are idealized to a mathematical model by using a complete mechanical approach. Although the mathematical model represents envelope curves in terms of initial stiffness and yielding strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange-plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated with those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly-designed connection.
Abstract:
INTRODUCTION: Since the introduction of its QUT ePrints institutional repository of published research outputs, together with the world’s first mandate for author contributions to an institutional repository, Queensland University of Technology (QUT) has been a leader in support of green road open access. With QUT ePrints providing the mechanism for supporting the green road to open access, QUT has since continued to expand its secondary open access strategy supporting gold road open access, which is designed to assist QUT researchers to maximise the accessibility, and thus the impact, of their research. ---------- METHODS: QUT Library has adopted the position of selectively supporting true gold road open access publishing by using the Library Resource Allocation budget to pay the author publication fees for QUT authors wishing to publish in the open access journals of a range of publishers, including BioMed Central, Public Library of Science, and Hindawi. QUT Library has been careful to support only true open access publishers and not those open access publishers with hybrid models which “double dip” by charging authors publication fees and libraries subscription fees for the same journal content. QUT Library has maintained a watch on the growing number of open access journals available from gold road open access publishers and their increasing rate of success as measured by publication impact. ---------- RESULTS: This paper reports on the successes and challenges of QUT’s efforts to support true gold road open access publishers and to promote these publishing strategy options to researchers at QUT. The number and spread of QUT papers submitted and published in the journals of each publisher are provided.
Citation counts for papers and authors are also presented and analysed, with the intention of identifying the benefits to accessibility and research impact for early career and established researchers.---------- CONCLUSIONS: QUT Library is eager to continue and further develop support for this publishing strategy, and makes a number of recommendations to other research institutions, on how they can best achieve success with this strategy.
Abstract:
Critical literacy (CL) has been the subject of much debate in the Australian public and education arenas since 2002. Recently, this debate has dissipated as literacy education agendas and attendant policies shift to embrace more hybrid models and approaches to the teaching of senior English. This paper/presentation reports on the views expressed by four teachers of senior English about critical literacy and its relevance to students from culturally and linguistically diverse backgrounds who are learning English while undertaking senior studies in high school. Teachers’ understandings of critical literacy are important, especially given the emphasis on Critical and Creative Thinking and Literacy as two of the General Capabilities underpinning the Australian national curriculum. Using critical discourse analysis, data from four specialist ESL teachers in two different schools were analysed for the ways in which these teachers construct critical literacy. While all four teachers indicated significant commitment to critical literacy as an approach to English language teaching, the understandings they articulated varied from providing forms of access to powerful genres, to rationalist approaches to interrogating text, to a type of ‘critical-aesthetic’ analysis of text construction. Implications are also discussed.
Abstract:
Enterprise Social Networks continue to be adopted by organisations looking to increase collaboration between employees, customers, and industry partners. Offering a varied range of features and functionality, this technology can be distinguished by the underlying business models that providers of this software deploy. This study identifies and describes the different business models through an analysis of leading Enterprise Social Networks: Yammer, Chatter, SharePoint, Connections, Jive, Facebook, and Twitter. A key contribution of this research is the identification of consumer and corporate models as the two extreme approaches. These findings align well with research on the adoption of Enterprise Social Networks that has discussed bottom-up and top-down approaches. Of specific interest are hybrid models that wrap a corporate model within a consumer model and may therefore provide the synergies of both. From a broader perspective, this can be seen as the merging of the corporate and consumer markets for IT products and services.
Abstract:
Gravity mediated supersymmetry breaking becomes comparable to gauge mediated supersymmetry breaking contributions when messenger masses are close to the GUT scale. By suitably arranging the gravity contributions, one can modify the soft supersymmetry breaking sector to generate a large stop mixing parameter and a light Higgs mass of 125 GeV. In this kind of hybrid model, however, the nice features of gauge mediation, such as flavor conservation, are lost. To preserve these features, gravitational contributions should become important for lighter messenger masses, and only for certain fields. This is possible when the hidden sector contains multiple (at least two) spurions with hierarchical vacuum expectation values. In this case, the gravitational contributions can be organized to be “just right.” We present a complete model with a two-spurion hidden sector where the gravitational contribution comes from a warped flavor model in a Randall-Sundrum setting. Along the way, we present simple expressions for handling renormalization group equations when supersymmetry is broken by two different sectors at two different scales.
Abstract:
Cervical cancer persists as a major health problem worldwide, particularly in developing countries. Two vaccines against human papillomavirus (HPV) are currently available and approved for use in adolescent girls before the onset of sexual activity: a bivalent vaccine against serotypes 16 and 18, and a quadrivalent one against serotypes 6, 11, 16, and 18. These immunobiologicals aim to induce immunity against the papillomavirus and thereby act in the primary prevention of cervical cancer. Economic evaluations can help inform decision-making on the incorporation of the vaccine into national immunization programmes. These evaluations were the central object of this work, whose objective was to synthesise the evidence from a systematic literature review of economic evaluation studies of HPV vaccination in adolescent and pre-adolescent girls. A literature search was conducted in MEDLINE (via PubMed), LILACS (via Bireme), and the National Health Service Economic Evaluation Database (NHS EED) up to June 2010. Two reviewers independently selected full economic evaluation studies focused on HPV immunization of women with the commercially available vaccines, targeting the adolescent population. The search identified 188 titles; of these, 39 studies met the eligibility criteria and were included in the review. As this is a review of economic evaluations, no summary measure of the incremental cost-effectiveness ratios was computed. The 39 included articles comprised 51 economic evaluations in 26 countries. Cost-utility studies predominated (51%), as did the health-system perspective of analysis (76.4%).
Most studies (94.9%) chose girls aged 9 to 12 as their target population and ran simulations assuming lifelong immunity (84.6%). The models used were Markov models in 25 analyses, dynamic transmission models in 11, and hybrid models in 3. The sensitivity analyses revealed a set of uncertainties, a significant share of which related to vaccine aspects: vaccine costs, duration of immunity, need for booster doses, vaccine efficacy, and programme coverage. These elements constitute an area of special attention for future models developed in Brazil for economic analyses of HPV vaccination.
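The Markov models that predominate in the reviewed analyses can be sketched as a cohort propagated through health states. The three states and the transition probabilities below are invented for illustration; real HPV cost-utility models use many more states (infection genotypes, CIN grades, cancer stages) and attach costs and utilities to each:

```python
# Sketch of a Markov cohort model of the kind used in cost-utility
# analyses: a cohort distribution over health states is advanced one
# cycle (e.g. one year) at a time. State names and probabilities are
# hypothetical example values.

def run_cohort(trans, start, cycles):
    """Propagate a cohort distribution through a Markov chain.

    trans -- dict state -> dict state -> transition probability per cycle
    start -- dict state -> initial proportion of the cohort
    """
    dist = dict(start)
    for _ in range(cycles):
        nxt = {s: 0.0 for s in dist}
        for s, p in dist.items():
            for t, q in trans[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist

# Hypothetical annual transitions: Well -> HPV infection -> Cancer.
trans = {
    "well":   {"well": 0.95, "hpv": 0.05, "cancer": 0.0},
    "hpv":    {"well": 0.70, "hpv": 0.28, "cancer": 0.02},
    "cancer": {"well": 0.0,  "hpv": 0.0,  "cancer": 1.0},
}
dist = run_cohort(trans, {"well": 1.0, "hpv": 0.0, "cancer": 0.0}, cycles=10)
```

A vaccination scenario would lower the well-to-infection probability, and comparing accumulated costs and quality-adjusted life years between scenarios yields the incremental cost-effectiveness ratio.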
Abstract:
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does the right model mean, anyway? Our work has three major components. The first component deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second component deals with defining and building models for solving problems. We identify these models, determine what we need to know about them, and, importantly, determine when they are appropriate. Currently, the system has a collection of four basic models and two hybrid models. This collection of models has been successfully tested on a set of fifteen simple kinematics problems. The third major component of our work deals with how the models are selected.
Abstract:
The reliable evaluation of flood forecasting is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed, or lumped) have been proposed to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty for a correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed, uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented to estimate the outflow hydrograph. Then, synthetic observations of uncertain discharge values were generated as a function of the observed and simulated flow values at the basin outlet, and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed for updating the model state in response to the uncertain discharge observations. The results of this work pointed out that, overall, the assimilation of uncertain observations can improve hydrologic model performance. In particular, it was found that the model structure is an important factor, difficult to characterize, since it can induce different forecasts in terms of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
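The assimilation step described above can be sketched, in its simplest scalar form, as a single Kalman filter update of a forecast discharge with an uncertain observation. Real semi-distributed models carry vector states and full covariance matrices; all numbers below are illustrative, not from the Bacchiglione study:

```python
# Minimal sketch of assimilating one uncertain discharge observation into
# a model-forecast state with a scalar Kalman filter update.

def kalman_update(x_prior, p_prior, z_obs, r_obs):
    """One scalar Kalman update.

    x_prior -- model-forecast discharge (m^3/s)
    p_prior -- forecast error variance
    z_obs   -- (uncertain) observed discharge
    r_obs   -- observation error variance
    """
    k = p_prior / (p_prior + r_obs)           # Kalman gain
    x_post = x_prior + k * (z_obs - x_prior)  # corrected state
    p_post = (1.0 - k) * p_prior              # reduced uncertainty
    return x_post, p_post

# Illustrative values: forecast 50 m^3/s (variance 4), observation 56 (variance 1).
x, p = kalman_update(x_prior=50.0, p_prior=4.0, z_obs=56.0, r_obs=1.0)
```

The gain weighs the observation by its reliability, so a noisier sensor (larger r_obs) corrects the model state less, which is how uncertain observations from differently placed sensors end up contributing differently to the updated forecast.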
Abstract:
Instrumentation and automation play a vital role in managing the water industry. These systems generate vast amounts of data that must be effectively managed to enable intelligent decision making. Time series data management software, commonly known as data historians, is used for collecting and managing real-time (time series) information. More advanced software solutions provide a data infrastructure or utility-wide Operations Data Management System (ODMS) that stores, manages, calculates, displays, shares, and integrates data from the multiple disparate automation and business systems used daily in water utilities. These ODMS solutions are proven and can manage data from smart water meters through to the sharing of data across third-party corporations. This paper focuses on practical utility successes in the water industry, where utility managers are leveraging instantaneous access to data from proven, commercial off-the-shelf ODMS solutions to enable better real-time decision making. Successes include saving $650,000 per year in water loss control, safeguarding water quality, and saving millions of dollars in energy management and asset management. Immediate opportunities exist to integrate the research being done in academia with these ODMS solutions in the field and to leverage these successes at utilities around the world.