902 results for discrete and continuum models
Abstract:
The gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is a key factor in any sustainable development program. All aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data in Iran for the previous 34 years are used to predict consumption for the coming years. Four linear and nonlinear models are proposed, and six factors, namely Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly Natural Gas (NG) demand, are investigated.
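As a rough illustration of how PSO can fit a demand model of this kind, the sketch below estimates the coefficients of a linear model from synthetic data. The data, model form and PSO settings are illustrative assumptions, not the thesis's actual series or models.

```python
# Minimal PSO sketch for fitting a linear demand model NG ≈ w·x + b,
# where x collects drivers such as GDP, population, NI, temperature and CPI.
# Data and coefficients below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 34 yearly observations, 5 explanatory factors.
X = rng.normal(size=(34, 5))
y = X @ np.array([1.5, 0.8, -0.3, 0.1, 0.6]) + 2.0 + rng.normal(scale=0.1, size=34)

def sse(params):
    """Sum of squared errors of the linear model for one particle."""
    w, b = params[:-1], params[-1]
    return np.sum((X @ w + b - y) ** 2)

n_particles, n_dims, n_iter = 30, X.shape[1] + 1, 200
w_inertia, c1, c2 = 0.7, 1.5, 1.5          # standard PSO coefficients

pos = rng.uniform(-1, 1, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("fitted coefficients:", gbest)
```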
Abstract:
This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine-learning-based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
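The sketch below shows one plausible way to form such a committee: three regressors stand in for the ANN, model tree and locally weighted regression, and their outputs are merged with weights inversely proportional to validation error. The data, the choice of stand-in models and the weighting rule are illustrative assumptions, not the paper's exact MLUE setup.

```python
# Committee-of-models sketch: three ML "uncertainty models" predicting a
# quantile of the hydrological model output, merged by inverse-error weights.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Inputs: antecedent precipitation and streamflow; target: e.g. the 90%
# quantile of the Monte Carlo streamflow realizations at each time step.
X = rng.random((500, 4))
q90 = X @ np.array([2.0, 1.0, 0.5, 0.2]) + rng.normal(scale=0.2, size=500)

X_tr, X_val, X_te = X[:350], X[350:400], X[400:]
y_tr, y_val, y_te = q90[:350], q90[350:400], q90[400:]

models = [
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=1),
    DecisionTreeRegressor(max_depth=6, random_state=1),
    KNeighborsRegressor(n_neighbors=10),   # crude proxy for locally weighted regression
]
for m in models:
    m.fit(X_tr, y_tr)

# Committee: weight each model by the inverse of its validation error.
val_err = np.array([np.abs(m.predict(X_val) - y_val).mean() for m in models])
weights = (1.0 / val_err) / (1.0 / val_err).sum()

preds = np.column_stack([m.predict(X_te) for m in models])
merged = preds @ weights

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("individual RMSE:", [rmse(preds[:, i] - y_te) for i in range(3)])
print("merged RMSE    :", rmse(merged - y_te))
```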
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats where that remains the preferred access arrangement for the researcher. By decoupling the data model from data persistence, it becomes much easier to use, for instance, relational databases interchangeably, providing stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
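A minimal sketch of the design idea described above, a configuration data model decoupled from interchangeable JSON and tab-separated back-ends, is shown below. Class and field names are hypothetical illustrations, not SWIFT's actual API.

```python
# Data model decoupled from on-disk persistence: the same SubareaConfig
# objects can be saved/loaded through a JSON or a legacy TSV store.
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    routing_link: str

class JsonStore:
    def save(self, path, subareas):
        with open(path, "w") as f:
            json.dump([asdict(s) for s in subareas], f, indent=2)

    def load(self, path):
        with open(path) as f:
            return [SubareaConfig(**d) for d in json.load(f)]

class TsvStore:
    """Legacy-style tab-separated persistence for the same data model."""
    def save(self, path, subareas):
        with open(path, "w") as f:
            for s in subareas:
                f.write(f"{s.name}\t{s.area_km2}\t{s.routing_link}\n")

    def load(self, path):
        out = []
        with open(path) as f:
            for line in f:
                name, area, link = line.rstrip("\n").split("\t")
                out.append(SubareaConfig(name, float(area), link))
        return out

# Calling code depends only on the data model, not on the storage format.
catchment = [SubareaConfig("upper", 120.5, "link_01"),
             SubareaConfig("lower", 98.2, "link_02")]
for store in (JsonStore(), TsvStore()):
    store.save("subareas.tmp", catchment)
    assert store.load("subareas.tmp") == catchment
```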
Abstract:
New business and technology platforms are required to sustainably manage urban water resources [1,2]. However, any proposed solutions must be cognisant of security, privacy and other factors that may inhibit adoption and hence impact. The FP7 WISDOM project (funded by the European Commission - GA 619795) aims to achieve a step change in water and energy savings via the integration of innovative Information and Communication Technologies (ICT) frameworks to optimize water distribution networks and to enable change in consumer behavior through innovative demand management and adaptive pricing schemes [1,2,3]. The WISDOM concept centres on the integration of water distribution, sensor monitoring and communication systems coupled with semantic modelling (using ontologies, potentially connected to BIM, to serve as intelligent linkages throughout the entire framework) and control capabilities to provide near real-time management of urban water resources. Fundamental to this framework are the needs and operational requirements of users and stakeholders at domestic, corporate and city levels, which require the interoperability of a number of demand and operational models fed with data from diverse sources such as sensor networks and crowdsourced information. This has implications for the provenance and trustworthiness of such data and how it can be used, not only in understanding system and user behaviours but, more importantly, in the real-time control of such systems. Adaptive and intelligent analytics will be used to produce decision support systems that will drive the ability to increase the variability of both supply and consumption [3]. This in turn paves the way for adaptive pricing incentives and a greater understanding of the water-energy nexus. This integration is complex and uncertain, yet typical of a cyber-physical system, and its relevance transcends the water resource management domain. The WISDOM framework will be modeled and simulated with initial testing at an experimental facility in France (AQUASIM, a full-scale test-bed facility for studying sustainable water management), then deployed and evaluated in two pilots in Cardiff (UK) and La Spezia (Italy). These demonstrators will evaluate the integrated concept, providing insight for wider adoption.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, over the period 1976-1992. We also test a conditional APT model that uses the difference between the 3-day rate (Cdb) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
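For reference, conditional factor models of this type are typically estimated by GMM through moment conditions of the general form below; the notation is generic and does not necessarily match the paper's exact specification or instrument set.

```latex
% Generic conditional two-factor moment conditions estimated by GMM:
% pricing errors are orthogonal to instruments z_t in the information set.
\begin{align*}
  \varepsilon_{i,t+1} &= r_{i,t+1} - \alpha_i - \beta_{i,m}\, r_{m,t+1}
                        - \beta_{i,f}\, f_{t+1}, \\
  \mathbb{E}\!\left[\varepsilon_{i,t+1} \otimes z_t\right] &= 0,
\end{align*}
% where $r_{i,t+1}$ is the return on size portfolio $i$, $r_{m,t+1}$ the return
% on the Brazilian market index, $f_{t+1}$ the spread between the Cdb rate and
% the overnight rate, and $z_t$ a vector of conditioning instruments.
```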
Abstract:
This paper presents evidence on the key role of infrastructure in Andean Community trade patterns. Three distinct but related gravity models of bilateral trade are used. The first model aims at identifying the importance of the Preferential Trade Agreement and adjacency for intra-regional trade, while also checking the traditional roles of economic size and distance. The second and third models also assess the evolution of the Trade Agreement and the importance of sharing a common border, but their main goal is to analyze the relevance of including infrastructure in the augmented gravity equation, testing the theoretical assumption that infrastructure endowments, by reducing trade and transport costs, reduce the “distance” between bilateral partners. Indeed, if one accepts distance as a proxy for transportation costs, infrastructure development and improvement drastically modify it. Trade liberalization eliminates most of the distortions that a protectionist tariff system imposes on international business; hence transportation costs nowadays represent a considerably larger barrier to trade than in past decades. As new trade pacts are negotiated in the Americas, borders and old agreements will lose significance; trade among countries will be nearly without restrictions, and bilateral flows will be defined in terms of costs and competitiveness. Competitiveness, however, will only be achieved through an improvement in infrastructure services at all points in the production-distribution chain.
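An augmented gravity equation of the general form described above is sketched below; the exact regressors and dummies used in the paper's three models may differ.

```latex
% Log-linear augmented gravity equation with infrastructure terms (illustrative).
\begin{equation*}
\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij}
           + \beta_4 \ln INF_i + \beta_5 \ln INF_j
           + \gamma_1 PTA_{ij} + \gamma_2 ADJ_{ij} + \varepsilon_{ij}
\end{equation*}
% where $T_{ij}$ is bilateral trade, $Y_i, Y_j$ are the partners' economic sizes,
% $D_{ij}$ is distance, $INF_i, INF_j$ are infrastructure endowment indices, and
% $PTA_{ij}$, $ADJ_{ij}$ are dummies for the Preferential Trade Agreement and a
% common border.
```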
Abstract:
This paper studies bankruptcy law in Latin America, focusing on the Brazilian reform. We start with a review of the international literature and its evolution on this subject. Next, we examine the economic incentives associated with several aspects of bankruptcy laws and insolvency procedures in general, as well as the trade-offs involved. After this theoretical discussion, we empirically evaluate the current quality of insolvency procedures in Latin America using data from Doing Business and the World Development Indicators, both from the World Bank, and from the IMF's International Financial Statistics. We find that the region is governed by an inefficient law, even when compared with regions of lower per capita income. As theoretical and econometric models predict, this inefficiency has severe consequences for credit markets and the cost of capital. Next, we focus on the recent Brazilian bankruptcy reform, analyzing its main changes and possible effects on the economic environment. The appendix describes the difficulties of this reform process in Brazil, and what other Latin American countries can possibly learn from it.
Abstract:
Behavioral finance, or behavioral economics, is a field of theoretical research holding that consequential psychological and behavioral variables are involved in financial activities such as corporate finance and investment decisions (i.e. asset allocation, portfolio management and so on). This field has attracted increasing interest from scholars and financial professionals since the episodes of multiple speculative bubbles and financial crises. Indeed, practical inconsistencies between economic events and traditional neoclassical financial theories have pushed more and more researchers to look for new and broader models and theories. The purpose of this work is to present this field of research, still little known to a vast majority. This work is thus a survey that introduces its origins and its main theories, while contrasting them with the traditional finance theories still predominant today. The main question guiding this work is whether this area of inquiry is able to provide better explanations for real-life market phenomena. For that purpose, the study presents some market anomalies unresolved by traditional theories which have recently been addressed by behavioral finance researchers. In addition, it presents a practical application to portfolio management, comparing asset allocation under the traditional Markowitz approach with the Black-Litterman model, which incorporates some features of behavioral finance.
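For orientation, the standard Black-Litterman combination of equilibrium returns and investor views is recalled below as a general reference formula; it is not the thesis's particular calibration.

```latex
% Standard Black-Litterman posterior expected returns.
\begin{equation*}
\mathbb{E}[R] \;=\; \Big[(\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P\Big]^{-1}
                   \Big[(\tau\Sigma)^{-1}\Pi + P^{\top}\Omega^{-1}Q\Big]
\end{equation*}
% where $\Pi$ are the market-implied equilibrium returns, $\Sigma$ the return
% covariance matrix, $\tau$ a scaling parameter, $P$ the matrix mapping assets
% to views, $Q$ the vector of view returns and $\Omega$ the covariance of view
% errors; the resulting $\mathbb{E}[R]$ is then fed into the usual Markowitz
% mean-variance optimization.
```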
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based in the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 80's within the electronic design automation community and comprehends a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model to build extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. Such object-oriented framework is included within a CAD Framework, where it plays important roles on typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the use of the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches: - object-oriented frameworks are extensible by design, thus this should be also true regarding the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation; - the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings; - the control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. Such mechanism is generic enough to be also used by further extensions of the design data model, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates such event to the semantic model, which evaluates if a state change is possible. If positive, it triggers the change of state of both semantics and view. 
Our approach took advantage of such inversion of control and included an layer between semantics and view to take into account the possibility of multi-view consistency; - to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy which is being used. Furthermore, the use of event pools allows for a late synchronization between view and semantics in case of unavailability of a network connection between them; - the use of proxy objects raised significantly the abstraction of the integration of design automation resources, as either remote or local tools and services are accessed through method calls in a local object. The connection to remote tools and services using a look-up protocol also abstracted completely the network location of such resources, allowing for resource addition and removal during runtime; - the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants the independence between the CAD Framework and the operating system. All such improvements contributed to a higher abstraction on the distribution of design automation resources and also introduced a new paradigm for the remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase the group awareness and allow a richer transfer of experiences among them, improving significantly the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing one a subset of the contributions of this thesis. The first one uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second one extends the foundations of the implemented object-oriented framework to support interface-based design. Such extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the possibility of integration of multimedia metadata to the design data model. Such possibility is explored in the frame of an online educational and training platform.
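A minimal sketch of the inversion of control between view and semantics, including an event pool for late synchronization, is given below. Class names are illustrative; the actual Cave2 framework is implemented in Java, while this sketch uses Python for brevity.

```python
# The view forwards each user interaction as an event to the semantic model,
# which decides whether the state change is legal and, if so, refreshes every
# registered view. An event pool buffers events while the model is unreachable.
from dataclasses import dataclass

@dataclass
class DesignEvent:
    kind: str          # e.g. "add_instance"
    payload: dict

class SemanticModel:
    def __init__(self):
        self.state = []          # simplified design data
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def handle(self, event: DesignEvent) -> bool:
        # Evaluate whether the requested state change is possible.
        if event.kind == "add_instance" and event.payload["name"] not in self.state:
            self.state.append(event.payload["name"])
            for v in self.views:          # propagate to all views (multi-view consistency)
                v.refresh(self.state)
            return True
        return False

class View:
    def __init__(self, model=None):
        self.model = model                # None models a lost connection
        self.pending = []                 # event pool for late synchronization
        self.shown = []

    def user_input(self, event: DesignEvent):
        if self.model is None:
            self.pending.append(event)    # buffer until the model is reachable
        else:
            self.model.handle(event)

    def resync(self, model):
        self.model = model
        for e in self.pending:
            model.handle(e)
        self.pending.clear()

    def refresh(self, state):
        self.shown = list(state)

model = SemanticModel()
online, offline = View(model), View()
model.attach(online); model.attach(offline)
online.user_input(DesignEvent("add_instance", {"name": "alu0"}))
offline.user_input(DesignEvent("add_instance", {"name": "reg_file"}))
offline.resync(model)                     # late synchronization via the event pool
print(online.shown)                       # ['alu0', 'reg_file']
```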
Abstract:
Asset allocation decisions and value-at-risk calculations rely strongly on volatility estimates. Volatility measures such as rolling window, EWMA, GARCH and stochastic volatility are used in practice. GARCH and EWMA type models, which incorporate the dynamic structure of volatility and are capable of forecasting the future behavior of risk, should perform better than constant, rolling-window volatility models. For the same asset, the model that is ‘best’ according to some criterion can change from period to period. We use the reality check test to verify whether one model outperforms the others over a class of re-sampled time-series data. The test is based on re-sampling the data using the stationary bootstrap. For each re-sample we identify the ‘best’ model according to two criteria and analyze the distribution of the performance statistics. We compare constant volatility, EWMA and GARCH models using a quadratic utility function and a risk management measure as comparison criteria. No model consistently outperforms the benchmark.
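The sketch below illustrates the resampling idea behind this kind of comparison: stationary-bootstrap (Politis-Romano) resamples of a return series are drawn and, on each resample, the forecast loss of an EWMA model is compared with a constant-volatility benchmark. The return data, the QLIKE loss and the parameters are illustrative assumptions; GARCH and the full reality check statistic are omitted for brevity.

```python
# Stationary bootstrap comparison of EWMA vs. constant volatility (sketch).
import numpy as np

rng = np.random.default_rng(2)
returns = rng.standard_t(df=5, size=1000) * 0.01      # placeholder return series

def stationary_bootstrap(x, p=0.1, rng=rng):
    """One Politis-Romano resample: blocks with geometric mean length 1/p."""
    n, out = len(x), []
    while len(out) < n:
        start = rng.integers(n)
        length = rng.geometric(p)
        out.extend(x[(start + np.arange(length)) % n])
    return np.array(out[:n])

def ewma_var(x, lam=0.94):
    """RiskMetrics-style EWMA variance forecast."""
    v = np.empty_like(x)
    v[0] = x.var()
    for t in range(1, len(x)):
        v[t] = lam * v[t - 1] + (1 - lam) * x[t - 1] ** 2
    return v

def qlike_loss(x, var_forecast):
    """Average QLIKE loss of a variance forecast against squared returns."""
    return np.mean(np.log(var_forecast) + x ** 2 / var_forecast)

wins, n_boot = 0, 500
for _ in range(n_boot):
    xb = stationary_bootstrap(returns)
    benchmark = np.full_like(xb, xb.var())             # constant-volatility benchmark
    wins += qlike_loss(xb, ewma_var(xb)) < qlike_loss(xb, benchmark)

print(f"EWMA beats the constant-volatility benchmark in {wins}/{n_boot} resamples")
```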
Abstract:
Quantifying country risk, and political risk in particular, raises several difficulties for companies, institutions and investors. Since economic indicators are updated much less frequently than Facebook, understanding, and more precisely measuring, what is happening on the ground in real time can be a challenge for political risk analysts. However, with the growing availability of "big data" from social tools such as Twitter, now is an opportune moment to examine the kinds of social media metrics that are available and the limitations of their application to country risk analysis, especially during episodes of political violence. Using the qualitative method of bibliographic research, this study maps the current landscape of data available from Twitter, analyzes current and potential methods of analysis, and discusses their possible application in the field of political risk analysis. After a thorough review of the field to date, and taking into account the technological advances expected in the short and medium term, this study concludes that, despite obstacles such as the cost of data storage, the limitations of real-time analysis and the potential for data manipulation, the potential benefits of applying social media metrics to political risk analysis, particularly for structured-qualitative and quantitative models, clearly outweigh the challenges.
Abstract:
In modeling complex systems, traditional analytical approaches based on differential equations often lead to intractable solutions. To circumvent this problem, Agent-Based Models emerge as a complementary tool, in which the system is modeled from its constituent entities and their interactions. Financial markets are examples of complex systems, and as such, the use of agent-based models is applicable. This work implements an Artificial Financial Market composed of market makers, information disseminators, and a set of heterogeneous agents that trade an asset through a Continuous Double Auction mechanism. Several aspects of the simulation are investigated to consolidate its understanding and thus contribute to model design, among which we highlight: differences between the Continuous and the Discrete Double Auction; implications of varying the spread quoted by the Market Maker; the effect of budget constraints on the agents; and an analysis of price formation in order submission. To bring the model closer to the reality of the Brazilian market, an auxiliary technique called Inverse Simulation is used to calibrate the input parameters, so that the resulting simulated price trajectories are close to historical price series observed in the market.
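The sketch below illustrates the continuous double auction matching logic at the core of such an artificial market: an incoming order trades immediately against the best opposite quote whenever prices cross, otherwise it rests in the book. Agent behavior, the market maker and the inverse-simulation calibration are omitted; names are illustrative, not the thesis's actual code.

```python
# Minimal continuous double auction (CDA) order book with price-time priority.
import heapq

class OrderBook:
    def __init__(self):
        self.bids = []   # max-heap via negated prices: (-price, seq, qty)
        self.asks = []   # min-heap: (price, seq, qty)
        self.seq = 0
        self.trades = []

    def submit(self, side, price, qty):
        self.seq += 1
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        while qty > 0 and opposite:
            best_price = opposite[0][0] if side == "buy" else -opposite[0][0]
            crossed = price >= best_price if side == "buy" else price <= best_price
            if not crossed:
                break
            key_p, oseq, avail = heapq.heappop(opposite)
            traded = min(qty, avail)
            self.trades.append((best_price, traded))
            qty -= traded
            if avail > traded:                           # re-insert the remainder
                heapq.heappush(opposite, (key_p, oseq, avail - traded))
        if qty > 0:                                      # rest in the book
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, self.seq, qty))

book = OrderBook()
book.submit("sell", 10.1, 100)
book.submit("sell", 10.3, 50)
book.submit("buy", 10.2, 120)   # trades 100 @ 10.1; remaining 20 rest as a bid at 10.2
print(book.trades)              # [(10.1, 100)]
```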
Abstract:
Nowadays, more than half of software development projects fail to meet the final users' expectations. One of the main causes is insufficient knowledge about the organization to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proven to be a well-defined method to specify, through models and diagrams, the essence of any organization at a high level of abstraction. However, this methodology is independent of the implementation platform and lacks the ability to save and propagate changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema used as the basis for a wiki system that allows the modeling of any organization, independently of its implementation, as well as the aforementioned change propagation in a runtime environment. Based on DEMO and UEAOM, this project aims to develop efficient and standardized methods to enable the automatic conversion of DEMO Ontological Models, based on the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems that support BPMN.
Abstract:
Growth curve models provide a visual assessment of growth as a function of time and the prediction of body weight at a specific age. This study aimed at estimating tinamou growth curves using different models and at verifying their goodness of fit. A total of 11,639 weight records from 411 birds, 6,671 from females and 3,095 from males, were analyzed. The highest estimates of the a parameter were obtained with the Brody (BD), von Bertalanffy (VB), Gompertz (GP), and Logistic (LG) functions. Adult females were 5.7% heavier than males. The highest estimates of the b parameter were obtained in the LG, GP, BD, and VB models. The estimated k parameter values, in decreasing order, were obtained in the LG, GP, VB, and BD models. The correlation between the parameters a and k showed that heavier birds are less precocious than lighter ones. The estimates of the intercept, linear regression coefficient, quadratic regression coefficient, differences between the quadratic coefficients of the functions, and estimated knots of the quadratic-quadratic-quadratic segmented polynomial (QQQSP) were: 31.1732 +/- 2.41339; 3.07898 +/- 0.13287; 0.02689 +/- 0.00152; -0.05566 +/- 0.00193; 0.02349 +/- 0.00107; and 57 and 145 days, respectively. The estimated predicted mean error (PME) values of the VB, GP, BD, LG, and QQQSP models were, respectively, 0.8353; 0.01715; -0.6939; -2.2453; and -0.7544%. The coefficient of determination (R²) and least square error (MS) values showed similar results. In conclusion, the VB and QQQSP models adequately described tinamou growth. The best model to describe tinamou growth was the Gompertz model, because it presented the highest R² values, ease of convergence, lower PME, and ease of biological interpretation of the parameters.
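For reference, the usual parameterizations of the four nonlinear growth functions compared above are recalled below, with a the asymptotic (adult) weight, b an integration constant and k the maturity rate; the study's exact forms may differ slightly.

```latex
% Common parameterizations of the four growth functions (illustrative).
\begin{align*}
  \text{Brody:}           && y_t &= a\,(1 - b\,e^{-kt}) \\
  \text{von Bertalanffy:} && y_t &= a\,(1 - b\,e^{-kt})^{3} \\
  \text{Gompertz:}        && y_t &= a\,\exp(-b\,e^{-kt}) \\
  \text{Logistic:}        && y_t &= a\,(1 + b\,e^{-kt})^{-1}
\end{align*}
```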
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)