913 results for grade and tonnage models
Abstract:
Gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is a key factor in any sustainable development program. All aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data in Iran for the previous 34 years are used to predict consumption for the coming years. Four linear and nonlinear models are proposed, and six factors are investigated: Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly Natural Gas (NG) demand.
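The PSO-fitting idea can be sketched as follows: a minimal global-best PSO that tunes the coefficients of a linear demand model by minimizing squared forecast error. All data, coefficients and hyperparameters here are illustrative stand-ins, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the thesis data: yearly NG demand as a linear
# function of two drivers (e.g. GDP and population), plus noise.
years = 34
X = np.column_stack([np.ones(years), rng.uniform(1, 10, years), rng.uniform(1, 5, years)])
true_w = np.array([2.0, 1.5, -0.5])
y = X @ true_w + rng.normal(0, 0.05, years)

def mse(w):
    """Fitness: mean squared forecast error of the linear model."""
    return np.mean((X @ w - y) ** 2)

# Minimal global-best PSO (standard inertia / cognitive / social update).
n_particles, dim, iters = 30, 3, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # should be close to true_w
```

On this convex toy problem PSO and least squares agree; the method's appeal in the thesis setting is that the same loop also handles the nonlinear model forms.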
Abstract:
This map is designed as a resource for students and the public to use and develop a better understanding of the trails system on the Colby Campus. I used a Garmin GPSmap 60CS to chart all the trails on Runnals Hill and in the Arboretum. Then, using ArcGIS, I compiled the tracked trails and laid them over an aerial photo of the campus. Because many of the trails are hard to find, I took digital photos of each trail entry to help the user locate them. Then, by taking note of the grade and width of the trail, I decided which trails were suitable for certain activities. This gives users an idea of where to go for walking, running, mountain biking, cross-country skiing, and snowshoeing.
Abstract:
This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. Machine-learning-based uncertainty prediction is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
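One simple way to form such a committee, sketched below on synthetic data, is to weight each model's quantile predictions by its inverse validation error. The abstract does not specify the merging scheme, so the inverse-MSE weighting, the noise levels and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the three uncertainty models' predictions of,
# say, the 90% quantile of the streamflow error pdf on a validation period.
q_true = rng.uniform(10, 20, 100)            # "observed" target quantiles
q_ann = q_true + rng.normal(0, 1.0, 100)     # artificial neural network
q_mt  = q_true + rng.normal(0, 2.0, 100)     # model tree
q_lwr = q_true + rng.normal(0, 3.0, 100)     # locally weighted regression

preds = np.vstack([q_ann, q_mt, q_lwr])

# Committee weights: inverse mean-squared validation error, normalized.
mse = ((preds - q_true) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()

q_merged = w @ preds  # weighted committee output
merged_mse = ((q_merged - q_true) ** 2).mean()
print(merged_mse, mse)
```

Because the three error processes are only partly correlated, the merged output's error is typically below that of the worst member and often below the best, which is the effect the verification results describe.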
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the preferred access arrangement for the researcher. By decoupling the data model from data persistence, it becomes much easier to use, for instance, relational databases interchangeably, to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate. A schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
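The decoupling of data model from persistence described above can be sketched as one in-memory configuration object with interchangeable serializers. This is a hypothetical illustration, not SWIFT's actual API; all class and field names are invented.

```python
import json
from dataclasses import dataclass, asdict

# One data model, several interchangeable persistence back ends
# (JSON for research use, TSV as the legacy text option).
@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    routing: str

def to_json(cfg: SubareaConfig) -> str:
    return json.dumps(asdict(cfg), indent=2)

def from_json(text: str) -> SubareaConfig:
    return SubareaConfig(**json.loads(text))

def to_tsv(cfg: SubareaConfig) -> str:
    d = asdict(cfg)
    return "\t".join(d.keys()) + "\n" + "\t".join(str(v) for v in d.values())

cfg = SubareaConfig(name="upper_brue", area_km2=135.2, routing="muskingum")
assert from_json(to_json(cfg)) == cfg  # round-trip via the JSON back end
print(to_tsv(cfg))
```

Adding a relational-database back end would mean writing one more pair of functions against the same dataclass, leaving every consumer of `SubareaConfig` untouched, which is the point of the design.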
Abstract:
New business and technology platforms are required to sustainably manage urban water resources [1,2]. However, any proposed solutions must be cognisant of security, privacy and other factors that may inhibit adoption and hence impact. The FP7 WISDOM project (funded by the European Commission - GA 619795) aims to achieve a step change in water and energy savings via the integration of innovative Information and Communication Technologies (ICT) frameworks to optimize water distribution networks and to enable change in consumer behavior through innovative demand management and adaptive pricing schemes [1,2,3]. The WISDOM concept centres on the integration of water distribution, sensor monitoring and communication systems coupled with semantic modelling (using ontologies, potentially connected to BIM, to serve as intelligent linkages throughout the entire framework) and control capabilities to provide for near real-time management of urban water resources. Fundamental to this framework are the needs and operational requirements of users and stakeholders at domestic, corporate and city levels, and this requires the interoperability of a number of demand and operational models, fed with data from diverse sources such as sensor networks and crowdsourced information. This has implications for the provenance and trustworthiness of such data and how it can be used, not only in understanding system and user behaviours, but more importantly in the real-time control of such systems. Adaptive and intelligent analytics will be used to produce decision support systems that will drive the ability to increase the variability of both supply and consumption [3]. This in turn paves the way for adaptive pricing incentives and a greater understanding of the water-energy nexus. This integration is complex and uncertain, yet typical of a cyber-physical system, and its relevance transcends the water resource management domain.
The WISDOM framework will be modelled and simulated with initial testing at an experimental facility in France (AQUASIM, a full-scale test-bed facility to study sustainable water management), then deployed and evaluated in two pilots in Cardiff (UK) and La Spezia (Italy). These demonstrators will evaluate the integrated concept, providing insight for wider adoption.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, during the period 1976-1992. We also test a conditional APT model by using the difference between the 3-day rate (Cdb) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
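The full conditional GMM estimation is involved; as a simplified stand-in, the unconditional version of the two-factor specification reduces to a time-series regression of portfolio excess returns on the market and the interest-rate spread. The sketch below uses OLS on synthetic data, with loadings and volatilities that are assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 240  # synthetic monthly observations

# Two factors: market excess return and a stand-in for the
# short-rate-minus-overnight spread that proxies inflation risk.
mkt = rng.normal(0.01, 0.05, T)
spread = rng.normal(0.0, 0.02, T)

# Synthetic size-portfolio excess returns with known factor loadings.
beta_mkt, beta_spr = 1.2, 0.8
r = beta_mkt * mkt + beta_spr * spread + rng.normal(0, 0.01, T)

# OLS recovers the loadings; GMM generalizes this by interacting the
# moment conditions with conditioning instruments.
X = np.column_stack([np.ones(T), mkt, spread])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
print(coef)  # intercept near 0, loadings near 1.2 and 0.8
```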
Abstract:
This paper presents evidence on the key role of infrastructure in the Andean Community trade patterns. Three distinct but related gravity models of bilateral trade are used. The first model aims at identifying the importance of the Preferential Trade Agreement and adjacency on intra-regional trade, while also checking the traditional roles of economic size and distance. The second and third models also assess the evolution of the Trade Agreement and the importance of sharing a common border, but their main goal is to analyze the relevance of including infrastructure in the augmented gravity equation, testing the theoretical assumption that infrastructure endowments, by reducing trade and transport costs, reduce “distance” between bilateral partners. Indeed, if one accepts distance as a proxy for transportation costs, infrastructure development and improvement drastically modify it. Trade liberalization eliminates most of the distortions that a protectionist tariff system imposes on international business; hence transportation costs represent nowadays a considerably larger barrier to trade than in past decades. As new trade pacts are being negotiated in the Americas, borders and old agreements will lose significance; trade among countries will be nearly without restrictions, and bilateral flows will be defined in terms of costs and competitiveness. Competitiveness, however, will only be achieved by an improvement in infrastructure services at all points in the production-distribution chain.
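The augmented gravity equation described above is typically estimated in log-linear form. The sketch below simulates bilateral trade from assumed elasticities (GDP, distance, an infrastructure index) and recovers them by least squares; every number and variable name is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500  # synthetic country pairs

gdp_prod = rng.uniform(1, 100, n)   # product of partners' GDPs
dist = rng.uniform(100, 5000, n)    # bilateral distance, km
infra = rng.uniform(0, 1, n)        # illustrative infrastructure index

# Augmented gravity equation in logs, with assumed elasticities:
# trade rises with joint GDP and infrastructure, falls with distance.
ln_trade = (1.0 + 0.9 * np.log(gdp_prod) - 1.1 * np.log(dist)
            + 0.5 * infra + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(gdp_prod), np.log(dist), infra])
coef, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)
print(coef)  # elasticities near 0.9, -1.1, 0.5
```

The negative distance elasticity alongside a positive infrastructure coefficient is exactly the pattern the paper interprets as infrastructure "reducing distance" between partners.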
Abstract:
This paper studies Bankruptcy Law in Latin America, focusing on the Brazilian reform. We start with a review of the international literature and its evolution on this subject. Next, we examine the economic incentives associated with several aspects of bankruptcy laws and insolvency procedures in general, as well as the trade-offs involved. After this theoretical discussion, we evaluate empirically the current quality of insolvency procedures in Latin America using data from the World Bank's Doing Business and World Development Indicators and from the IMF's International Financial Statistics. We find that the region is governed by inefficient law, even when compared with regions of lower per capita income. As theoretical and econometric models predict, this inefficiency has severe consequences for credit markets and the cost of capital. Next, we focus on the recent Brazilian bankruptcy reform, analyzing its main changes and possible effects on the economic environment. The appendix describes the difficulties of this process of reform in Brazil, and what other Latin American countries can learn from it.
Abstract:
This paper evaluates the long-run effects of economic instability. In particular, we study the impact of idiosyncratic shocks to the father's income on children's human capital accumulation variables such as school drop-out, repetition rates and domestic and non-domestic labor. Although the problem of child labor in Brazil has declined greatly during the last decade, the number of children working is still substantial. The low levels of educational attainment in Brazil are also a main cause for concern. The large rotating panel data set used allows for the estimation of the impacts of changes in the occupational and income status of fathers on changes in their children's time allocation. The empirical analysis is restricted to families with fathers, mothers and at least one child between 10 and 15 years of age in the main Brazilian metropolitan areas during the 1982-1999 period. We perform logistic regressions controlling for child characteristics (gender, age, whether he/she is behind in school for age), parents' characteristics (grade attainment and income) and time and location variables. The main variables analyzed are dynamic proxies of impulses and responses, namely shocks to the household head's income and unemployment status, on the one hand, and the child's probability of dropping out of school, of repeating a grade and of starting to work, on the other. The findings suggest that the father's income has a significant positive correlation with the child's dropping out of school and repeating a grade. The findings do not suggest a significant relationship between a father becoming unemployed and a child entering the non-domestic labor market. However, the results demonstrate a significant positive relationship between a father becoming unemployed and a child beginning to work in domestic labor. There was also a positive correlation between a father becoming unemployed and a child dropping out and repeating a grade.
Both gender and age were highly significant, with boys and older children being more likely to work, drop out and repeat grades.
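A minimal sketch of the kind of logistic specification described above, fitted by plain gradient ascent on synthetic data. The data-generating process, coefficient values and covariates are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Hypothetical covariates: indicator of a negative income shock to the
# father, and child age (10-15, centered at 10).
shock = rng.integers(0, 2, n)
age = rng.integers(10, 16, n)

# Assumed data-generating process: shocks raise the odds of dropping out.
logit = -3.0 + 1.0 * shock + 0.2 * (age - 10)
p = 1 / (1 + np.exp(-logit))
drop = rng.binomial(1, p)

# Logistic regression via gradient ascent on the average log-likelihood.
X = np.column_stack([np.ones(n), shock, age - 10])
w = np.zeros(3)
for _ in range(5000):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (drop - pred) / n
print(w)  # shock coefficient positive, near the assumed 1.0
```

In the paper's setting the same regression additionally controls for gender, being behind in school for age, parental grade attainment and income, and time and location dummies.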
Abstract:
Behavioral finance, or behavioral economics, is a theoretical field of research stating that psychological and behavioral variables are involved in financial activities such as corporate finance and investment decisions (i.e. asset allocation, portfolio management and so on). The field has attracted increasing interest from scholars and financial professionals since the episodes of multiple speculative bubbles and financial crises. Indeed, practical inconsistencies between economic events and traditional neoclassical financial theories have pushed more and more researchers to look for new and broader models and theories. The purpose of this work is to present this field of research, still little known to a vast majority. This work is thus a survey that introduces its origins and its main theories, while contrasting them with the traditional finance theories still predominant nowadays. The main question guiding this work is whether this area of inquiry is able to provide better explanations for real-life market phenomena. For that purpose, the study presents some market anomalies unsolved by traditional theories, which have recently been addressed by behavioral finance researchers. In addition, it presents a practical application to portfolio management, comparing asset allocation under the traditional Markowitz approach to the Black-Litterman model, which incorporates some features of behavioral finance.
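The Markowitz step of that comparison can be illustrated in a few lines: unconstrained mean-variance weights are proportional to the inverse covariance matrix times expected returns. The figures below are toy values, not from the survey.

```python
import numpy as np

# Toy three-asset example of the classical Markowitz allocation that the
# Black-Litterman model modifies (all figures are illustrative).
mu = np.array([0.08, 0.10, 0.12])            # expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])         # return covariance matrix

# Unconstrained mean-variance weights: w proportional to inv(cov) @ mu,
# normalized here to sum to one.
raw = np.linalg.solve(cov, mu)
w = raw / raw.sum()
print(w, w.sum())
```

Black-Litterman keeps this final step but replaces `mu` with a blend of market-equilibrium returns and the investor's own views, which tempers the extreme weights that raw sample estimates of `mu` tend to produce.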
Abstract:
This work, a complementary requirement for the Master's degree in Education, aims to suggest additional forms of control of school work in vocational secondary education. It draws on the legislation in force to analyze inconsistencies that frustrate the objectives of that level of education, and identifies the Supervised Internship in the Company as the stage where a more effective re-evaluation of school work should take place. As a contribution to solving this question, it proposes an evaluation model for this type of internship, intended for its monitoring and control, so as to make it an effective instrument of School-Company integration in support of vocational education. The study is structured in two parts. The first comprises two chapters: Chapter 1 introduces the theme, highlights its objectives and justifies its choice, also adding some definitions intended to lend greater contextual clarity to the development of the work; Chapter 2 further delimits the study, presenting some considerations on the Functions of the School, School Work and the Supervised Internship in the Company. The second part, also with two chapters (3 and 4), deals, in Chapter 3, with the characterization of the Federal Technical Schools: it discusses some dysfunctions practiced there, begins an illustrative study intended to introduce the aforementioned evaluation model, and situates the position reached by those schools with regard to the implementation of the so-called School-Company Integration Service. Chapter 4 is the proposal itself for the evaluation and control of school work through the Supervised Internship in the Company. The factors and parameters of the intended evaluation are presented, followed by a discussion of the application methodology, with details ranging from the classification of internship activities and tasks to their final analysis with grades and assessments.
As a complement to this proposal, the structuring of a functional body, similar to a DATABASE, is suggested, intended to systematize the use of the records of the evaluation process for the benefit of pedagogical action. As a justifying reference for this proposal to change the internship evaluation mechanisms, a preliminary survey, in the form of a diagnosis, is presented, developed at the school that served as the source for the aforementioned illustrative study. It is a sample of the situation found, and sufficient to justify the proposed changes, besides providing guiding input for their elaboration. The five annexes contain the proposed templates for the main internship evaluation instruments, as well as additional instructions for the student. The bibliography consulted is based essentially on the legislation in force for secondary education.
Abstract:
Asset allocation decisions and value-at-risk calculations rely strongly on volatility estimates. Volatility measures such as rolling window, EWMA, GARCH and stochastic volatility are used in practice. GARCH- and EWMA-type models, which incorporate the dynamic structure of volatility and are capable of forecasting the future behavior of risk, should perform better than constant, rolling-window volatility models. For the same asset, the model that is the ‘best’ according to some criterion can change from period to period. We use the reality check test to verify whether one model out-performs the others over a class of re-sampled time-series data. The test is based on re-sampling the data using stationary bootstrapping. For each re-sample we identify the ‘best’ model according to two criteria and analyze the distribution of the performance statistics. We compare constant volatility, EWMA and GARCH models using a quadratic utility function and a risk management measure as comparison criteria. No model consistently out-performs the benchmark.
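Two of the compared estimators are simple enough to sketch directly: the RiskMetrics-style EWMA recursion against a constant-weight rolling window. The decay factor, window length and synthetic returns below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
r = rng.normal(0, 0.01, 1000)  # synthetic daily returns, true vol 1%

def ewma_vol(returns, lam=0.94):
    """EWMA variance recursion (RiskMetrics-style):
    var[t] = lam * var[t-1] + (1 - lam) * returns[t-1]**2."""
    var = np.empty_like(returns)
    var[0] = returns.var()
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def rolling_vol(returns, window=60):
    """Constant-weight rolling-window benchmark (NaN until filled)."""
    out = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        out[t] = returns[t - window:t].std()
    return out

sigma_ewma = ewma_vol(r)
sigma_roll = rolling_vol(r)
print(sigma_ewma[-1], sigma_roll[-1])  # both near the true 0.01
```

On i.i.d. returns the two track each other; the paper's point is that on real data, where volatility clusters, their relative performance varies by period, motivating the bootstrap-based reality check.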
Abstract:
Quantifying country risk, and political risk in particular, raises several difficulties for companies, institutions and investors. Since economic indicators are updated far less frequently than Facebook, understanding, and more precisely measuring, what is happening on the ground in real time can be a challenge for political risk analysts. However, with the growing availability of "big data" from social tools such as Twitter, now is an opportune moment to examine the kinds of social-media metrics that are available and the limitations of their application to country risk analysis, especially during episodes of political violence. Using the qualitative method of bibliographic research, this study identifies the current landscape of data available from Twitter, analyzes current and potential methods of analysis, and discusses their possible application in the field of political risk analysis. After a thorough review of the field to date, and taking into account the technological advances expected in the short and medium term, this study concludes that, despite obstacles such as the cost of data storage, the limitations of real-time analysis, and the potential for data manipulation, the potential benefits of applying social-media metrics to the field of political risk analysis, particularly for structured-qualitative and quantitative models, clearly outweigh the challenges.
Abstract:
Nowadays, more than half of computer development projects fail to meet the final users' expectations. One of the main causes is insufficient knowledge about the organization of the enterprise to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proved to be a well-defined method to specify, through models and diagrams, the essence of any organization at a high level of abstraction. However, the methodology is platform-implementation independent, lacking the ability to save and propagate changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema used as the basis for a wiki system that allows the modeling of any organization, independently of its implementation, as well as the aforementioned change propagation in a runtime environment. Based on DEMO and UEAOM, this project aims to develop efficient, standardized methods for the automatic conversion of DEMO Ontological Models, based on the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems that support BPMN.
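The conversion target can be made concrete with a minimal sketch that emits a BPMN 2.0 XML skeleton for one DEMO transaction. This is a hypothetical illustration only: the element names follow the public BPMN 2.0 schema, but the DEMO input and the mapping are drastically simplified stand-ins, not the project's actual method or the UEAOM representation.

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
ET.register_namespace("bpmn", BPMN_NS)

def demo_to_bpmn(transaction: str, executor: str) -> ET.Element:
    """Map one DEMO transaction to a minimal BPMN process skeleton."""
    defs = ET.Element(f"{{{BPMN_NS}}}definitions")
    proc = ET.SubElement(defs, f"{{{BPMN_NS}}}process", id=f"proc_{transaction}")
    # For illustration, the DEMO request/promise/execute/accept pattern
    # is collapsed into a single BPMN task here.
    ET.SubElement(proc, f"{{{BPMN_NS}}}task",
                  id=f"task_{transaction}", name=f"{executor}: {transaction}")
    return defs

xml = ET.tostring(demo_to_bpmn("membership_start", "registrar"), encoding="unicode")
print(xml)
```

A real conversion would expand each transaction step into its own tasks, events and sequence flows, which is precisely where the unambiguous semantics the abstract calls for become essential.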
Abstract:
Growth curve models provide a visual assessment of growth as a function of time and allow prediction of body weight at a specific age. This study aimed at estimating the growth curve of tinamous using different models and at verifying their goodness of fit. A total of 11,639 weight records from 411 birds, 6,671 from females and 3,095 from males, were analyzed. The highest estimates of the a parameter were obtained using the Brody (BD), von Bertalanffy (VB), Gompertz (GP) and Logistic (LG) functions. Adult females were 5.7% heavier than males. The highest estimates of the b parameter were obtained in the LG, GP, BD and VB models. The estimated k parameter values in decreasing order were obtained in the LG, GP, VB and BD models. The correlation between the parameters a and k showed that heavier birds are less precocious than lighter ones. The estimates of the intercept, linear regression coefficient, quadratic regression coefficients, differences between the quadratic coefficients of the functions, and the estimated knots of the quadratic-quadratic-quadratic segmented polynomial (QQQSP) were 31.1732 +/- 2.41339; 3.07898 +/- 0.13287; 0.02689 +/- 0.00152; -0.05566 +/- 0.00193; 0.02349 +/- 0.00107; and 57 and 145 days, respectively. The estimated prediction mean error (PME) values of the VB, GP, BD, LG and QQQSP models were, respectively, 0.8353; 0.01715; -0.6939; -2.2453; and -0.7544%. The coefficient of determination (R²) and mean square error (MS) values showed similar results. The VB and QQQSP models adequately described tinamou growth, but the best model was the Gompertz model, because it presented the highest R² values, ease of convergence, a low PME, and ease of biological interpretation of the parameters.
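The biological interpretability credited to the Gompertz model comes from its closed form, sketched below: a is the asymptotic adult weight, k the maturity rate, and the inflection falls at t* = ln(b)/k, where weight equals a/e. The parameter values used here are illustrative, not the study's estimates.

```python
import numpy as np

def gompertz(t, a, b, k):
    """Gompertz growth curve: W(t) = a * exp(-b * exp(-k * t))."""
    return a * np.exp(-b * np.exp(-k * t))

# Illustrative parameters: a = asymptotic adult weight (g),
# b = scale constant, k = maturity rate (per day).
a, b, k = 700.0, 3.0, 0.03

t = np.arange(0, 301)
w = gompertz(t, a, b, k)

# Inflection point: t* = ln(b) / k, where W(t*) = a / e. A larger k
# (earlier inflection) indexes precocity, matching the a-k correlation
# discussed above: heavier birds (larger a) tend to have smaller k.
t_star = np.log(b) / k
print(t_star, gompertz(t_star, a, b, k), a / np.e)
```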