969 results for "Forecasting model"


Relevance: 30.00%

Abstract:

Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (number of model parameters) remains a major concern in relation to overfitting and, hence, transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a case study. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in the generation of overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
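As an illustrative sketch only (the study used Maxent itself, not the code below), the complexity-versus-performance trade-off can be reproduced with a plain logistic regression whose polynomial feature degree stands in for Maxent feature classes, scoring each candidate with AUC and AICc; all data here are synthetic.

```python
# Hedged sketch: compare presence/absence models of increasing complexity
# with AUC and AICc, as in Maxent model tuning. Synthetic data, and
# logistic regression stands in for Maxent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                 # two hypothetical climate predictors
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=500) > 0).astype(int)

n = len(y)
for degree in (1, 2, 3, 5):                   # increasing model complexity
    feats = PolynomialFeatures(degree).fit_transform(X)   # includes a bias column
    model = LogisticRegression(fit_intercept=False, max_iter=1000).fit(feats, y)
    p = np.clip(model.predict_proba(feats)[:, 1], 1e-9, 1 - 1e-9)
    log_lik = float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
    k = feats.shape[1]                        # parameter count (incl. bias column)
    aicc = 2 * k - 2 * log_lik + 2 * k * (k + 1) / (n - k - 1)
    auc = roc_auc_score(y, p)
    print(f"degree={degree}  k={k}  AUC={auc:.3f}  AICc={aicc:.1f}")
```

On in-sample data AUC keeps rising with degree, while AICc penalizes the extra parameters, which mirrors the study's finding that the apparently best current-day model is not the most transferable one.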

Relevance: 30.00%

Abstract:

The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood-prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of the current paper is to propose a methodology that improves radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars with 6-min temporal and 2 × 2 km² spatial resolution were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological behaviour of the basin.
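The Z/R relationships mentioned take the power-law form Z = a·R^b. A minimal sketch of applying separate convective and stratiform relationships to a reflectivity field follows; the coefficients are textbook values (Marshall-Palmer 200/1.6 and the common convective pair 300/1.4), not the ones fitted in the paper.

```python
# Hedged sketch of a Z/R conversion with separate convective and stratiform
# relationships (Z = a * R^b). Coefficients are illustrative textbook values,
# not those computed in the study.
import numpy as np

def rain_rate(dbz: np.ndarray, convective: np.ndarray) -> np.ndarray:
    """Convert reflectivity (dBZ) to rain rate (mm/h) pixel by pixel."""
    z = 10.0 ** (dbz / 10.0)                 # dBZ -> linear reflectivity Z (mm^6/m^3)
    a = np.where(convective, 300.0, 200.0)   # assumed convective / stratiform a
    b = np.where(convective, 1.4, 1.6)       # assumed convective / stratiform b
    return (z / a) ** (1.0 / b)              # invert Z = a * R^b

dbz = np.array([[25.0, 40.0], [48.0, 30.0]])  # toy composite-radar pixels
conv = dbz >= 38.0                            # crude convective mask by threshold
print(rain_rate(dbz, conv))
```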

Relevance: 30.00%

Abstract:

The aim of this thesis was to examine the factors affecting the forecasting accuracy of innovation diffusion models. In the thesis, the diffusion of mobile phone subscriptions was forecasted with a logistic model in three European countries: Finland, France and Greece. The theoretical part focused on forecasting the diffusion of innovations with diffusion models, with particular emphasis on the predictive power of the models and their usability in different situations. The empirical part concentrated on forecasting with a logistic diffusion model calibrated with time series aggregated in different ways; the resulting forecasts were examined to determine the effects of the level of data aggregation. The research design was empirical, comprising a study of the forecasting accuracy of the logistic diffusion model while varying the aggregation level of the sample data. The data fed into the diffusion model can be collected monthly and per operator without affecting forecasting accuracy. However, the data must include the inflection point of the diffusion curve, i.e. the point of long-term peak demand.
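A minimal sketch of the kind of logistic diffusion fit described, on synthetic monthly subscription counts; the parameterization N(t) = m / (1 + exp(-b(t - t0))) and the parameter names m, b, t0 are a standard choice assumed here, and per the thesis finding the calibration sample should contain the inflection point t0.

```python
# Hedged sketch: fit a logistic diffusion curve to cumulative subscriptions,
# then forecast beyond the sample. Data below is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, b, t0):
    """m: saturation level, b: growth rate, t0: inflection point."""
    return m / (1.0 + np.exp(-b * (t - t0)))

t = np.arange(48)                                       # 48 monthly observations
true = logistic(t, 5e6, 0.25, 24)
observed = true + np.random.default_rng(1).normal(0, 5e4, t.size)

params, _ = curve_fit(logistic, t, observed, p0=(observed.max() * 2, 0.1, t.mean()))
forecast = logistic(np.arange(48, 72), *params)         # 24-month-ahead forecast
print(f"estimated saturation m = {params[0]:.0f}, inflection t0 = {params[2]:.1f}")
```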

Relevance: 30.00%

Abstract:

The purpose of this thesis was to model an inventory management system suitable for the case company. The study began with an assessment of the current state of the case company's inventory management, after which different areas of inventory management were examined: inventory types, motives, objectives, demand forecasting and various inventory management tools. In addition, different inventory replenishment models were studied. The theoretical part also covered three different types of information system: an enterprise resource planning (ERP) system, an e-commerce system and a custom-built system. In the research plan, these three systems were set as the alternatives from which one would be chosen as the case company's inventory management system. Based on the theory and the current-state analysis, a framework was constructed presenting the data and functionality requirements of an inventory management system. These requirements were prioritized into four classes according to their criticality. The system alternatives were then evaluated against the criteria of the framework, according to how easily each requirement could be implemented in each alternative. The results were calculated on the basis of these evaluations, and their analysis showed that the ERP system would best suit the case company as an inventory management system.
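A hedged sketch of the weighted scoring such an evaluation implies follows; the criteria, criticality weights and ease-of-implementation scores below are invented for illustration and are not the thesis's actual framework.

```python
# Illustrative weighted decision matrix: criticality class -> weight (assumed),
# ease-of-implementation score per alternative on a 1-5 scale (invented).
CLASS_WEIGHTS = {1: 4, 2: 3, 3: 2, 4: 1}

scores = {
    ("demand forecasting",   1): {"ERP": 4, "e-commerce": 2, "custom": 5},
    ("replenishment models", 1): {"ERP": 5, "e-commerce": 2, "custom": 4},
    ("stock reporting",      2): {"ERP": 4, "e-commerce": 3, "custom": 4},
    ("web storefront",       3): {"ERP": 2, "e-commerce": 5, "custom": 3},
}

totals = {alt: 0 for alt in ("ERP", "e-commerce", "custom")}
for (criterion, crit_class), by_alt in scores.items():
    for alt, score in by_alt.items():
        totals[alt] += CLASS_WEIGHTS[crit_class] * score

print(max(totals, key=totals.get), totals)   # the highest weighted total wins
```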

Relevance: 30.00%

Abstract:

Seaports play an important part in the wellbeing of a nation. Many nations are highly dependent on foreign trade, and most trade is done using sea vessels. This study is part of a larger research project in which a simulation model is required in order to carry out further analyses of Finnish macro-logistical networks. The objective of this study is to create a system dynamics simulation model which gives an accurate forecast of the development of demand at Finnish seaports up to 2030. The emphasis of this study is on showing how a detailed harbor demand system dynamics model can be created with the help of statistical methods. The forecasting methods used were ARIMA (autoregressive integrated moving average) and regression models. The created simulation model gives a forecast with confidence intervals and allows different scenarios to be studied. The building process was found to be a useful one, and the built model can be expanded to be more detailed. The required capacity of other parts of the Finnish logistical system could easily be included in the model.
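A minimal sketch of an ARIMA forecast with confidence intervals of the kind that can feed such a model, using statsmodels on synthetic monthly cargo volumes; the (1, 1, 1) order is an assumption chosen for illustration, not the order used in the thesis.

```python
# Hedged sketch: ARIMA point forecast plus a 95% confidence band.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
tonnage = 100 + np.cumsum(rng.normal(0.5, 2.0, 120))  # 10 years of monthly tonnage

fit = ARIMA(tonnage, order=(1, 1, 1)).fit()           # order assumed for illustration
res = fit.get_forecast(steps=24)                      # 2-year-ahead forecast
mean = res.predicted_mean
lower, upper = res.conf_int(alpha=0.05).T             # 95% confidence band
print(mean[:3], lower[:3], upper[:3])
```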

Relevance: 30.00%

Abstract:

Most motor bodily injury (BI) claims are settled by negotiation, with fewer than 5% of cases going to court. A well-defined negotiation strategy is thus very useful for insurance companies. In this paper we assume that the monetary compensation awarded in court is the upper amount to be offered by the insurer in the negotiation process. Using a real database, a log-linear model is implemented to estimate the maximal offer. Non-spherical disturbances are detected. Correlation occurs when various claims are settled in the same judicial verdict. Group-wise heteroscedasticity is due to the influence of the forensic valuation on the final compensation amount. An alternative approximation based on generalized inference theory is applied to estimate confidence intervals on variance components, since classical interval estimates may be unreliable for datasets with unbalanced structures.
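A hedged sketch of the log-linear setup on synthetic data: regress the log court award on claim characteristics and cluster the standard errors by verdict, reflecting the within-verdict correlation the paper detects. The covariate names (forensic_score, victim_age) are invented, and clustering is one standard remedy, not necessarily the paper's exact estimator.

```python
# Hedged sketch: log-linear award model with verdict-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "verdict_id": rng.integers(0, 80, n),      # several claims can share a verdict
    "forensic_score": rng.integers(1, 8, n),   # hypothetical injury severity score
    "victim_age": rng.integers(18, 80, n),
})
df["log_award"] = (8 + 0.35 * df["forensic_score"] - 0.01 * df["victim_age"]
                   + rng.normal(0, 0.4, n))

fit = smf.ols("log_award ~ forensic_score + victim_age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["verdict_id"]})
print(fit.params)                              # exp(prediction) ~ maximal offer
```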

Relevance: 30.00%

Abstract:

In the present study, onion plants were tested under controlled conditions for the development of a climate model based on the influence of temperature (10, 15, 20 and 25°C) and leaf wetness duration (6, 12, 24 and 48 hours) on the severity of Botrytis leaf blight of onion caused by Botrytis squamosa. The relative lesion density was influenced by temperature and leaf wetness duration (P < 0.05). The disease was most severe at 20°C. Data were subjected to nonlinear regression analysis. A generalized beta function was used to fit the severity and temperature data, while a logistic function was chosen to represent the effect of leaf wetness on the severity of Botrytis leaf blight. The response surface obtained by the product of the two functions was expressed as ES = 0.008192 * ((x - 5)^1.01089) * ((30 - x)^1.19052) * (0.33859 / (1 + 3.77989 * exp(-0.10923 * y))), where ES represents the estimated severity value (0-1); x, the temperature (°C); and y, the leaf wetness duration (in hours). This climate model should be validated under field conditions to verify its use in a computational system for the forecasting of Botrytis leaf blight in onion.
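The quoted response surface can be implemented directly, as in the sketch below. One assumption of ours: outside the fitted temperature range the beta term is undefined, so severity is clamped to zero there.

```python
# The response surface quoted in the abstract, implemented directly.
import math

def estimated_severity(x: float, y: float) -> float:
    """x: temperature in deg C, y: leaf wetness duration in hours."""
    if not 5.0 < x < 30.0:
        return 0.0                               # assumption: beta term undefined outside (5, 30)
    beta = 0.008192 * (x - 5.0) ** 1.01089 * (30.0 - x) ** 1.19052
    logistic = 0.33859 / (1.0 + 3.77989 * math.exp(-0.10923 * y))
    return beta * logistic

print(estimated_severity(20.0, 24.0))            # severity peaks near 20 deg C
```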

Relevance: 30.00%

Abstract:

The choice of industrial development options and the related allocation of research funds become more and more difficult because of increasing R&D costs and pressure for shorter development periods. Forecasts of research progress are based on the analysis of publication activity in the field of interest as well as on the dynamics of its change. Moreover, the allocation of funds is hindered by the exponential growth in the number of publications and patents. Thematic clusters become more and more difficult to identify, and their evolution hard to follow. The existing approaches to structuring a research field and identifying its development are very limited: they do not identify thematic clusters with adequate precision, while the identified trends are often ambiguous. There is therefore a clear need to develop methods and tools able to identify developing fields of research. The main objective of this thesis is to develop tools and methods that help in the identification of promising research topics in the field of separation processes. Two structuring methods as well as three approaches for identifying development trends have been proposed. The proposed methods have been applied to the analysis of research on distillation and filtration. The results show that the developed methods are universal and can be used to study various fields of research. The identified thematic clusters and the forecasted trends of their development were confirmed in almost all tested cases, which demonstrates the universality of the proposed methods. The results allow the identification of fast-growing scientific fields as well as topics characterized by stagnant or diminishing research activity.
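The abstract does not specify the proposed structuring methods, so the sketch below is a generic baseline only, not the thesis's technique: thematic clustering of publication titles via TF-IDF vectors and k-means, using invented distillation and filtration titles.

```python
# Generic illustration of thematic clustering of publication titles;
# not the structuring methods proposed in the thesis.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "reactive distillation of esters", "dividing wall distillation column design",
    "membrane filtration of wastewater", "cake filtration model for slurries",
    "energy integration of distillation sequences", "depth filtration in bioprocessing",
]
X = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for title, label in zip(titles, labels):
    print(label, title)              # clusters roughly separate the two topics
```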

Relevance: 30.00%

Abstract:

The objective of this Master's thesis is to create a calculation model for working capital management in value chains. The study was executed using a literature review and constructive research methods, the latter consisting mainly of modeling. The theory in this thesis is grounded in research articles and management literature. The model is developed for students and researchers, who can use it for working capital management and for comparing firms with each other; it can also be used for cash management. The model shows who benefits and who suffers most in the value chain, and it makes the cash flows of companies and value chains visible. With the model, the user can check whether the set targets are actually achieved, observe the amount of operational working capital, and simulate that amount. The created model is based on the cash conversion cycle, return on investment and cash flow forecasting. The model was tested with carefully considered, realistic figures. The modeled value chain is literally a chain. Implementing the model requires that the user has some understanding of working capital management and certain figures from the balance sheet and income statement. By using the model, users can improve their knowledge of working capital management in value chains.
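A minimal sketch of the model's central metric, the cash conversion cycle (CCC = DIO + DSO - DPO), computed per firm in a value chain from balance sheet and income statement figures; all numbers below are invented.

```python
# Hedged sketch: cash conversion cycle per firm in a value chain.
def ccc(inventory, receivables, payables, cogs, sales, days=365):
    """CCC = DIO + DSO - DPO, all expressed in days."""
    dio = days * inventory / cogs          # days inventory outstanding
    dso = days * receivables / sales       # days sales outstanding
    dpo = days * payables / cogs           # days payables outstanding
    return dio + dso - dpo

chain = {
    "supplier":     ccc(inventory=80,  receivables=120, payables=60,  cogs=700,  sales=900),
    "manufacturer": ccc(inventory=150, receivables=200, payables=110, cogs=1200, sales=1600),
    "retailer":     ccc(inventory=90,  receivables=30,  payables=100, cogs=1000, sales=1300),
}
print(chain)   # the firm with the longest CCC ties up the most working capital
```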

Relevance: 30.00%

Abstract:

In this master's thesis, wind speeds and directions were modeled with the aim of developing suitable models for hourly, daily, weekly and monthly forecasting. Artificial neural networks implemented in MATLAB were used to perform the forecasts. Three main types of artificial neural network were built: feed-forward neural networks, Jordan Elman neural networks and cascade-forward neural networks. Four sub-models of each of these neural networks were also built, corresponding to the four forecast horizons, for both wind speeds and directions. A single neural network topology was used for each forecast horizon, regardless of the model type. All the models were then trained with real data on wind speeds and directions collected over a period of two years in the municipal region of Puumala in Finland. Only 70% of the data was used for training, validation and testing of the models; the second-to-last 15% of the data was then presented to the trained models for verification, and the model outputs were compared with the last 15% of the original data by measuring the mean square errors and sum square errors between them. Based on the results, the feed-forward networks returned the lowest generalization errors for hourly, weekly and monthly forecasts of wind speeds, while Jordan Elman networks returned the lowest errors for forecasting daily wind speeds. Cascade-forward networks gave the lowest errors for forecasting daily, weekly and monthly wind directions, while Jordan Elman networks returned the lowest errors for hourly forecasting. The errors were relatively low during training of the models but shot up upon simulation with new inputs. In addition, a combination of hyperbolic tangent transfer functions for both hidden and output layers returned better results than other combinations of transfer functions. In general, wind speeds were more predictable than wind directions, opening up opportunities for further research into building better models for wind direction forecasting.
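A hedged sketch, not the thesis's MATLAB code: a feed-forward network with hyperbolic tangent hidden activations trained on lagged wind speeds for one-hour-ahead forecasting. sklearn's MLPRegressor stands in for the MATLAB toolbox (its output layer is linear, unlike the tanh output the thesis preferred), and the data is synthetic.

```python
# Hedged sketch: feed-forward net on 24 lagged hourly wind speeds.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(2 * 365 * 24)                                    # two years of hourly data
speed = 6 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)

lags = 24
X = np.column_stack([speed[i:i - lags] for i in range(lags)])  # 24 lagged inputs
y = speed[lags:]                                               # one-hour-ahead target

split = int(0.7 * len(y))                                      # 70% for training, as in the thesis
model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                     max_iter=500, random_state=0).fit(X[:split], y[:split])
mse = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
print(f"verification MSE: {mse:.3f}")
```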

Relevance: 30.00%

Abstract:

The desire to create a statistical or mathematical model that would allow predicting future changes in stock prices was born many years ago. Economists and mathematicians have been trying to solve this task by applying statistical analysis and physical laws, but there are still no satisfactory results. The main reason for this is that a stock exchange is a non-stationary, unstable and complex system influenced by many factors. In this thesis, the New York Stock Exchange was considered as the system to be explored. A topological analysis, basic statistical tools and singular value decomposition were used to understand the behavior of the market. Two methods for normalizing the initial daily closing prices by the Dow Jones and S&P 500 indices were introduced and applied for further analysis. As a result, some unexpected features were identified, such as the shape of the distribution of correlation-matrix entries, the bulk of which is shifted to the right of zero. The non-ergodicity of the NYSE was also confirmed graphically, and it was shown that the singular vectors differ from each other by a constant factor. No definitive conclusions can be drawn from these results, but the work creates a good basis for further analysis of the market topology.
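A hedged sketch of the kind of analysis described: build the correlation matrix of daily returns and inspect its spectrum with singular value decomposition. Prices here are simulated with a common market factor; the thesis used actual NYSE data.

```python
# Hedged sketch: return correlation matrix and its SVD spectrum.
import numpy as np

rng = np.random.default_rng(5)
n_stocks, n_days = 50, 1000
market = rng.normal(0, 0.01, n_days)                     # common market factor
returns = 0.7 * market + rng.normal(0, 0.01, (n_stocks, n_days))

corr = np.corrcoef(returns)                              # n_stocks x n_stocks
u, s, vt = np.linalg.svd(corr)

# A dominant market mode pushes the bulk of off-diagonal correlations above zero,
# echoing the right-shifted distribution reported in the abstract.
off_diag = corr[~np.eye(n_stocks, dtype=bool)]
print(f"mean off-diagonal correlation: {off_diag.mean():.3f}")
print(f"largest singular value: {s[0]:.2f} (vs ~1 for uncorrelated stocks)")
```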

Relevance: 30.00%

Abstract:

The growing population in cities increases energy demand and affects the environment by increasing carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand in cities and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy with combined heat and power plants. Forecasting the heat load demand in residential buildings assists in optimizing energy production and consumption in a district heating system. However, the large number of factors involved, such as weather forecasts, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model, using a Naive Bayes classifier, to forecast the hourly heat load demand for three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from the sensors installed in the residential buildings in Skellefteå is utilized to build the Bayesian network and forecast the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated using four cases that study the influence of various parameters on the heat load forecast, through trace-driven analysis in Weka and GeNIe. Results show that current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% for the three buildings across both seasons for the 1-hour forecast horizon while utilizing only 10% of the training data. The results indicate that even a simple model like a Naive Bayes classifier can forecast the heat load demand while utilizing little training data.
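A hedged sketch of the approach on synthetic data, with sklearn standing in for Weka/GeNIe: a Naive Bayes classifier forecasts the next hour's heat load class from the two parameters the thesis found most influential, current heat load and the outdoor temperature forecast. The class boundaries are invented.

```python
# Hedged sketch: Naive Bayes 1-hour-ahead heat load class forecast.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
n = 2000
temp_forecast = rng.uniform(-25, 10, n)                  # outdoor temperature (deg C)
current_load = 60 - 1.5 * temp_forecast + rng.normal(0, 5, n)
next_load = current_load + rng.normal(0, 4, n)           # next hour's load (kW)

bins = [40, 70, 100]                                     # invented kW class boundaries
y = np.digitize(next_load, bins)                         # discretized heat load class
X = np.column_stack([current_load, temp_forecast])

split = int(0.9 * n)
model = GaussianNB().fit(X[:split], y[:split])
accuracy = model.score(X[split:], y[split:])
print(f"1-hour-ahead class accuracy: {accuracy:.2%}")
```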

Relevance: 30.00%

Abstract:

The case company in this study is a large industrial engineering company whose business is largely based on delivering a wide range of engineering projects. The aim of this study is to create and develop a fairly simple Excel-based tool for the sales department. The tool's main function is to estimate and visualize the profitability of various small projects. The study also aims to find other possible, more long-term solutions for tackling the problem in the future. The study is highly constructive and descriptive, as it focuses on a development task and on the creation of a new operating model. The developed tool focuses on estimating the profitability of the small orders of the selected project portfolio currently in the bidding phase (prospects) and will help the case company in the monthly reporting of sales figures. The tool analyses the profitability of a given project by calculating its fixed and variable costs, and from these the gross margin and operating profit. The bidding phase of small projects is a phase that has not been fully covered by the existing tools within the case company. The project portfolio tool can be taken into use immediately within the case company, and it will provide a fairly accurate estimate of the profitability figures of recently sold small projects.
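A hedged sketch of the tool's core arithmetic (the thesis implements it in Excel, not Python): a project's gross margin is revenue minus variable costs, and operating profit subtracts the allocated fixed costs from the gross margin. The figures below are invented.

```python
# Hedged sketch of the profitability arithmetic behind the Excel tool.
def project_profitability(revenue, variable_costs, allocated_fixed_costs):
    gross_margin = revenue - variable_costs
    operating_profit = gross_margin - allocated_fixed_costs
    return {
        "gross_margin": gross_margin,
        "gross_margin_pct": 100 * gross_margin / revenue,
        "operating_profit": operating_profit,
        "operating_profit_pct": 100 * operating_profit / revenue,
    }

print(project_profitability(revenue=250_000, variable_costs=170_000,
                            allocated_fixed_costs=40_000))
```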

Relevance: 30.00%

Abstract:

This thesis introduces heat demand forecasting models generated using data mining algorithms. The forecast spans one full day, and this forecast can be used in regulating the heat consumption of buildings. For training the data mining models, two years of heat consumption data from a case building and weather measurement data from the Finnish Meteorological Institute are used. The thesis utilizes Microsoft SQL Server Analysis Services data mining tools to generate the data mining models and the CRISP-DM process framework to implement the research. Results show that the built models can predict heat demand at best with mean absolute percentage errors (MAPE) of 3.8% for the 24-h profile and 5.9% for the full day. A deployment model for integrating the generated data mining models into an existing building energy management system is also discussed.
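A minimal sketch of how the reported error metric is computed for a 24-hour heat demand forecast; the demand and forecast values below are invented.

```python
# Hedged sketch: MAPE of a 24-hour heat demand forecast.
import numpy as np

actual = np.array([42, 40, 39, 38, 38, 41, 47, 55, 58, 56, 52, 50,
                   49, 48, 48, 50, 53, 57, 59, 56, 52, 48, 45, 43], float)
forecast = actual * (1 + np.random.default_rng(7).normal(0, 0.04, 24))

mape = 100 * np.mean(np.abs((actual - forecast) / actual))
print(f"24-h profile MAPE: {mape:.1f}%")   # the thesis reports 3.8% at best
```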

Relevance: 30.00%

Abstract:

Already one-third of the human population uses social media on a daily basis. The biggest social networking site, Facebook, has over a billion monthly users. As a result, social media services are now recording unprecedented amounts of data on human behavior. The phenomenon has certainly caught the attention of scholars, businesses and governments alike, and organizations around the globe are trying to explore new ways to benefit from the massive databases. One emerging field of research is the use of social media in forecasting: the goal is to use data gathered from online services to predict offline phenomena. Predicting the results of elections is a prominent example of forecasting with social media, but despite numerous attempts, no reliable technique has been established. The objective of this research is to analyze how accurately the results of parliamentary elections can be forecasted using social media. The research examines whether Facebook "likes" can be effectively used for predicting the outcome of the Finnish parliamentary elections that took place in April 2015. First, a tool for gathering data from Facebook was created; then the data was used to create an electoral forecast; finally, the forecast was compared with the official results of the elections. The data used in the research was gathered from the Facebook walls of all the candidates who were running in the parliamentary elections and had a valid Facebook page. The final sample represents 1,131 candidates and over 750,000 Facebook "likes". The results indicate that a forecast based solely on Facebook "likes" is not accurate: the forecast model predicted very dramatic changes to the Finnish political landscape, while the official results of the elections were rather moderate. However, a clear statistical relationship between "likes" and votes was discovered. In conclusion, it is apparent that citizens and other key actors of society are using social media at an increasing rate. However, the volume of the data does not directly increase the quality of the forecast, and the study faced several other limitations that should be addressed in future research. Nonetheless, the discovered positive correlation between "likes" and votes is valuable information that can be used in future studies. It is evident that Facebook "likes" alone are not accurate enough, and a meaningful forecast would require additional parameters.
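A hedged sketch of the forecasting idea the abstract describes: treat each party's share of candidate "likes" as a forecast of its vote share, then check the like-vote relationship with a correlation coefficient. All counts below are invented and the aggregation to party level is our assumption.

```python
# Hedged sketch: like-share vote forecast plus the like-vote correlation.
import numpy as np

parties = ["A", "B", "C", "D"]
likes = np.array([120_000, 95_000, 180_000, 60_000], float)    # summed candidate likes
votes = np.array([510_000, 430_000, 610_000, 300_000], float)  # official vote counts

like_share = likes / likes.sum()
vote_share = votes / votes.sum()
forecast_error = 100 * np.abs(like_share - vote_share)         # percentage-point error

r = np.corrcoef(likes, votes)[0, 1]   # the thesis found a clear positive relationship
print(dict(zip(parties, forecast_error.round(1))), f"r = {r:.2f}")
```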