891 results for rolling forecasting
Abstract:
After the introduction of the Basel 2 capital accord, banks and lenders in Hungary also began to build up their internal rating systems, whose maintenance and development are a continuing task. The author explores whether the predictive capacity of business-failure forecasting models can be increased with traditional mathematical and statistical methods by incorporating the measure of change in the financial indicators over time. Empirical findings suggest that the temporal development of the financial indicators of Hungarian firms carries important information about their future ability to pay, since the predictive capacity of bankruptcy forecasting models is greatly increased by using such indicators. The author also examines whether the classification performance of the models can be improved by correcting extremely high or low observation values before modelling.
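As a rough illustration of the idea in the abstract above, the sketch below adds year-over-year changes of financial ratios as extra predictors alongside their levels in a simple bankruptcy classifier. The ratio names, the toy data, and the logistic-regression choice are illustrative assumptions, not the author's actual model or dataset.

```python
# Illustrative sketch: augmenting a bankruptcy model with the change of
# financial ratios over time. Column names and data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "liquidity_t":  [1.2, 0.8, 1.5, 0.6, 1.1, 0.7],            # ratio in the latest year
    "liquidity_t1": [1.0, 0.9, 1.3, 0.9, 1.0, 1.1],            # ratio one year earlier
    "roa_t":        [0.05, -0.02, 0.08, -0.10, 0.03, -0.04],
    "roa_t1":       [0.04, 0.01, 0.07, -0.02, 0.02, 0.03],
    "defaulted":    [0, 1, 0, 1, 0, 1],
})

# Levels plus their change over time as predictors
df["d_liquidity"] = df["liquidity_t"] - df["liquidity_t1"]
df["d_roa"] = df["roa_t"] - df["roa_t1"]

X = df[["liquidity_t", "roa_t", "d_liquidity", "d_roa"]]
y = df["defaulted"]

model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(2))))
```

Comparing this specification against one that drops the change (d_*) columns mirrors the comparison the abstract describes.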
Abstract:
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. The scheme is derived from the conventional kernel-estimator-based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs' traffic patterns with historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville City. Experimental results show that the proposed scheme significantly reduces both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high-quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This method exploits the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error in the estimated distance, and a direction filter is developed to clean joints that have a negative influence on localization accuracy. Synthetic experiments in urban, suburban, and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the probe's localization accuracy is notably improved by the DCSEEL algorithm. Additionally, a fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a one-dimensional (1-D) curve-matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to replace the Pearson product-moment correlation coefficient (PMCC) algorithm so that the FCD method can meet its real-time requirement. The fast correlation technique significantly reduces the computational cost without affecting the accuracy of the matching process.
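The final step above turns map matching into a 1-D curve-matching problem scored by normalized cross-correlation. The sketch below is a naive NumPy version of that scoring (equivalent to computing PMCC at each offset); the actual FNCC algorithm avoids recomputing the normalization at every shift with running-sum tables, and the curves here are synthetic.

```python
import numpy as np

def normalized_xcorr(reference, query):
    """Normalized cross-correlation of a short query curve against every
    offset of a longer reference curve (naive version, for illustration)."""
    n = len(query)
    q = query - query.mean()
    q_norm = np.linalg.norm(q)
    scores = np.empty(len(reference) - n + 1)
    for i in range(len(scores)):
        window = reference[i:i + n]
        w = window - window.mean()
        denom = np.linalg.norm(w) * q_norm
        scores[i] = (w @ q) / denom if denom > 0 else 0.0
    return scores

# Hypothetical example: locate where the query best matches the reference curve
reference = np.sin(np.linspace(0, 6 * np.pi, 300))
query = np.sin(np.linspace(2 * np.pi, 3 * np.pi, 50))
scores = normalized_xcorr(reference, query)
print("best offset:", scores.argmax(), "score:", round(scores.max(), 3))
```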
Abstract:
Extensive data sets on water quality and seagrass distributions in Florida Bay have been assembled under complementary, but independent, monitoring programs. This paper presents the landscape-scale results from these monitoring programs and outlines a method for exploring the relationships between two such data sets. Seagrass species occurrence and abundance data were used to define eight benthic habitat classes from 677 sampling locations in Florida Bay. Water quality data from 28 monitoring stations spread across the Bay were used to construct a discriminant function model that assigned a probability of a given benthic habitat class occurring for a given combination of water quality variables. Mean salinity, salinity variability, the amount of light reaching the benthos, sediment depth, and mean nutrient concentrations were important predictor variables in the discriminant function model. Using a cross-validated classification scheme, this discriminant function identified the most likely benthic habitat type as the actual habitat type in most cases. The model predicted that the distribution of benthic habitat types in Florida Bay would likely change if water quality and water delivery were changed by human engineering of freshwater discharge from the Everglades. Specifically, an increase in the seasonal delivery of freshwater to Florida Bay should cause an expansion of seagrass beds dominated by Ruppia maritima and Halodule wrightii at the expense of the Thalassia testudinum-dominated community that now occurs in northeast Florida Bay. These statistical techniques should prove useful for predicting landscape-scale changes in community composition in diverse systems where communities are in quasi-equilibrium with environmental drivers.
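A minimal sketch of the analysis pattern described above: fit a linear discriminant function on water-quality predictors, use cross-validation to assign each sample its most likely benthic habitat class, and read off class probabilities for a new combination of values. The predictor names, class count, and random data are placeholders, not the Florida Bay monitoring data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical predictors: mean salinity, salinity variability, light at the benthos, sediment depth
X = rng.normal(size=(200, 4))
# Hypothetical benthic habitat class labels (four classes for illustration)
y = rng.integers(0, 4, size=200)

lda = LinearDiscriminantAnalysis()
# Cross-validated class assignment, analogous to the paper's classification scheme
y_pred = cross_val_predict(lda, X, y, cv=10)
print("cross-validated agreement:", (y_pred == y).mean())

# Class membership probabilities for a new combination of water-quality values
lda.fit(X, y)
print(lda.predict_proba(X[:1]).round(3))
```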
Abstract:
Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change, but like other models it must be fed important information about which particular lands in the modeled area are available for development. Some of these lands fall into growth-exclusion categories that are difficult to quantify because their function is dictated by policy. One such category includes voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Because they are voluntary, today's excluded lands may become available for development at some point in the future. Mapping the shifting mosaic of parcels enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act into SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results not only demonstrate a successful execution of this method but also yield high goodness-of-fit metrics for both the calibration of enrollment termination and the urban growth modeling itself.
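The exclusion-layer mechanics can be sketched as simple raster arithmetic: parcels enrolled in a voluntary program receive a high but not absolute resistance value, reflecting that enrollment may terminate. The 0-100 value convention, the specific weights, and the toy rasters below are assumptions for illustration, not the study's actual SLEUTH inputs.

```python
import numpy as np

# Hypothetical rasters on the same grid: 1 where development is legally impossible
# (water, protected land), and 1 where a parcel is enrolled in a voluntary
# differential-assessment program such as the Williamson Act.
hard_excluded = np.zeros((100, 100), dtype=np.uint8)
enrolled = np.zeros((100, 100), dtype=np.uint8)
hard_excluded[:10, :] = 1
enrolled[40:60, 40:60] = 1

# Assumed convention: 100 = fully excluded, intermediate values = partial
# resistance to urbanization. Enrolled parcels get a high but not absolute value
# because enrollment is voluntary and can terminate in the future.
excluded_layer = np.zeros((100, 100), dtype=np.uint8)
excluded_layer[hard_excluded == 1] = 100
excluded_layer[(enrolled == 1) & (hard_excluded == 0)] = 80

print(np.unique(excluded_layer, return_counts=True))
```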
Abstract:
Background: Type 2 diabetes mellitus (T2DM) is increasingly becoming a major public health problem worldwide. Estimating the future burden of diabetes is instrumental to guide the public health response to the epidemic. This study aims to project the prevalence of T2DM among adults in Syria over the period 2003–2022 by applying a modelling approach to the country's own data. Methods: Future prevalence of T2DM in Syria was estimated among adults aged 25 years and older for the period 2003–2022 using the IMPACT Diabetes Model (a discrete-state Markov model). Results: According to our model, the prevalence of T2DM in Syria is projected to double between 2003 and 2022 (from 10% to 21%). The projected increase in T2DM prevalence is higher in men (148%) than in women (93%). The increase in prevalence of T2DM is expected to be most marked in people younger than 55 years, especially the 25–34 years age group. Conclusions: The future projections of T2DM in Syria place it among the countries with the highest levels of T2DM worldwide. It is estimated that by 2022 approximately a fifth of the Syrian population aged 25 years and older will have T2DM.
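A toy discrete-state Markov projection in the spirit of the modelling approach described above: each year the population moves between "no diabetes", "diabetes", and "dead" states, and prevalence among the living is read off the state vector. The transition probabilities and starting distribution below are made-up placeholders, not the IMPACT Diabetes Model inputs for Syria.

```python
import numpy as np

# States: 0 = no T2DM, 1 = T2DM, 2 = dead (absorbing)
# Hypothetical annual transition probabilities (placeholders, not study inputs)
P = np.array([
    [0.975, 0.015, 0.010],   # no T2DM -> stays healthy, develops T2DM, dies
    [0.000, 0.975, 0.025],   # T2DM    -> stays with T2DM, dies
    [0.000, 0.000, 1.000],   # dead    -> stays dead
])

state = np.array([0.90, 0.10, 0.00])   # initial distribution, e.g. 10% prevalence
for year in range(2003, 2023):
    alive = state[0] + state[1]
    print(year, round(state[1] / alive, 3))   # prevalence among the living
    state = state @ P                         # advance one year
```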
Abstract:
Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably and sometimes better than temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared to other studies to determine whether ecosystem features influence whether space-for-time substitution is feasible. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and small lag in organismal response to the driver variable.
Abstract:
The purpose of this study is to adapt and combine the following methods of sales forecasting: Classical Time-Series Decomposition, Operationally Based Data and Judgmental Forecasting for use by military club managers.
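A brief sketch of the classical decomposition component on synthetic monthly sales, using statsmodels: split the series into trend, seasonal, and residual parts, then combine an extrapolated trend level with next month's seasonal factor and apply a judgmental adjustment. The data and the 5% adjustment are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly club sales with trend and seasonality
idx = pd.date_range("2018-01-01", periods=48, freq="MS")
sales = pd.Series(1000 + 5 * np.arange(48)
                  + 100 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + np.random.default_rng(0).normal(0, 20, 48), index=idx)

decomp = seasonal_decompose(sales, model="additive", period=12)

# Naive next-month forecast: last available trend level plus next month's
# seasonal factor, then a judgmental adjustment (e.g. a planned event
# expected to lift sales by 5%)
trend_level = decomp.trend.dropna().iloc[-1]
next_season = decomp.seasonal.iloc[len(sales) % 12]
judgmental_factor = 1.05
forecast = (trend_level + next_season) * judgmental_factor
print(round(forecast, 1))
```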
Abstract:
The major objectives of this thesis were to determine whether foam rolling had any effect on antagonist muscle activation and whether those changes would alter muscular co-activation patterns. The results from this thesis, along with the current literature, will help clinicians develop adequate exercise prescription for rehabilitative and pre-activity purposes. The existing literature has shown that foam rolling or roller massagers can increase range of motion (ROM), improve performance, and alter pain perception; however, little research exists regarding changes in muscle activation following foam rolling. This study developed a reliable method for measuring muscle activation around the knee joint and, using that method, found that foam rolling the quadriceps can impair hamstring muscle activation, likely because of the greater levels of perceived pain when rolling the quadriceps.
Abstract:
Election forecasting models assume retrospective economic voting and clear mechanisms of accountability. Previous research indeed indicates that incumbent political parties are held accountable for the state of the economy. In this article we develop a 'hard case' for the assumptions of election forecasting models. Belgium is a multiparty system with perennial coalition governments. Furthermore, Belgium has two completely segregated party systems (Dutch- and French-language). Since the prime minister throughout the period 1974-2011 was always a Dutch-language politician, French-language voters could not even vote for the prime minister, so this cognitive shortcut to establishing political accountability is not available. Results of an analysis for the French-speaking parties (1981-2010) show that even under these conditions of opaque accountability, retrospective economic voting occurs, as election results respond to indicators of GDP and unemployment levels. Party membership figures can be used to model the popularity function in election forecasting.
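The last two sentences describe a vote/popularity function driven by macroeconomic indicators and party membership. A minimal regression sketch of such a function is below; the numbers are invented placeholders, not the 1981-2010 Belgian data, and the specification (change in vote share on GDP growth, unemployment, and membership change) is only one plausible form.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical election-level observations (placeholders, not the Belgian series)
df = pd.DataFrame({
    "vote_share_change": [-1.2, 0.8, 2.1, -0.5, 1.4, -2.0, 0.3],
    "gdp_growth":        [ 0.5, 2.1, 3.0, 1.0, 2.5, -0.8, 1.5],
    "unemployment":      [ 9.5, 8.2, 7.1, 8.8, 7.5, 10.2, 8.9],
    "membership_change": [-3.0, 1.5, 4.0, -1.0, 2.0, -5.0, 0.5],
})

# Popularity/vote function: vote-share change regressed on the economy and membership
X = sm.add_constant(df[["gdp_growth", "unemployment", "membership_change"]])
model = sm.OLS(df["vote_share_change"], X).fit()
print(model.params)
```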
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. The mathematical model is then used to find the bowl profile that minimizes response acceleration subject to a displacement constraint.
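The core computation described above, choosing coordinate accelerations that minimize Gauss's constraint function subject to linear acceleration-level constraints, amounts to solving an equality-constrained quadratic program, for example via the KKT system. The matrices below are hypothetical placeholders, not the thesis's rolling-platform model.

```python
import numpy as np

def constrained_accelerations(M, f, A, b):
    """Accelerations minimizing Gauss's constraint function
    (1/2)(a - M^{-1} f)^T M (a - M^{-1} f) subject to A @ a = b,
    solved via the KKT system [[M, A^T], [A, 0]] [a, lam] = [f, b]."""
    n, m = M.shape[0], A.shape[0]
    K = np.block([[M, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([f, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

# Hypothetical 3-coordinate system with one acceleration-level constraint
M = np.diag([2.0, 2.0, 1.0])            # mass matrix
f = np.array([0.0, -9.81 * 2.0, 0.0])   # applied forces
A = np.array([[1.0, 1.0, 0.0]])         # constraint Jacobian (A @ a = b)
b = np.array([0.0])

print(constrained_accelerations(M, f, A, b))
```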
Abstract:
Purpose: The purpose of the study is to review recent studies published from 2007-2015 on tourism and hotel demand modeling and forecasting, with a view to identifying the emerging topics and methods studied and pointing out future research directions in the field. Design/methodology/approach: Articles on tourism and hotel demand modeling and forecasting published in both Science Citation Index (SCI) and Social Science Citation Index (SSCI) journals were identified and analyzed. Findings: This review found that studies focusing on hotel demand are relatively fewer than those on tourism demand. It is also observed that more and more studies have moved away from aggregate tourism demand analysis, while disaggregate markets and niche products have attracted increasing attention. Some studies have gone beyond neoclassical economic theory to seek additional explanations of the dynamics of tourism and hotel demand, such as environmental factors, tourist online behavior and consumer confidence indicators, among others. More sophisticated techniques such as nonlinear smooth transition regression, the mixed-frequency modeling technique and nonparametric singular spectrum analysis have also been introduced to this research area. Research limitations/implications: The main limitation of this review is that the articles included cover only the English-language literature. Future reviews of this kind should also include articles published in other languages. The review provides a useful guide for researchers who are interested in future research on tourism and hotel demand modeling and forecasting. Practical implications: This review provides important suggestions and recommendations for improving the efficiency of tourism and hospitality management practices. Originality/value: The value of this review is that it identifies the current trends in tourism and hotel demand modeling and forecasting research and points out future research directions.
Abstract:
Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Because wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. Furthermore, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm to show the effectiveness of the proposed method.
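A minimal Gaussian Process time-series forecasting sketch: regress wind power on its own lagged values and produce a one-step-ahead prediction with an uncertainty estimate. This uses scikit-learn's standard GP regressor on synthetic data, not the variant Gaussian Process or the TLBO training scheme proposed in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Synthetic "wind power" series (placeholder for measured generation data)
power = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)

# Autoregressive design matrix: predict power[t] from the previous 5 observations
lags = 5
X = np.array([power[t - lags:t] for t in range(lags, len(power))])
y = power[lags:]

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:-1], y[:-1])

# One-step-ahead forecast with a predictive uncertainty band
mean, std = gp.predict(X[-1:], return_std=True)
print(f"forecast: {mean[0]:.3f} +/- {2 * std[0]:.3f}, actual: {y[-1]:.3f}")
```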
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Though much is available in research journals, the practice of PPC lags behind and makes little use of published research. PPC practices in SMEs lag behind for many reasons, which need to be explored. This research examines the effect on firm performance of variables such as the forecasting, planning, and control methods adopted, the demographics of the key person, the standardization practices followed, and the effect of training, learning, and IT usage. A model and framework have been developed based on the literature. The model was tested empirically after collecting data using a questionnaire schedule administered among selected respondents from Small and Medium Enterprises (SMEs) in India; the final data set included 382 responses. Hypotheses linking SME performance with the use of forecasting, planning, and control were formed and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model. A confirmatory factor analysis was used to study the structural relationship between firm performance and the dependent variables.
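A compact sketch of the high/low performance classification step mentioned above, using a logistic regression on factor-style predictors. The predictor names, labels, and data are invented placeholders, not the survey's 382 responses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical factor scores: forecasting, planning, control, and IT-usage practices
X = rng.normal(size=(382, 4))
# Hypothetical performance label: 1 = high performing, 0 = low performing
y = (X @ np.array([0.8, 0.5, 0.4, 0.3]) + rng.normal(0, 1, 382) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
print("coefficients:", clf.coef_.round(2))
```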