974 results for Data interpretation, statistical


Relevance:

30.00%

Publisher:

Abstract:

Background: The repertoire of statistical methods for the descriptive analysis of the burden of a disease has expanded and been implemented in statistical software packages in recent years. The purpose of this paper is to present a web-based tool, REGSTATTOOLS (http://regstattools.net), intended to provide analysis of the burden of cancer, or of other groups of diseases, from registry data. Three software applications are included in REGSTATTOOLS: SART (analysis of disease rates and their time trends), RiskDiff (analysis of percent changes in rates due to demographic factors and to the risk of developing or dying from a disease) and WAERS (relative survival analysis). Results: We show a real-data application through the assessment of the burden of tobacco-related cancer incidence in two Spanish regions in the period 1995-2004. Using SART, we show that lung cancer is the most common of these cancers, with rising incidence trends among women. We compared 2000-2004 data with those of 1995-1999 to assess percent changes in the number of cases as well as relative survival, using RiskDiff and WAERS, respectively. We show that the net increase in lung cancer cases among women was mainly attributable to an increased risk of developing lung cancer, whereas among men it was attributable to the increase in population size. Among men, lung cancer relative survival was higher in 2000-2004 than in 1995-1999, whereas among women it was similar between the two periods. Conclusions: Unlike other similar applications, REGSTATTOOLS does not require local software installation, and it is simple to use, fast, and easy to interpret. It is a set of web-based statistical tools intended for the automated calculation of population indicators that any professional in the health or social sciences may require.
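A minimal sketch of the kind of population indicator such tools automate: a directly age-standardized incidence rate. All counts and populations below are invented for illustration, not taken from the study.

```python
# Hypothetical example of direct age standardization (illustrative numbers only).
cases   = [2, 10, 40, 120]                # cases per age band
persons = [50000, 40000, 30000, 10000]    # person-years at risk per age band
std_pop = [30000, 30000, 25000, 15000]    # reference (standard) population

# Age-specific rates per 100,000 person-years
rates = [100000 * c / n for c, n in zip(cases, persons)]

# Direct standardization: weight each age-specific rate by the standard population
asr = sum(r * w for r, w in zip(rates, std_pop)) / sum(std_pop)
print(round(asr, 1))  # age-standardized rate per 100,000
```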

Relevance:

30.00%

Publisher:

Abstract:

In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant-bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which provide useful information on the performance of each technique. The selection of candidate events is then established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, we show that our approach, based on phase estimation, attains a better signal-to-noise ratio than pure spectral analysis, the most common approach.
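As an illustration of Neyman-Pearson thresholding, the sketch below sets a detection threshold from a target false-alarm probability, assuming (purely for illustration) a Gaussian detection statistic with known noise level; it is not the authors' pipeline.

```python
# Hedged sketch: threshold-crossing selection under the Neyman-Pearson criterion,
# assuming a Gaussian detection statistic (an illustrative simplification).
from scipy.stats import norm

sigma = 1.0           # noise standard deviation of the detection statistic
p_false_alarm = 1e-3  # tolerated probability of a noise-only threshold crossing

# Threshold such that P(statistic > threshold | noise only) = p_false_alarm
threshold = sigma * norm.isf(p_false_alarm)

# Detection probability for a candidate signal of a given (hypothetical) amplitude
amplitude = 4.0
p_detect = norm.sf(threshold - amplitude, scale=sigma)
print(threshold, p_detect)
```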

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: A previous individual patient data meta-analysis by the Meta-Analysis of Chemotherapy in Nasopharynx Carcinoma (MAC-NPC) collaborative group to assess the addition of chemotherapy to radiotherapy showed that it improves overall survival in nasopharyngeal carcinoma. This benefit was restricted to patients receiving concomitant chemotherapy and radiotherapy. The aim of this study was to update the meta-analysis, to include recent trials, and to analyse separately the benefit of concomitant plus adjuvant chemotherapy. METHODS: We searched PubMed, Web of Science, Cochrane Controlled Trials meta-register, ClinicalTrials.gov, and meeting proceedings to identify published or unpublished randomised trials assessing radiotherapy with or without chemotherapy in patients with non-metastatic nasopharyngeal carcinoma and obtained updated data for previously analysed studies. The primary endpoint of interest was overall survival. All trial results were combined and analysed using a fixed-effects model. The statistical analysis plan was pre-specified in a protocol. All data were analysed on an intention-to-treat basis. FINDINGS: We analysed data from 19 trials and 4806 patients. Median follow-up was 7·7 years (IQR 6·2-11·9). We found that the addition of chemotherapy to radiotherapy significantly improved overall survival (hazard ratio [HR] 0·79, 95% CI 0·73-0·86, p<0·0001; absolute benefit at 5 years 6·3%, 95% CI 3·5-9·1). The interaction between treatment effect (benefit of chemotherapy) on overall survival and the timing of chemotherapy was significant (p=0·01) in favour of concomitant plus adjuvant chemotherapy (HR 0·65, 0·56-0·76) and concomitant without adjuvant chemotherapy (0·80, 0·70-0·93) but not adjuvant chemotherapy alone (0·87, 0·68-1·12) or induction chemotherapy alone (0·96, 0·80-1·16).
The benefit of the addition of chemotherapy was consistent for all endpoints analysed (all p<0·0001): progression-free survival (HR 0·75, 95% CI 0·69-0·81), locoregional control (0·73, 0·64-0·83), distant control (0·67, 0·59-0·75), and cancer mortality (0·76, 0·69-0·84). INTERPRETATION: Our results confirm that the addition of concomitant chemotherapy to radiotherapy significantly improves survival in patients with locoregionally advanced nasopharyngeal carcinoma. To our knowledge, this is the first analysis that examines the effect of concomitant chemotherapy with and without adjuvant chemotherapy as distinct groups. Further studies on the specific benefits of adjuvant chemotherapy after concomitant chemoradiotherapy are needed. FUNDING: French Ministry of Health (Programme d'actions intégrées de recherche VADS), Ligue Nationale Contre le Cancer, and Sanofi-Aventis.
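The fixed-effects model used to combine trials can be sketched as inverse-variance pooling of log hazard ratios. The per-trial values below are invented for illustration; they are not the MAC-NPC data.

```python
# Hedged sketch of inverse-variance fixed-effects pooling of hazard ratios.
import numpy as np

hr = np.array([0.70, 0.85, 0.78, 0.92])        # per-trial HRs (illustrative)
ci_upper = np.array([0.95, 1.10, 0.98, 1.20])  # illustrative upper 95% CI bounds

log_hr = np.log(hr)
se = (np.log(ci_upper) - log_hr) / 1.96        # SE recovered from the CI half-width
w = 1.0 / se**2                                # inverse-variance weights

pooled_log = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
pooled_hr = np.exp(pooled_log)
ci = np.exp([pooled_log - 1.96 * pooled_se, pooled_log + 1.96 * pooled_se])
print(pooled_hr, ci)
```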

Relevance:

30.00%

Publisher:

Abstract:

Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
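A toy sketch of the MCMC machinery discussed above: random-walk Metropolis sampling of a single parameter, where a Gaussian prior plays the role of a model constraint and visibly tightens the posterior. This is a one-parameter illustration under invented data, not the paper's 2-D pixel-based inversion.

```python
# Illustrative Metropolis sampler: a tighter prior ("model constraint")
# decreases posterior parameter uncertainty, as discussed in the abstract.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=20)  # synthetic observations, true mean 2.0

def log_post(m, prior_sd):
    # Gaussian likelihood around m plus a zero-mean Gaussian prior (constraint)
    return -0.5 * np.sum((data - m) ** 2) / 0.5**2 - 0.5 * m**2 / prior_sd**2

def metropolis(prior_sd, steps=5000, step=0.2):
    m, lp = 0.0, log_post(0.0, prior_sd)
    chain = []
    for _ in range(steps):
        cand = m + rng.normal(0.0, step)
        lp_cand = log_post(cand, prior_sd)
        if np.log(rng.random()) < lp_cand - lp:  # Metropolis accept/reject
            m, lp = cand, lp_cand
        chain.append(m)
    return np.array(chain[1000:])                # discard burn-in

loose = metropolis(prior_sd=10.0)   # weak constraint
tight = metropolis(prior_sd=0.1)    # strong constraint
print(loose.std(), tight.std())     # stronger constraint -> smaller spread
```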

Relevance:

30.00%

Publisher:

Abstract:

Flood simulation studies use spatio-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of the convective structures of precipitation systems producing floods in Catalonia (NE Spain). To this end, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are then imported into and analysed with a GIS. In a first step, groups of connected pixels with convective precipitation are identified. Only convective structures with an area greater than 32 km² are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation and mean areal precipitation, are the statistical distributions that best fit the observations. The statistical descriptors and probability distribution functions obtained are of direct use as input to spatial rainfall generators.
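Fitting the selected distributions can be sketched with scipy; the synthetic cell areas below stand in for the Catalan radar data and all parameter values are arbitrary.

```python
# Hedged sketch: fit a Generalized Pareto to cell-area exceedances over the
# 32 km² threshold, then check the fit with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic convective-cell areas (km²) above the 32 km² selection threshold
areas = 32.0 + stats.genpareto.rvs(c=0.3, scale=40.0, size=500, random_state=rng)

# Fit a Generalized Pareto to the exceedances (location fixed at zero)
c, loc, scale = stats.genpareto.fit(areas - 32.0, floc=0.0)

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution
ks = stats.kstest(areas - 32.0, 'genpareto', args=(c, loc, scale))
print(c, scale, ks.pvalue)
```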

Relevance:

30.00%

Publisher:

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEMs), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOMs) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern-recognition method which clusters and reduces the dimensionality of input EEMs without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology is that outlier samples appear naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
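The SOM idea can be illustrated with a tiny NumPy implementation on two synthetic sample groups; a real EEM study would feed in unfolded excitation-emission matrices and use a larger map and more careful training schedules.

```python
# Toy Self-Organising Map: a 4x4 grid of weight vectors trained on 2-D data,
# purely to illustrate the clustering idea described in the abstract.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),   # two synthetic sample groups
               rng.normal(3, 0.3, (50, 2))])

grid = rng.normal(size=(4, 4, 2))             # 4x4 map of weight vectors
rows, cols = np.indices((4, 4))

for t in range(2000):
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(grid - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                   # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)                # decaying neighbourhood width
    dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-dist2 / (2 * sigma**2))            # neighbourhood function
    grid += lr * h[..., None] * (x - grid)         # pull units towards the sample

# After training, samples from the two groups map to different units
bmu_a = np.argmin(np.linalg.norm(grid - X[0], axis=2))
bmu_b = np.argmin(np.linalg.norm(grid - X[-1], axis=2))
print(bmu_a, bmu_b)
```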

Relevance:

30.00%

Publisher:

Abstract:

The Annonaceae includes cultivated species of economic interest and represents an important source of information for better understanding the evolution of tropical rainforests. In phylogenetic analyses of DNA sequence data that are used to address evolutionary questions, it is imperative to use appropriate statistical models. Annonaceae are cases in point: Two sister clades, the subfamilies Annonoideae and Malmeoideae, contain the majority of Annonaceae species diversity. The Annonoideae generally show a greater degree of sequence divergence compared to the Malmeoideae, resulting in stark differences in branch lengths in phylogenetic trees. Uncertainty in how to interpret and analyse these differences has led to inconsistent results when estimating the ages of clades in Annonaceae using molecular dating techniques. We ask whether these differences may be attributed to inappropriate modelling assumptions in the phylogenetic analyses. Specifically, we test for (clade-specific) differences in rates of non-synonymous and synonymous substitutions. A high ratio of nonsynonymous to synonymous substitutions may lead to similarity of DNA sequences due to convergence instead of common ancestry, and as a result confound phylogenetic analyses. We use a dataset of three chloroplast genes (rbcL, matK, ndhF) for 129 species representative of the family. We find that differences in branch lengths between major clades are not attributable to different rates of non-synonymous and synonymous substitutions. The differences in evolutionary rate between the major clades of Annonaceae pose a challenge for current molecular dating techniques that should be seen as a warning for the interpretation of such results in other organisms.
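A minimal sketch of counting synonymous versus non-synonymous codon differences between two aligned coding sequences, using the standard genetic code. These are raw counts for illustration; actual dN/dS estimation (e.g. the Nei-Gojobori method) additionally normalizes by the numbers of synonymous and non-synonymous sites.

```python
# Compact standard genetic code (codons in TCAG nested order).
bases = "TCAG"
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [x + y + z for x in bases for y in bases for z in bases]
codon_table = dict(zip(codons, aas))

def classify(seq1, seq2):
    """Raw counts of synonymous vs non-synonymous codon differences."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 != c2:
            if codon_table[c1] == codon_table[c2]:
                syn += 1
            else:
                nonsyn += 1
    return syn, nonsyn

# GAA -> GAG keeps Glu (synonymous); ATG -> ACG changes Met to Thr
print(classify("GAAATG", "GAGACG"))
```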

Relevance:

30.00%

Publisher:

Abstract:

Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Kernel-based methods are among the most powerful methods for integrating heterogeneous data types. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables from each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
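The two-step kernel integration described above can be sketched in NumPy: build one kernel per source, combine them, then perform kernel PCA by eigendecomposition of the centred combined kernel. The data, the RBF kernel choice and the equal weights are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: average two source-specific RBF kernels, then kernel PCA.
import numpy as np

rng = np.random.default_rng(3)
n = 30
X1 = rng.normal(size=(n, 5))   # first data source (e.g. one omics layer)
X2 = rng.normal(size=(n, 8))   # second data source (different feature space)

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Step 1: one kernel per data set; Step 2: combine (equal weights here)
K = 0.5 * rbf(X1, 0.1) + 0.5 * rbf(X2, 0.1)

# Centre the kernel matrix in feature space
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J

vals, vecs = np.linalg.eigh(Kc)
order = np.argsort(vals)[::-1]
# Sample projections onto the top two kernel principal components
scores = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))
print(scores.shape)
```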

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to examine how well financial statement analysis, as a tool, is suited to assessing the success of ICT companies. At the same time, financial statement analysis is used to assess the financial situation of the ICT cluster in South-East Finland. The theoretical part of the thesis examines how the special characteristics of the new economy affect the interpretation of financial statement information and of the results of financial statement analysis. The empirical material consists of the financial statements of the South-East Finland ICT cluster, from which selected key ratios are calculated. These key ratios are used to assess the success of the companies and to identify the factors contributing to it from a financial perspective. The study is qualitative, and its research approach is descriptive and explanatory. According to the results, financial statement analysis alone is not a sufficient tool for assessing the success of ICT companies. It can be used to gather background information on the companies, which must then be supplemented by other means, such as company interviews. The financial statement analysis shows that the financial situation of the ICT companies in South-East Finland is on average good, but their future is impossible to predict. Since this is not a statistical study and the set of target companies is not sufficiently large, the results cannot be generalized to all companies in the industry.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Artemisinin-resistant Plasmodium falciparum has emerged in the Greater Mekong sub-region and poses a major global public health threat. Slow parasite clearance is a key clinical manifestation of reduced susceptibility to artemisinin. This study was designed to establish baseline values for clearance in patients from Sub-Saharan African countries with uncomplicated malaria treated with artemisinin-based combination therapies (ACTs). METHODS: A literature review in PubMed was conducted in March 2013 to identify all prospective clinical trials (uncontrolled trials, controlled trials and randomized controlled trials) of ACTs conducted in Sub-Saharan Africa between 1960 and 2012. Individual patient data from these studies were shared with the WorldWide Antimalarial Resistance Network (WWARN) and pooled using an a priori statistical analytical plan. Factors affecting early parasitological response were investigated using logistic regression with study sites fitted as a random effect. The risk of bias in the included studies was evaluated based on study design, methodology and missing data. RESULTS: In total, 29,493 patients from 84 clinical trials were included in the analysis, treated with artemether-lumefantrine (n = 13,664), artesunate-amodiaquine (n = 11,337) and dihydroartemisinin-piperaquine (n = 4,492). The overall parasite clearance rate was rapid. The parasite positivity rate (PPR) decreased from 59.7 % (95 % CI: 54.5-64.9) on day 1 to 6.7 % (95 % CI: 4.8-8.7) on day 2 and 0.9 % (95 % CI: 0.5-1.2) on day 3. The 95th percentile of the observed day 3 PPR was 5.3 %.
Independent risk factors predictive of day 3 positivity were: high baseline parasitaemia (adjusted odds ratio (AOR) = 1.16 (95 % CI: 1.08-1.25); per 2-fold increase in parasite density, P <0.001); fever (>37.5 °C) (AOR = 1.50 (95 % CI: 1.06-2.13), P = 0.022); severe anaemia (AOR = 2.04 (95 % CI: 1.21-3.44), P = 0.008); low/moderate transmission settings (AOR = 2.71 (95 % CI: 1.38-5.36), P = 0.004); and treatment with the loose formulation of artesunate-amodiaquine (AOR = 2.27 (95 % CI: 1.14-4.51), P = 0.020, compared to dihydroartemisinin-piperaquine). CONCLUSIONS: The three ACTs assessed in this analysis continue to achieve rapid early parasitological clearance across the sites assessed in Sub-Saharan Africa. A threshold of 5 % day 3 parasite positivity from a minimum sample size of 50 patients provides a more sensitive benchmark in Sub-Saharan Africa compared to the currently recommended threshold of 10 % to trigger further investigation of artemisinin susceptibility.
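A minimal sketch of the day-3 positivity summary and the proposed 5 % benchmark, with a normal-approximation confidence interval; the counts below are invented for illustration.

```python
# Hypothetical example: day-3 parasite positivity rate with a 95% CI and the
# proposed 5% benchmark check (illustrative counts, not the pooled data).
import math

positive_day3 = 45   # patients still parasitaemic on day 3 (made up)
treated = 5000       # patients treated (made up)

ppr = positive_day3 / treated
se = math.sqrt(ppr * (1 - ppr) / treated)          # normal-approximation SE
ci = (ppr - 1.96 * se, ppr + 1.96 * se)
flag = ppr > 0.05    # 5% benchmark to trigger further investigation
print(round(100 * ppr, 2), flag)
```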

Relevance:

30.00%

Publisher:

Abstract:

The present study evaluates the performance of four methods for estimating the regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and to one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of a treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false-alarm rates, but prove insufficiently sensitive to existing treatment effects. Consequently, using the statistical significance of the regression coefficients to detect treatment effects is not recommended for short data series.
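A sketch of the ordinary-least-squares approach evaluated in the study: an AB series with a linear trend term and a level-change dummy in the design matrix, plus a lag-1 residual autocorrelation check. The simulated series length, effect size and noise level are arbitrary.

```python
# Illustrative OLS estimation of a level change in a short AB single-case series.
import numpy as np

rng = np.random.default_rng(4)
n_a, n_b = 10, 10
t = np.arange(n_a + n_b)
phase = (t >= n_a).astype(float)   # 0 in baseline (A), 1 in treatment (B)
y = 0.2 * t + 2.0 * phase + rng.normal(0, 1.0, t.size)  # true level change = 2

X = np.column_stack([np.ones_like(t), t, phase])  # intercept, trend, level change
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Lag-1 autocorrelation of the residuals (serial-dependence check)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(beta[2], r1)
```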

Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
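The randomization-test logic can be sketched as follows: the intervention start point is one of a set of admissible random points, and the observed statistic is referred to its distribution over all admissible points. This simplified AB version (not a full ABAB design) uses invented data.

```python
# Toy randomization test for a phase design with a random intervention point.
import numpy as np

rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0, 1, 12), rng.normal(2, 1, 12)])
actual_start = 12                 # where treatment actually began

def stat(start):
    # Mean difference between post- and pre-intervention observations
    return y[start:].mean() - y[:start].mean()

admissible = range(5, 20)         # admissible random start points
null = np.array([stat(s) for s in admissible])
# Two-sided p-value: proportion of admissible divisions at least as extreme
p = np.mean(np.abs(null) >= abs(stat(actual_start)))
print(p)
```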

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional reconstruction of reservoir analogues can be improved by combining data from different geophysical methods. Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) data are valuable tools, since they provide subsurface information on the internal architecture and facies distribution of sedimentary rock bodies, enabling the upgrading of depositional models and the reconstruction of heterogeneity. The Lower Eocene Roda Sandstone is a well-known deltaic complex, widely studied as a reservoir analogue, that displays a series of sandstone wedges with a general NE to SW progradational trend. To provide a better understanding of the internal heterogeneity of a 10 m-thick progradational delta-front sandstone unit, 3D GPR data were acquired. In addition, common midpoints (CMP) to measure the subsoil velocity of the sandstone, test profiles with different frequency antennas (25, 50 and 100 MHz), and topographic data for subsequent correction of the geophysical data were also obtained. Three ERT profiles were also acquired to further constrain the GPR analysis. These geophysical results illustrate the geometry of reservoir-analogue heterogeneities, both depositional and diagenetic in nature, improving and complementing previous outcrop-derived data. GPR interpretation using radar stratigraphy principles and attribute analysis provided: 1) the three-dimensional geometry of the major stratigraphic surfaces that define four units in the GPR prism, and 2) images of the internal architecture of these units, together with statistics of azimuths and dips, useful for a quick determination of paleocurrent directions. These results were used to define the depositional architecture of the progradational sand body, which shows an arrangement in very-high-frequency sequences characterized by clockwise paleocurrent variations and a decrease of the sedimentary flow, similar to those observed at a greater scale in the same system. This high-frequency sequential arrangement is attributed to the autocyclic dynamics of a supply-dominated delta front where fluvial and tidal currents are in competition. The resistivity models enhanced the imaging of reservoir quality associated with the cement distribution caused by depositional and early diagenetic processes related to the development of transgressive and regressive systems tracts in high-frequency sequences.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, Species Distribution Models (SDMs) are a widely used tool. Using different statistical approaches, these models reconstruct the realized niche of a species from presence data and a set of variables, often topoclimatic. Their range of uses is quite large, from understanding the requirements of a single species, to the creation of nature reserves based on species hotspots, or the modelling of climate change impacts. Most of the time these models use variables at a resolution of 50 km x 50 km or 1 km x 1 km. In some cases, however, these models are used with resolutions below the kilometre scale and are thus called high resolution models (100 m x 100 m or 25 m x 25 m). Quite recently a new kind of data has emerged enabling precision up to 1 m x 1 m and thus allowing very high resolution modelling. However, these new variables are very costly and require a substantial amount of time to be processed, especially when they are used in complex calculations such as model projections over large areas. Moreover, the importance of very high resolution data in SDMs has not yet been assessed and is not well understood; some basic knowledge of what drives species presences and absences is still missing. Indeed, it is not clear whether, in mountain areas like the Alps, coarse topoclimatic gradients drive species distributions, whether fine-scale temperature or topography are more important, or whether their importance can be neglected when balanced against competition or stochasticity. In this thesis I investigated the importance of very high resolution data (2-5 m) in species distribution models, using very high resolution topographic, climatic and edaphic variables over a 2000 m elevation gradient in the Western Swiss Alps. I also investigated more local responses of these variables for a subset of species living in this area at two specific elevation belts.
During this thesis I showed that high resolution data require very good datasets (species and variables for the models) to produce satisfactory results. Indeed, in mountain areas, temperature is the most important factor driving species distributions, and it needs to be modelled at very fine resolution, instead of being interpolated over large surfaces, to produce satisfactory results. Despite the intuitive idea that topography should be very important at high resolution, the results are mixed, partly because assessing the importance of variables over a large gradient buffers their local importance. Topographic factors proved highly important at the subalpine level, but their importance decreases at lower elevations: whereas at the montane level edaphic and land-use factors matter more, high resolution topographic data matter more at the subalpine level. Finally, the biggest improvement in the models occurs when edaphic variables are added. Adding soil variables is of high importance, and variables like pH surpass the usual topographic variables in terms of importance in the models. To conclude, high resolution is very important in modelling but requires very good datasets. Merely increasing the resolution of the usual topoclimatic predictors is not sufficient; the use of edaphic predictors proved fundamental to produce significantly better models. This is of primary importance, especially if these models are used to reconstruct communities or as a basis for biodiversity assessments.
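As a hedged illustration of the modelling approach, the sketch below fits a presence-absence SDM as a logistic regression (via Newton-Raphson) on a temperature and a soil-pH predictor. The data are synthetic and the predictor effects are invented; real SDMs typically use GLM/GAM or ensemble methods on field data.

```python
# Illustrative presence-absence SDM: logistic regression on two predictors.
import numpy as np

rng = np.random.default_rng(6)
n = 300
temp = rng.normal(8.0, 3.0, n)   # mean temperature along an elevation gradient
ph = rng.normal(6.0, 1.0, n)     # soil pH (edaphic predictor)
logit = -4.0 + 0.5 * temp + 0.8 * (ph - 6.0)     # assumed true response
presence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit by Newton-Raphson (iteratively reweighted least squares)
X = np.column_stack([np.ones(n), temp, ph])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                    # observation weights
    grad = X.T @ (presence - p)        # score vector
    H = X.T @ (X * W[:, None])         # observed information
    beta += np.linalg.solve(H, grad)
print(beta)  # intercept, temperature effect, pH effect
```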

Relevance:

30.00%

Publisher:

Abstract:

In the highly volatile high-technology industry, it is of utmost importance to forecast customer demand accurately. However, statistical forecasting of sales, especially in the heavily competitive electronics business, has always been a challenging task due to very high variation in demand and the very short life cycles of products. The purpose of this thesis is to validate whether statistical methods can be applied to forecasting sales of short-life-cycle electronics products, and to provide a feasible framework for implementing statistical forecasting in the environment of the case company. Two different approaches have been developed: one for forecasting on short and medium term horizons, and one for long term horizons. Both are based on decomposition models, but differ in the interpretation of the model residuals: for long term horizons the residuals are assumed to represent white noise, whereas for short and medium term horizons the residuals are modeled using statistical forecasting methods. Both approaches are implemented in Matlab. The modeling results show that different markets exhibit different demand patterns, and different analytical approaches are therefore appropriate for modeling demand in these markets. Moreover, the outcomes imply that statistical forecasting cannot be handled separately from judgmental forecasting, but should be perceived only as a basis for judgmental forecasting activities. Based on the modeling results, recommendations for the further deployment of statistical methods in the sales forecasting of the case company are developed.
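The decomposition approach can be sketched as a linear trend plus monthly seasonal indices, extrapolated forward with the residuals treated as white noise (the long-horizon case described above). The sales series below is synthetic, and the thesis itself works in Matlab; Python is used here only for illustration.

```python
# Illustrative decomposition forecast: trend + monthly seasonal indices.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(48)                          # four years of monthly sales
season = 10 * np.sin(2 * np.pi * t / 12)   # assumed seasonal pattern
y = 100 + 1.5 * t + season + rng.normal(0, 2, t.size)

# Trend component: ordinary least squares on time
A = np.column_stack([np.ones_like(t), t])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
detrended = y - (a + b * t)

# Seasonal component: average of the detrended values per calendar month
idx = np.array([detrended[t % 12 == m].mean() for m in range(12)])

# Forecast the next 12 months; residuals treated as white noise
future = np.arange(48, 60)
forecast = a + b * future + idx[future % 12]
print(forecast[:3])
```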