925 results for Travel Time Prediction


Relevance: 30.00%

Abstract:

Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of the area under the curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies based on regression approaches (R-LSS) or Bayesian approaches (B-LSS) are practical alternatives for satisfactory AUC estimation. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, in particular by requiring a minimal number of concentrations collected over a short sampling window, and particular attention should be paid to their adequate development and validation. It is also worth noting that irregularity in blood sample collection times can have a non-negligible impact on the predictive performance of R-LSS; to date, this impact had not been studied. This doctoral thesis addresses these issues in order to allow precise and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who underwent hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and adequately validate LSS. Next, several Pop-PK models were evaluated, keeping in mind their intended use for AUC estimation, and the performance of B-LSS targeting different versions of the AUC was studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood-sampling schedule. This work first led to R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with lag time; however, the final model - notably with covariates - did not improve B-LSS performance compared with the structural models without covariates. Furthermore, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual errors, which we named the "underlying AUC", than for the observed AUC computed directly from measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling window involved.
We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we found that even though different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely; adequate consideration of the impact of these errors can therefore lead to more reliable selection and use of R-LSS. Through an in-depth investigation of the different aspects underlying limited sampling strategies, this thesis provides notable methodological improvements and proposes new avenues for their reliable and informed use, while promoting their suitability for clinical practice.
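The regression side of a limited sampling strategy lends itself to a compact illustration. Below is a minimal Python sketch of an R-LSS of the kind described: a multiple linear regression predicting the full AUC from three concentrations drawn within 4 hours post-dose. The one-compartment model, sampling times and error magnitudes are hypothetical stand-ins, not values from the thesis.

```python
# Sketch of a regression-based limited sampling strategy (R-LSS) for AUC.
# All pharmacokinetic parameters and sampling times are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 200
t_dense = np.linspace(0, 12, 500)          # dense grid for "reference" AUC0-12 (h)
t_lss = np.array([1.0, 2.0, 4.0])          # limited sampling times within 4 h post-dose

# One-compartment oral model with log-normal inter-patient variability (hypothetical).
ka = 1.2 * rng.lognormal(0, 0.3, n_patients)   # absorption rate (1/h)
ke = 0.25 * rng.lognormal(0, 0.3, n_patients)  # elimination rate (1/h)
dose_vd = 100.0                                 # dose / volume of distribution

def conc(t, ka, ke):
    return dose_vd * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

profiles = np.array([conc(t_dense, a, e) for a, e in zip(ka, ke)])
# Reference AUC by the trapezoidal rule over the dense grid.
auc_ref = ((profiles[:, 1:] + profiles[:, :-1]) / 2 * np.diff(t_dense)).sum(axis=1)

# Measured concentrations at the limited sampling times, with residual error.
C = np.array([conc(t_lss, a, e) for a, e in zip(ka, ke)])
C *= rng.lognormal(0, 0.1, C.shape)

# Fit AUC ~ b0 + b1*C(1h) + b2*C(2h) + b3*C(4h) by ordinary least squares.
X = np.column_stack([np.ones(n_patients), C])
beta, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)
auc_hat = X @ beta

bias = np.mean((auc_hat - auc_ref) / auc_ref) * 100
rmse = np.sqrt(np.mean(((auc_hat - auc_ref) / auc_ref) ** 2)) * 100
print(f"coefficients: {beta.round(2)}  bias: {bias:.1f}%  relative RMSE: {rmse:.1f}%")
```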

Relevance: 30.00%

Abstract:

After decades of development, laser ablation has become an important technique for a large number of applications such as thin-film deposition, nanoparticle synthesis, micromachining and chemical analysis. Experimental and theoretical studies have been conducted to understand the fundamental physical mechanisms at play during ablation and to determine the effect of wavelength, pulse duration, nature of the ambient gas, and target material. This thesis describes and examines the relative importance of the physical mechanisms that influence the characteristics of laser-induced aluminum plasmas. The general framework of this research is an in-depth study of the interaction between the plasma-plume dynamics and the gaseous atmosphere in which it develops. This was achieved by temporally and spatially resolved imaging of the plasma plume in terms of spectral intensity, electron density and excitation temperature in different atmospheres of inert gases such as Ar and He, and of reactive gases such as N2, at pressures ranging from 10⁻⁷ Torr (vacuum) up to 760 Torr (atmospheric pressure). Our results show that the plasma emission intensity generally depends on the nature of the gas and is strongly affected by its pressure. Moreover, for a given time delay relative to the laser pulse, both the electron density and the temperature increase with gas pressure, which can be attributed to inertial confinement of the plasma. The electron density is observed to be highest near the target surface where the laser is focused, and to decrease (axially and radially) away from this position. Despite the significant axial variation of the temperature along the plasma, its radial variation is found to be negligible. The electron density and temperature were found to be highest in argon and lowest in helium, with intermediate values in nitrogen. This is mainly due to the physical and chemical properties of the gas, such as the mass of its species, their excitation and ionization energies, its thermal conductivity and its chemical reactivity. The expansion of the plasma plume was studied by spatio-temporally resolved imaging. The results show that the nature of the gas does not affect the plume dynamics at pressures below 20 Torr and at time delays shorter than 200 ns. For pressures above 20 Torr, however, the effect of the gas becomes important, and the shortest plume is obtained when the mass of the gas species is high and its thermal conductivity relatively low. These results are confirmed by time-of-flight measurements of the Al+ ion emitting at 281.6 nm. Furthermore, the propagation velocity of the aluminum ions is found to be well defined just after ablation and near the target surface; at longer time delays, the ions thermalize as they cross the plume, through collisions with plasma and gas species.

Relevance: 30.00%

Abstract:

The thesis covers various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical setup. The thesis mainly studies the estimation and prediction of a signal-plus-noise model, where both the signal and the noise are assumed to follow models with symmetric stable innovations. We start with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are extensively discussed in the second chapter, which also surveys existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment; here both the signal and the noise are considered stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal-extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker type estimation based on the auto-covariation function; the methods are illustrated by simulation and by an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived. The fifth chapter introduces the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. Chapter six explores the application of these techniques in signal processing, discussing frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment. We introduce a parametric spectrum analysis and a frequency estimate using the power transfer function, which is estimated via the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
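A hedged sketch of the higher-order Yule-Walker idea from the fourth chapter: simulate an AR(2) with symmetric alpha-stable innovations, estimate an auto-covariation function, and solve an overdetermined Yule-Walker system by ordinary least squares. The first-order-moment auto-covariation estimator below is one common form and only approximates the quantities used in the thesis.

```python
# Sketch: higher-order Yule-Walker estimation of AR parameters under symmetric
# alpha-stable noise. A simplified illustration, not the thesis's exact method.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, p, m = 1.6, 20000, 2, 8        # stability index, length, AR order, # of equations

# Chambers-Mallows-Stuck generator for symmetric alpha-stable innovations.
V = rng.uniform(-np.pi / 2, np.pi / 2, n)
W = rng.exponential(1.0, n)
eps = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
       * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

# Simulate a stationary AR(2): x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + eps_t.
phi_true = np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi_true @ x[t - 2:t][::-1] + eps[t]

def autocovariation(x, k):
    """Sample auto-covariation at lag k: ~E[x_{t+k} sign(x_t)] / E[|x_t|]."""
    if k < 0:
        lead, lag = x[:k], x[-k:]
    elif k > 0:
        lead, lag = x[k:], x[:-k]
    else:
        lead, lag = x, x
    return np.sum(lead * np.sign(lag)) / np.sum(np.abs(lag))

lam = {k: autocovariation(x, k) for k in range(-(p - 1), m + 1)}

# Overdetermined Yule-Walker system lam[k] ~ sum_j phi_j lam[k-j], k = 1..m,
# solved by ordinary least squares (more equations than parameters).
A = np.array([[lam[k - j] for j in range(1, p + 1)] for k in range(1, m + 1)])
b = np.array([lam[k] for k in range(1, m + 1)])
phi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", phi_true, " estimated:", phi_hat.round(3))
```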

Relevance: 30.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box-Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well-calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
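The key idea, fitting a long autoregression to the Box-Jenkins (ARMA) representation of the squared returns and bootstrapping its residuals, can be sketched as follows. This is a simplified one-step-ahead illustration with hypothetical GARCH(1,1) parameters, not the authors' full procedure.

```python
# Sieve-bootstrap sketch: AR approximation of squared GARCH returns.
import numpy as np

rng = np.random.default_rng(2)

# Simulate a GARCH(1,1) series as stand-in data (hypothetical parameters).
omega, a, b, n = 0.05, 0.10, 0.85, 2000
r, sig2 = np.zeros(n), np.zeros(n)
sig2[0] = omega / (1 - a - b)
for t in range(1, n):
    sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Sieve step: fit a long AR(p) to the squared returns by least squares,
# exploiting the linear (ARMA) representation of r_t^2.
y = r ** 2
p = 20
X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
D = np.column_stack([np.ones(n - p), X])
phi, *_ = np.linalg.lstsq(D, y[p:], rcond=None)
resid = y[p:] - D @ phi
resid -= resid.mean()

# Bootstrap one-step-ahead squared returns and form a percentile interval.
B = 1000
last = y[-p:][::-1]                       # most recent p squared returns
fut = np.array([max(phi[0] + phi[1:] @ last + rng.choice(resid), 0.0)
                for _ in range(B)])
lo, hi = np.percentile(fut, [2.5, 97.5])
print(f"95% PI for next squared return: [{lo:.3f}, {hi:.3f}]")
```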

Relevance: 30.00%

Abstract:

Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or more specific tasks. Children with learning disabilities are neither slow learners nor intellectually disabled; the disorder simply makes it difficult for a child to learn as quickly, or in the same way, as a child who is not affected. An affected child can have normal or above-average intelligence, but may have difficulty paying attention, with reading or letter recognition, or with mathematics. In fact, many children with learning disabilities are more intelligent than average. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are lifelong; however, children with LD can be high achievers and can be taught ways to work around the disability. In this research work, data mining using machine learning techniques is applied to analyze the symptoms of LD, establish the interrelationships between them, and evaluate their relative importance. To increase the diagnostic accuracy of LD prediction, a knowledge-based tool built on statistical machine learning and data mining techniques, informed by clinical data, is proposed. The basic idea of the tool is to increase the accuracy of LD assessment while reducing the time it requires. Several statistical machine learning techniques are used in the study. Identifying the important parameters of LD prediction, uncovering the hidden relationships between LD symptoms, and estimating the relative significance of each symptom are also among the objectives of this work. The developed tool has many advantages over the traditional checklist-based methods for determining learning disabilities. To improve the performance of the various classifiers, we developed preprocessing methods for the LD prediction system; a new system based on fuzzy and rough set models was also developed, and the importance of preprocessing was studied there as well. A Graphical User Interface (GUI) was designed to integrate the knowledge-based tool for predicting LD and its degree; the tool stores the details of the children in a student database and retrieves their LD reports as required. The present study demonstrates the effectiveness of the tool built on these machine learning techniques: it identifies the important parameters of LD and accurately predicts learning disability in school-age children. The thesis makes several contributions in technical, general and social areas. The results are beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and pursue appropriate treatment or counseling in time to avoid academic and social losses.
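As a minimal sketch of the classifier component of such a knowledge-based tool, the snippet below trains a decision tree on a hypothetical symptom checklist; scikit-learn's CART (with an entropy criterion) stands in for the classifiers studied, and the features and labels are synthetic.

```python
# Decision-tree stand-in for the LD classifier: hypothetical symptom data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

# Hypothetical binary symptom checklist (1 = symptom present) for 500 children.
symptoms = ["reading_difficulty", "attention_deficit", "letter_confusion",
            "math_difficulty", "poor_spelling", "memory_problems"]
X = rng.integers(0, 2, size=(500, len(symptoms)))
# Synthetic rule standing in for clinical labels: LD if enough core symptoms.
y = ((X[:, 0] + X[:, 2] + X[:, 4]) >= 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Feature importances indicate which symptoms drive the prediction.
for name, imp in sorted(zip(symptoms, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.2f}")
```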

Relevance: 30.00%

Abstract:

The aim of this study is to show the importance of two classification techniques, viz. decision trees and clustering, in the prediction of learning disabilities (LD) in school-age children. LD affects about 10 percent of all children enrolled in schools, and the problems of children with specific learning disabilities have long been a cause of concern to parents and teachers. Decision trees and clustering are powerful and popular tools for classification and prediction in data mining. Rules extracted from the decision tree are used to predict learning disability, while clustering, the assignment of a set of observations into subsets called clusters, is useful for finding the different signs and symptoms (attributes) present in children affected by LD. In this paper, the J48 algorithm is used to construct the decision tree and the K-means algorithm to create the clusters. By applying these classification techniques, LD in any child can be identified.
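A hedged sketch of the clustering half of the approach follows: K-means groups hypothetical symptom vectors so that each centroid shows which signs tend to co-occur. (J48 is Weka's C4.5 implementation; the decision-tree sketch shown earlier uses scikit-learn's CART as a close analogue.)

```python
# K-means grouping of symptom profiles; the symptom vectors are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Hypothetical 0/1 symptom vectors for 300 children across 6 attributes.
X = rng.integers(0, 2, size=(300, 6)).astype(float)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
# Each centroid gives the prevalence of every symptom within its cluster,
# which helps surface the signs that co-occur in a group of children.
print("centroids (symptom prevalence per cluster):")
print(km.cluster_centers_.round(2))
```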

Relevance: 30.00%

Abstract:

Learning disability (LD) is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. LD affects about 10% of children enrolled in schools. There is no cure for learning disabilities and they are lifelong, and the problems of children with specific learning disabilities have long been a cause of concern to parents and teachers. Just as there are many different types of LD, there are a variety of tests that may be done to pinpoint the problem. The information gained from an evaluation is crucial for finding out how parents and school authorities can provide the best possible learning environment for the child. This paper proposes a new artificial neural network (ANN) approach for identifying LD in children at an early stage, so as to address the problems they face and to benefit the students, their parents and the school authorities. We propose closest-fit-algorithm data preprocessing combined with ANN classification to handle missing attribute values; the algorithm imputes the missing values in the preprocessing stage. Ignoring missing attribute values is a common practice in classification algorithms, but here the imputation algorithm is applied in a systematic classification approach, which gives satisfactory results in the prediction of LD. It acts as a tool for predicting LD accurately and makes good information about the child available to those concerned.
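A closest-fit imputation step of the kind described can be sketched as follows: each missing attribute value is copied from the most similar record, with similarity measured over the attributes the two records share. The mismatch-rate distance and the toy data are illustrative assumptions, not the paper's exact algorithm.

```python
# Closest-fit imputation sketch for binary symptom records with NaN gaps.
import numpy as np

def closest_fit_impute(X):
    """Impute NaNs by copying from the nearest record (mismatch-rate distance)."""
    X = X.astype(float).copy()
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        best, best_d = None, np.inf
        for j in range(len(X)):
            # Skip self and donors that are themselves missing the needed values.
            if j == i or np.isnan(X[j][miss]).any():
                continue
            shared = ~miss & ~np.isnan(X[j])    # attributes both records have
            if not shared.any():
                continue
            d = np.mean(X[i][shared] != X[j][shared])   # fraction of mismatches
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            X[i, miss] = X[best, miss]          # copy values from closest fit
    return X

data = np.array([[1, 0, 1, 1],
                 [1, np.nan, 1, 1],
                 [0, 1, 0, 0],
                 [0, 1, np.nan, 0]])
print(closest_fit_impute(data))
```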

Relevance: 30.00%

Abstract:

Learning Disability (LD) is a classification that includes several disorders in which a child has difficulty learning in a typical manner, usually caused by an unknown factor or factors. LD affects about 15% of children enrolled in schools. Predicting learning disability is a complicated task, since identifying LD from diverse features or signs is itself a complicated problem. There is no cure for learning disabilities and they are lifelong, and the problems of children with specific learning disabilities have long been a cause of concern to parents and teachers. The aim of this paper is to develop a new algorithm for imputing missing values and to determine the significance of the missing-value imputation and dimensionality reduction methods for the performance of fuzzy and neuro-fuzzy classifiers, with specific emphasis on the prediction of learning disabilities in school-age children. In the basic assessment method for LD prediction, checklists are generally used; the data cases thus collected depend heavily on the mood of the children and may contain redundant as well as missing values. We therefore propose a new correlation-based algorithm for imputing the missing values, together with Principal Component Analysis (PCA) for removing irrelevant attributes. The study finds that the proposed preprocessing methods improve the quality of the data and thereby increase the accuracy of the classifiers. The system is implemented in MathWorks MATLAB 7.10. The results illustrate that the developed missing-value imputation method is a valuable contribution to the prediction system and is capable of improving the performance of a classifier.
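The two preprocessing ideas, a correlation-based fill-in for missing values followed by PCA, can be sketched as below. Regressing the gappy attribute on its most correlated complete attribute is one plausible reading of "correlation based" imputation, and the data are synthetic; the paper's algorithm may differ in detail.

```python
# Correlation-based imputation followed by PCA, on hypothetical data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n = 200
base = rng.normal(size=n)
X = np.column_stack([base + rng.normal(0, 0.3, n),
                     2 * base + rng.normal(0, 0.3, n),
                     rng.normal(size=n)])
X[rng.choice(n, 20, replace=False), 1] = np.nan     # knock out some values

# Correlation-based imputation: predict column 1 from its best correlate.
miss = np.isnan(X[:, 1])
corrs = [abs(np.corrcoef(X[~miss, 1], X[~miss, j])[0, 1]) for j in (0, 2)]
donor = (0, 2)[int(np.argmax(corrs))]
slope, intercept = np.polyfit(X[~miss, donor], X[~miss, 1], 1)
X[miss, 1] = slope * X[miss, donor] + intercept

# PCA on the completed attribute set, keeping 95% of the variance.
pca = PCA(n_components=0.95)
Z = pca.fit_transform(X)
print("retained components:", pca.n_components_,
      "explained variance ratios:", pca.explained_variance_ratio_.round(3))
```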

Relevance: 30.00%

Abstract:

The research of this thesis covers developments and applications of short- and long-term climate predictions. The short-term prediction emphasizes monthly and seasonal climate, i.e. forecasting from the next month over a season up to a year or so ahead. The long-term predictions pertain to the analysis of inter-annual and decadal climate variations over the whole 21st century. These two climate prediction approaches are validated and applied in the study area, the Khlong Yai (KY) water basin located on the eastern seaboard of Thailand, a major industrial zone of the country which has been suffering from severe drought and water shortage in recent years. Since water resources are essential for further industrial development in this region, a thorough analysis of the potential climate change, with its subsequent impact on the water supply in the area, is at the heart of this thesis research. The short-term forecast of next-season climate, such as temperatures and rainfall, offers a potential general guideline for water management and reservoir operation. To that avail, statistical models based on autoregressive techniques (AR, ARIMA and ARIMAex, the latter including additional external regressors) and multiple linear regression (MLR) models are developed and applied in the study region. Teleconnections between ocean states and the local climate are investigated, used as extra external predictors in the ARIMAex and MLR models, and shown to enhance the accuracy of the short-term predictions significantly. However, as the teleconnective relationships between ocean state and local climate provide only a one- to four-month lead time, the ocean-state indices can support only a one-season-ahead forecast. Hence, GCM climate predictors are also suggested as an additional predictor set for a more reliable and somewhat longer short-term forecast. For the preparation of "pre-warning" information on possible future climate change with potential adverse hydrological impacts in the study region, the long-term climate prediction methodology is applied. The latter is based on the downscaling of climate predictions from several single- and multi-domain GCMs, using the two well-known downscaling methods SDSM and LARS-WG and a newly developed MLR downscaling technique that allows the incorporation of a multitude of monthly or daily climate predictors from one or several (multi-domain) parent GCMs. The numerous downscaling experiments indicate that the MLR method is more accurate than SDSM and LARS-WG in predicting the recent past 20th-century (1971-2000) long-term monthly climate in the region. The MLR model is consequently employed to downscale 21st-century GCM climate predictions under SRES scenarios A1B, A2 and B1. However, since the hydrological watershed model requires daily-scale climate input data, a new stochastic daily climate generator is developed to rescale monthly observed or predicted climate series to daily series, while adhering to the statistical and geospatial distributional attributes of the observed (past) daily climate series in the calibration phase. Employing this daily climate generator, 30 realizations of future daily climate series from the downscaled monthly GCM climate predictor sets are produced and used as input to the SWAT distributed watershed model, to simulate future streamflow and other hydrological water budget components in the study region in a multi-realization manner.
In addition to a general examination of the future changes of the hydrological regime in the KY basin, potential future changes of the water budgets of the three main reservoirs in the basin are analysed, as these are a major source of water supply in the study region. The results of the long-term 21st-century downscaled climate predictions provide evidence that, compared with the 20th-century reference period, the future climate in the study area will be more extreme, particularly for SRES A1B. Thus, temperatures will be higher and exhibit larger fluctuations. Although the future intensity of the rainfall is nearly constant, its spatial distribution across the region is partially changing. There is further evidence that sequential rainfall occurrence will decrease, so that short periods of high intensities will be followed by longer dry spells. This change in the sequential rainfall pattern will also lead to seasonal reductions of the streamflow and seasonal decreases of the water storage in the reservoirs. In any case, these predicted future climate changes and their hydrological impacts should encourage water planners and policy makers to develop adaptation strategies to properly handle the future water supply in this area, following the guidelines suggested in this study.
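The core MLR downscaling step reduces to regressing a local monthly climate variable on large-scale GCM predictor series, calibrating on one subperiod and validating on another. A minimal sketch, with synthetic placeholders standing in for the GCM predictors and the observed rainfall:

```python
# MLR downscaling sketch: local rainfall regressed on GCM-scale predictors.
import numpy as np

rng = np.random.default_rng(6)
n_months = 360                                   # e.g. a 30-year calibration period

# Hypothetical standardized GCM predictors (e.g. pressure, geopotential, humidity).
G = rng.normal(size=(n_months, 3))
# Synthetic "observed" local rainfall linked to the predictors plus noise.
rain = 120 + 15 * G[:, 0] - 10 * G[:, 1] + 5 * G[:, 2] + rng.normal(0, 8, n_months)

# Calibrate the MLR on the first 240 months, validate on the remaining 120.
A = np.column_stack([np.ones(n_months), G])
beta, *_ = np.linalg.lstsq(A[:240], rain[:240], rcond=None)
pred = A[240:] @ beta

rmse = np.sqrt(np.mean((pred - rain[240:]) ** 2))
corr = np.corrcoef(pred, rain[240:])[0, 1]
print(f"validation RMSE: {rmse:.1f} mm, correlation: {corr:.2f}")
```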

Relevance: 30.00%

Abstract:

We develop an extension to the tactical planning model (TPM) for a job shop, originally proposed by the third author. The TPM is a discrete-time model in which all transitions occur at the start of each time period. The time period must be defined appropriately in order for the model to be meaningful: each period must be short enough that a job is unlikely to travel through more than one station in one period, yet long enough to justify the assumptions of continuous workflow and Markovian job movements. We build an extension to the TPM that overcomes this restriction on period sizing by permitting production control over shorter time intervals. We achieve this by deriving a continuous-time linear control rule for a single station, and we then determine the first two moments of the production level and queue length for the workstation.
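A minimal sketch of a continuous-time linear control rule for a single station: work is released at rate Q(t)/T, a linear function of the queue, and the first two moments of production and queue length are estimated here by simulation rather than derived analytically as in the paper. The arrival process and the smoothing constant T are hypothetical.

```python
# Single-station linear control rule, discretized with a small time step.
import numpy as np

rng = np.random.default_rng(7)
dt, horizon, T = 0.01, 2000.0, 2.0        # step, simulation length, time constant
steps = int(horizon / dt)
lam = 1.0                                  # mean arrival rate of work

Q = 0.0
q_samples, p_samples = [], []
for _ in range(steps):
    arrivals = rng.poisson(lam * dt)       # lumpy arrivals of jobs/work
    production = Q / T * dt                # linear control rule: rate Q(t)/T
    Q += arrivals - production
    q_samples.append(Q)
    p_samples.append(production / dt)      # instantaneous production rate

# Drop the first 10% as warm-up, then report the first two moments.
q = np.array(q_samples[steps // 10:])
p = np.array(p_samples[steps // 10:])
print(f"queue   mean {q.mean():.2f}, var {q.var():.2f}")   # mean -> lam*T
print(f"output  mean {p.mean():.2f}, var {p.var():.2f}")   # mean -> lam
```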

Relevance: 30.00%

Abstract:

The composition of the labour force is an important economic factor for a country, and changes in the proportions of different groups are often of interest. In this paper we study a monthly compositional time series from the Swedish Labour Force Survey from 1994 to 2005. Three models are studied: the ILR-transformed series, the ILR transformation of the compositionally differenced series of order 1, and the ILR transformation of the compositionally differenced series of order 12. For each of the three models a VAR model is fitted based on the data from 1994-2003. We predict the time series 15 steps ahead and calculate 95% prediction regions. The predictions of the three models are compared with the actual values using MAD and MSE, and the prediction regions are compared graphically in a ternary time series plot. We conclude that the first, and simplest, model possesses the best predictive power of the three.
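The first model can be sketched compactly: ILR-transform a 3-part composition, fit a VAR in the unconstrained ILR space, forecast 15 steps, and map back to the simplex. The series below is synthetic; the Swedish Labour Force Survey data are not reproduced.

```python
# ILR transform + VAR forecasting for a 3-part compositional series.
import numpy as np
from statsmodels.tsa.api import VAR

def ilr(P):
    """Isometric log-ratio transform of 3-part compositions (rows sum to 1)."""
    z1 = np.sqrt(1 / 2) * np.log(P[:, 0] / P[:, 1])
    z2 = np.sqrt(2 / 3) * np.log(np.sqrt(P[:, 0] * P[:, 1]) / P[:, 2])
    return np.column_stack([z1, z2])

def ilr_inv(Z):
    """Inverse ILR: reconstruct the parts up to closure, then normalize."""
    a = np.exp((np.sqrt(2) * Z[:, 0] + np.sqrt(6) * Z[:, 1]) / 2)
    b = np.exp((-np.sqrt(2) * Z[:, 0] + np.sqrt(6) * Z[:, 1]) / 2)
    P = np.column_stack([a, b, np.ones(len(Z))])
    return P / P.sum(axis=1, keepdims=True)

# Synthetic stationary series in ILR space (12 years of monthly data).
rng = np.random.default_rng(8)
n, mu = 144, np.array([0.5, 0.8])
Z = np.zeros((n, 2))
Z[0] = mu
for t in range(1, n):
    Z[t] = mu + 0.7 * (Z[t - 1] - mu) + rng.normal(0, 0.05, 2)
P = ilr_inv(Z)                                    # synthetic composition series

Ztrain = ilr(P[:120])                             # fit on the first 10 years
res = VAR(Ztrain).fit(maxlags=2)
fc = res.forecast(Ztrain[-res.k_ar:], steps=15)   # 15 steps ahead, in ILR space
print(ilr_inv(fc).round(3))                       # forecasts back on the simplex
```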

Relevance: 30.00%

Abstract:

Objective: To establish a prediction model of the degree of disability in adults with Spinal Cord Injury (SCI) based on the use of the WHO-DAS II. Methods: The degree of disability was correlated with three groups of variables: clinical, sociodemographic, and those related to rehabilitation services. A multiple linear regression model was built to predict disability. Forty-five people with SCI of diverse etiology, neurological level and completeness participated; patients were older than 18 and were more than six months post-injury. The WHO-DAS II and the ASIA Impairment Scale (AIS) were used. Results: The variables that showed a significant relationship with disability were: occupational situation, type of affiliation to the public health care system, injury evolution time, neurological level, zone of partial preservation, AIS motor and sensory scores, and the number of clinical complications during the last year. Complications significantly associated with disability were joint pain, urinary infections, intestinal problems and autonomic dysreflexia. None of the variables related to rehabilitation services showed a significant association with disability. The degree of disability showed significant differences in favor of the groups that received the following services: assistive device supply and vocational, job or educational counseling. Conclusions: The best disability prediction model in adults with SCI more than six months post-injury was built with the variables injury evolution time, AIS sensory score and injury-related unemployment.
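The form of the final model is a small multiple linear regression; the sketch below fits it with statsmodels on synthetic placeholder data for the three retained predictors, not the study's patient records.

```python
# OLS sketch: WHO-DAS II disability score on the three retained predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 45
evol_months = rng.uniform(6, 120, n)              # injury evolution time (months)
ais_sensory = rng.integers(40, 225, n)            # AIS sensory score
unemployed = rng.integers(0, 2, n)                # injury-related unemployment (0/1)

# Synthetic disability score with hypothetical effect sizes plus noise.
dis = (30 + 0.1 * evol_months - 0.05 * ais_sensory + 12 * unemployed
       + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([evol_months, ais_sensory, unemployed]))
model = sm.OLS(dis, X).fit()
print("coefficients:", model.params.round(2), " R^2 =", round(model.rsquared, 2))
```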

Relevance: 30.00%

Abstract:

Globalization and competitiveness, as realities facing companies, require managers to prepare their firms as well as possible to survive in such an unstable and changing world. The first step is to investigate and measure the state of the company in each of its components, such as human resources, marketing, logistics, operations and, last and most important, finances. Knowledge of financial health and of the risks associated with a company's activity allows managers to make the right decisions to remain profitable and sustainable in a business world immersed in globalization and competitiveness. This assessment is pertinent for Avianca S.A., considering its progress and evolution from its first commercial flight on December 5, 1919, until today, when it is listed on the New York Stock Exchange. A descriptive analysis was carried out, accompanied by the application of financial ratios and nomenclatures, making it possible to establish the financial health and the risks not only of Avianca but also of the airline sector. The results show that the sector is financially healthy in the short term, but that in the long term its financial health is compromised by the risks associated with the sector and its activity.

Relevance: 30.00%

Abstract:

In this thesis I propose a novel method to estimate the dose and injection-to-meal time for low-risk intensive insulin therapy. This dosage-aid system uses an optimization algorithm to determine the insulin dose and injection-to-meal time that minimizes the risk of postprandial hyper- and hypoglycaemia in type 1 diabetic patients. To this end, the algorithm applies a methodology that quantifies the risk of experiencing different grades of hypo- or hyperglycaemia in the postprandial state induced by insulin therapy according to an individual patient’s parameters. This methodology is based on modal interval analysis (MIA). Applying MIA, the postprandial glucose level is predicted with consideration of intra-patient variability and other sources of uncertainty. A worst-case approach is then used to calculate the risk index. In this way, a safer prediction of possible hyper- and hypoglycaemic episodes induced by the insulin therapy tested can be calculated in terms of these uncertainties.
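The worst-case logic can be conveyed with a deliberately crude sketch: bound the postprandial peak and nadir under an interval of insulin sensitivity, score each candidate dose and injection-to-meal time by its worst-case risk of hyper- and hypoglycaemia, and pick the safest candidate. The toy glucose model and every parameter below are hypothetical; the thesis applies modal interval analysis to a physiological model instead.

```python
# Worst-case dose/timing selection under interval uncertainty (toy model).
import numpy as np

def postprandial_extremes(dose, lead_min, si_lo, si_hi):
    """Crude interval bound on peak/nadir glucose (mg/dL) for one meal."""
    meal_rise = 180.0                       # hypothetical uncorrected meal excursion
    timing = max(0.2, 1 - abs(lead_min - 20) / 60)   # effectiveness of timing
    drop_lo = dose * si_lo * timing         # least insulin effect -> highest peak
    drop_hi = dose * si_hi * timing         # most insulin effect  -> lowest nadir
    base = 100.0
    return base + meal_rise - drop_lo, base + meal_rise - drop_hi

def risk(peak, nadir):
    """Penalize hyperglycaemia (>180) and, more heavily, hypoglycaemia (<70)."""
    return max(0, peak - 180) + 5 * max(0, 70 - nadir)

# Interval of insulin sensitivity capturing intra-patient variability (mg/dL per U).
si_lo, si_hi = 35.0, 55.0
best = min(((risk(*postprandial_extremes(d, t, si_lo, si_hi)), d, t)
            for d in np.arange(1, 10.5, 0.5)       # candidate doses (U)
            for t in range(0, 45, 5)),             # injection-to-meal times (min)
           key=lambda r: r[0])
print(f"worst-case risk {best[0]:.1f} at dose {best[1]} U, {best[2]} min before meal")
```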

Relevance: 30.00%

Abstract:

A new method for assessing forecast skill and predictability that involves the identification and tracking of extratropical cyclones has been developed and implemented to obtain detailed information about the prediction of cyclones that cannot be obtained from more conventional analysis methodologies. The cyclones were identified and tracked along the forecast trajectories, and statistics were generated to determine the rate at which the position and intensity of the forecasted storms diverge from the analyzed tracks as a function of forecast lead time. The results show a higher level of skill in predicting the position of extratropical cyclones than the intensity. They also show that there is potential to improve the skill in predicting the position by 1-1.5 days and the intensity by 2-3 days, via improvements to the forecast model. Further analysis shows that forecasted storms move at a slower speed than analyzed storms on average, and that there is a larger error in the predicted amplitudes of intense storms than of weaker storms. The results also show that some storms can be predicted up to 3 days before they are identified as an 850-hPa vorticity center in the analyses. In general, the results show a higher level of skill in the Northern Hemisphere (NH) than in the Southern Hemisphere (SH); however, the rapid growth of NH winter storms is not very well predicted. The impact that observations of different types have on the prediction of extratropical cyclones has also been explored, using forecasts integrated from analyses that were constructed from reduced observing systems. Terrestrial, satellite, and surface-based observing systems were investigated, and the results showed that the predictive skill of the terrestrial system was superior to the satellite system in the NH. Further analysis showed that the satellite system was not very good at predicting the growth of the storms. In the SH the terrestrial system has significantly less skill than the satellite system, highlighting the dominance of satellite observations in this hemisphere. The surface system has very poor predictive skill in both hemispheres.
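The basic verification statistic, the separation between matched forecast and analyzed storm centers as a function of lead time, can be sketched as follows; the two short tracks are hypothetical stand-ins for tracker output.

```python
# Position-error statistic for matched forecast/analysis cyclone tracks.
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Matched (lead_time_h, lat, lon) samples for one storm: analysis vs forecast.
analysis = np.array([(0, 45.0, -40.0), (24, 47.0, -35.0), (48, 49.5, -29.0)])
forecast = np.array([(0, 45.0, -40.0), (24, 46.5, -36.2), (48, 48.2, -31.5)])

# Position error grows with lead time as the forecast track diverges.
for (lead, alat, alon), (_, flat, flon) in zip(analysis, forecast):
    err = great_circle_km(alat, alon, flat, flon)
    print(f"lead {int(lead):2d} h: position error {err:6.1f} km")
```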