939 results for Operational speed
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between observed and model‐predicted speeds is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, the MSE is shown to be frequently inadequate as a “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected from the observed and model time series and associated with one another. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and to guide improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
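As a rough illustration of the two kinds of figures of merit discussed in this abstract, the sketch below computes an MSE between observed and predicted speed series and performs a simple event matching that counts hits, misses, and false alarms. The detection threshold, matching window, and function names are illustrative assumptions, not the criteria used in the study.

```python
import numpy as np

def mean_square_error(observed, predicted):
    """MSE between observed and predicted solar wind speed series (km/s)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean((observed - predicted) ** 2)

def detect_hse_onsets(speed, threshold=500.0):
    """Indices where the speed series first rises above a (hypothetical) HSE threshold."""
    above = np.asarray(speed) > threshold
    return np.where(above[1:] & ~above[:-1])[0] + 1

def match_events(obs_onsets, model_onsets, window=12):
    """Count hits, misses, and false alarms, pairing events within +/- `window` samples."""
    hits, used = 0, set()
    for o in obs_onsets:
        candidates = [m for m in model_onsets if abs(m - o) <= window and m not in used]
        if candidates:
            hits += 1
            used.add(min(candidates, key=lambda m: abs(m - o)))
    misses = len(obs_onsets) - hits
    false_alarms = len(model_onsets) - hits
    return hits, misses, false_alarms
```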
Abstract:
Prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures, is essential for space weather forecasting. To achieve forecast lead times of a day or more, such predictions must be made on the basis of remote solar observations. A number of empirical prediction schemes have been proposed to forecast the transit time and speed of coronal mass ejections (CMEs) at 1 AU. However, the current lack of magnetic field measurements in the corona severely limits our ability to forecast the 1 AU magnetic field strengths resulting from interplanetary CMEs (ICMEs). In this study we investigate the relation between the characteristic magnetic field strengths and speeds of both magnetic cloud and noncloud ICMEs at 1 AU. Correlation between field and speed is found to be significant only in the sheath region ahead of magnetic clouds, not within the clouds themselves. The lack of such a relation in the sheaths ahead of noncloud ICMEs is consistent with such ICMEs being skimming encounters of magnetic clouds, though other explanations are also put forward. Linear fits to the radial speed profiles of ejecta reveal that faster-traveling ICMEs are also expanding more at 1 AU. We combine these empirical relations to form a prediction scheme for the magnetic field strength in the sheaths ahead of magnetic clouds and also suggest a method for predicting the radial speed profile through an ICME on the basis of upstream measurements.
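The abstract above does not spell out the prediction scheme itself; the minimal sketch below only illustrates the general shape of such an empirical scheme, combining a linear sheath-field relation with a linear radial speed profile through the ejecta. All coefficients and function names are hypothetical placeholders, not the fitted values from the study.

```python
def predict_sheath_field(icme_speed_kms, a=0.02, b=-2.0):
    """Hypothetical linear relation B_sheath = a * v + b (nT); coefficients are placeholders."""
    return a * icme_speed_kms + b

def radial_speed_profile(v_leading_kms, expansion_slope_kms_per_hr, hours_after_leading_edge):
    """Linear speed profile through the ejecta; a steeper slope corresponds to stronger expansion."""
    return v_leading_kms + expansion_slope_kms_per_hr * hours_after_leading_edge

# Example usage with placeholder numbers: a 600 km/s ICME, sampled 10 hours behind its leading edge.
b_sheath = predict_sheath_field(600.0)
v_inside = radial_speed_profile(600.0, expansion_slope_kms_per_hr=-3.0, hours_after_leading_edge=10.0)
```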
Abstract:
The difference between cirrus emissivities at 8 and 11 μm is sensitive to the mean effective ice crystal size of the cirrus cloud, De. By using single scattering properties of ice crystals shaped as planar polycrystals, diameters of up to about 70 μm can be retrieved, instead of up to 45 μm when assuming spheres or hexagonal columns. The method described in this article is used for a global determination of mean effective ice crystal sizes of cirrus clouds from TOVS satellite observations. A sensitivity study of the De retrieval with respect to uncertainties in the assumed ice crystal shape, size distributions, and temperature profiles, as well as in vertical and horizontal cloud heterogeneities, shows that uncertainties can be as large as 30%. However, the TOVS data set is one of the few data sets that provide global and long-term coverage. Analysis of the years 1987–1991 shows that the retrieved effective ice crystal diameters De are stable from year to year. For 1990 a global median De of 53.5 μm was determined. Averages distinguishing ocean/land, season, and latitude lie between 23 μm in winter over Northern Hemisphere midlatitude land and 64 μm in the tropics. In general, larger De values are found in regions with higher atmospheric water vapor and for cirrus with a smaller effective emissivity.
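The retrieval described above amounts to inverting a monotonic relation between the 8–11 μm emissivity difference and De. A minimal sketch of such an inversion by interpolation is shown below; the lookup-table values are hypothetical placeholders, not the single-scattering results for planar polycrystals used in the article.

```python
import numpy as np

# Hypothetical lookup table relating the 8-11 micron emissivity difference to D_e (micrometres).
# Values are placeholders chosen only to be monotonic so that interpolation can invert the relation.
DELTA_EPS_TABLE = np.array([0.02, 0.05, 0.09, 0.14, 0.20])
DE_TABLE_UM = np.array([70.0, 50.0, 35.0, 25.0, 15.0])

def retrieve_de(delta_emissivity):
    """Invert the (hypothetical) Delta-epsilon -> D_e relation by linear interpolation."""
    return float(np.interp(delta_emissivity, DELTA_EPS_TABLE, DE_TABLE_UM))
```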
Abstract:
During the past 15 years, a number of initiatives have been undertaken at national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d’Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using ‘altimetry-only’ or ‘multi-data’ set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third one, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems. This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
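The assimilation methods in the SAM hierarchy all build on a least-squares analysis update of the Kalman type. The sketch below shows that generic update step only; it is a textbook formulation, not the reduced-order or SEEK-based implementations used in MERCATOR.

```python
import numpy as np

def kalman_analysis(xb, Pb, y, H, R):
    """One least-squares analysis step: xa = xb + K (y - H xb), with K = Pb H^T (H Pb H^T + R)^-1.

    xb : background state (n,)        Pb : background error covariance (n, n)
    y  : observations (m,)            H  : linear observation operator (m, n)
    R  : observation error covariance (m, m)
    """
    S = H @ Pb @ H.T + R                      # innovation covariance
    K = Pb @ H.T @ np.linalg.inv(S)           # Kalman gain
    xa = xb + K @ (y - H @ xb)                # analysis state
    Pa = (np.eye(len(xb)) - K @ H) @ Pb       # analysis error covariance
    return xa, Pa
```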
Abstract:
The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) is a World Weather Research Programme project. One of its main objectives is to enhance collaboration on the development of ensemble prediction between operational centers and universities by increasing the availability of ensemble prediction system (EPS) data for research. This study analyzes the prediction of Northern Hemisphere extratropical cyclones by nine different EPSs archived as part of the TIGGE project for the 6-month time period of 1 February 2008–31 July 2008, which included a sample of 774 cyclones. An objective feature tracking method has been used to identify and track the cyclones along the forecast trajectories. Forecast verification statistics have then been produced [using the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analysis as the truth] for cyclone position, intensity, and propagation speed, showing large differences between the different EPSs. The results show that the ECMWF ensemble mean and control have the highest level of skill for all cyclone properties. The Japan Meteorological Agency (JMA), the National Centers for Environmental Prediction (NCEP), the Met Office (UKMO), and the Canadian Meteorological Centre (CMC) have 1 day less skill for the position of cyclones throughout the forecast range. The relative performance of the different EPSs remains the same for cyclone intensity except for NCEP, which has larger errors than for position. NCEP, the Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), and the Australian Bureau of Meteorology (BoM) all have faster intensity error growth in the earlier part of the forecast. They are also very underdispersive and significantly underpredict intensities, perhaps due to the comparatively low spatial resolutions of these EPSs not being able to accurately model the tilted structure essential to cyclone growth and decay. There is very little difference between the levels of skill of the ensemble mean and control for cyclone position, but the ensemble mean provides an advantage over the control for all EPSs except CPTEC in cyclone intensity and there is an advantage for propagation speed for all EPSs. ECMWF and JMA have an excellent spread–skill relationship for cyclone position. The EPSs are all much more underdispersive for cyclone intensity and propagation speed than for position, with ECMWF and CMC performing best for intensity and CMC performing best for propagation speed. ECMWF is the only EPS to consistently overpredict cyclone intensity, although the bias is small. BoM, NCEP, UKMO, and CPTEC significantly underpredict intensity and, interestingly, all the EPSs underpredict the propagation speed, that is, the cyclones move too slowly on average in all EPSs.
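The verification statistics described here rest on comparing tracked forecast cyclones against an analysis taken as truth. The sketch below illustrates one such diagnostic, an ensemble-mean position error and the spread about the ensemble mean, computed from great-circle distances. It is a simplified stand-in, not the objective feature-tracking method of the study, and the naive averaging of member latitudes and longitudes is an assumption that only holds away from the dateline and the poles.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    cos_angle = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return EARTH_RADIUS_KM * np.arccos(np.clip(cos_angle, -1.0, 1.0))

def position_error_and_spread(member_lats, member_lons, truth_lat, truth_lon):
    """Ensemble-mean position error vs. the analysis 'truth', and mean spread about the ensemble mean."""
    mean_lat, mean_lon = np.mean(member_lats), np.mean(member_lons)
    error = great_circle_km(mean_lat, mean_lon, truth_lat, truth_lon)
    spread = np.mean([great_circle_km(la, lo, mean_lat, mean_lon)
                      for la, lo in zip(member_lats, member_lons)])
    return error, spread
```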
Abstract:
Building energy consumption (BEC) accounting and assessment is fundamental work for the development of building energy efficiency (BEE). In the existing Chinese statistical yearbooks there is no specific item for BEC accounting; the relevant data are scattered and mixed with consumption from other industries. Approximate BEC data can nevertheless be derived from the existing energy statistical yearbooks. For BEC assessment, the calorific values of different energy carriers are conventionally adopted in the energy accounting and assessment field. This methodology has yielded many useful conclusions for energy efficiency development, but it considers only the quantity of energy and omits the question of energy quality. An exergy methodology is therefore put forward to assess BEC, so that both the quantity and the quality of energy are taken into account. To illustrate BEC accounting and exergy assessment, the case of Chongqing in 2004 is presented. Based on the exergy analysis, the BEC of Chongqing in 2004 accounts for 17.3% of total energy consumption, a result close to that of the traditional methodology. For energy supply efficiency, however, the difference is pronounced: 0.417 under the exergy methodology versus 0.645 under the traditional methodology.
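The contrast between the traditional (energy-quantity) assessment and the exergy assessment can be illustrated by weighting each carrier by a quality factor. The carrier list, quality factors, and consumption figures in the sketch below are hypothetical placeholders, not the Chongqing 2004 data or the factors used in the study.

```python
# Hypothetical exergy quality factors per unit of energy; illustrative placeholders only.
QUALITY_FACTORS = {"electricity": 1.00, "coal": 1.06, "natural_gas": 1.04, "heat_80C": 0.17}

def energy_and_exergy_totals(consumption_gj):
    """Return the plain energy total and the exergy-weighted total for a set of carriers."""
    energy = sum(consumption_gj.values())
    exergy = sum(q * QUALITY_FACTORS[carrier] for carrier, q in consumption_gj.items())
    return energy, exergy

# Example with placeholder consumption figures (GJ): the exergy total discounts low-grade heat.
energy, exergy = energy_and_exergy_totals({"electricity": 120.0, "coal": 300.0, "heat_80C": 80.0})
```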
Abstract:
In the reliability literature, maintenance time is usually ignored during the optimization of maintenance policies. In some scenarios, costs due to system failures may vary with time, and ignoring maintenance time leads to unrealistic results. This paper develops maintenance policies for such situations, where the system under study alternates between two successive states: up and down. The costs due to system failure in the up state consist of both business losses and maintenance costs, whereas those in the down state include only maintenance costs. We consider three models, A, B, and C. Model A performs only corrective maintenance (CM). Model B performs sequential imperfect preventive maintenance (PM) together with CM. Model C performs periodic PM together with CM; this PM restores the system to the state just after the latest CM. The CM in this paper is an imperfect repair. Finally, the impact of these maintenance policies is illustrated through numerical examples.
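As an illustration of the kind of trade-off such models optimize, the sketch below evaluates the long-run cost rate of a periodic-PM policy with minimal corrective repairs between PMs under Weibull failures. This is a standard textbook simplification, not the paper's Models A–C (which also account for maintenance time and up/down-state costs); all parameters are placeholders.

```python
import numpy as np

def expected_failures_weibull(T, beta, eta):
    """Cumulative hazard H(T) = (T/eta)**beta: expected failures over (0, T] under minimal repair."""
    return (T / eta) ** beta

def pm_cost_rate(T, c_pm, c_cm, beta, eta):
    """Long-run cost per unit time for periodic PM every T time units plus minimal corrective repairs."""
    return (c_pm + c_cm * expected_failures_weibull(T, beta, eta)) / T

# Sweep candidate PM intervals with placeholder costs and Weibull parameters, then pick the cheapest.
intervals = np.linspace(10.0, 500.0, 50)
rates = [pm_cost_rate(T, c_pm=100.0, c_cm=400.0, beta=2.0, eta=200.0) for T in intervals]
best_interval = intervals[int(np.argmin(rates))]
```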
Abstract:
The priming effects of cooperation versus individualism on changeover speed were investigated in a 4 × 100-m relay race. Ten teams of four adult beginner athletes ran two relays, a pretest race and an experimental race 3 weeks later. Just before the experimental race, athletes were primed with either cooperation or individualism through a scrambled-sentence task. Compared with the pretest performance, cooperation priming improved baton speed in the exchange zone (+30 cm/s). Individualism priming did not impair changeover performance. The boundary conditions of priming effects as applied to collective and interdependent tasks are discussed within the implicit coordination framework.
Abstract:
This paper deals with the key issues encountered in testing during the development of high-speed networking hardware systems by documenting a practical method for "real-life like" testing. The proposed method is enabled by modern, commonly available Field Programmable Gate Array (FPGA) technology. Innovative application of standard FPGA blocks, in combination with reconfigurability, forms the backbone of the method. A detailed elaboration of the method is given so that it can serve as a general reference. The method is fully characterised and compared to alternatives through a case study, which shows it to be the most efficient and effective option at a reasonable cost.