53 results for k-Means algorithm
Abstract:
Boreal winter wind storm situations over Central Europe are investigated by means of an objective cluster analysis. Surface data from the NCEP Reanalysis and from an ECHAM4/OPYC3 greenhouse-gas (GHG) climate change simulation (IS92a) are considered. To achieve an optimum separation of clusters of extreme storm conditions, 55 clusters of weather patterns are differentiated. To reduce the computational effort, a PCA is initially performed, leading to a data reduction of about 98 %. The clustering itself is computed on 3-day periods constructed from the first six PCs using the k-means clustering algorithm. The applied method enables an evaluation of the time evolution of the synoptic developments. The climate change signal is constructed by projecting the GCM simulation onto the EOFs obtained from the NCEP Reanalysis. Consequently, the same clusters are obtained and the frequency distributions can be compared. For Central Europe, four primary storm clusters are identified. These clusters capture almost 72 % of the historical extreme storm events while accounting for only 5 % of the total relative frequency. Moreover, they show a statistically significant signature in the associated wind fields over Europe. An increased frequency of Central European storm clusters is detected under enhanced GHG conditions, associated with a strengthening of the pressure gradient over Central Europe. Consequently, more intense wind events over Central Europe are expected. The presented algorithm will be highly valuable for analysing the very large data volumes required for, e.g., multi-model ensemble analysis, particularly because of the enormous data reduction.
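The PCA-then-k-means pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data: the six retained PCs match the abstract, but k = 5 replaces the paper's 55 clusters, and the array sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the surface fields: 300 "3-day periods" x 200 grid points.
X = rng.normal(size=(300, 200))
X -= X.mean(axis=0)                       # centre the data before the PCA

# PCA via SVD; keep the first six principal components, as in the abstract.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :6] * s[:6]                 # per-period PC scores (large data reduction)

def kmeans(data, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means: assign each point to its nearest centroid, re-average."""
    r = np.random.default_rng(seed)
    centroids = data[r.choice(len(data), k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # guard against empty clusters
                centroids[j] = data[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(scores, k=5)   # the paper uses 55 clusters; 5 here for brevity
```

Clustering in the reduced PC space rather than on the raw grid is what makes the method cheap enough for multi-model ensembles.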
Abstract:
Background: The validity of ensemble averaging of event-related potential (ERP) data has been questioned, because it assumes that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. New method: We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). Results: After validating the pipeline on simulated data, we tested it on data from two experiments: a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of six clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership.
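The bootstrap-based step for choosing the number of clusters can be sketched roughly as follows. This is a simplified illustration on synthetic data: resamples are clustered, all points are relabelled with each run's centroids, and the runs' pairwise co-membership agreement is averaged. The paper's exact SI definition and the GA initialisation are not reproduced here.

```python
import numpy as np

def kmeans_centroids(X, k, seed=0, n_iter=50):
    """Minimal Lloyd's k-means; returns the final centroids."""
    r = np.random.default_rng(seed)
    C = X[r.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        lab = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(lab == j):          # guard against empty clusters
                C[j] = X[lab == j].mean(axis=0)
    return C

def assign(X, C):
    """Nearest-centroid labels for every point."""
    return np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)

def stability_index(X, k, n_boot=8, seed=0):
    """Cluster bootstrap resamples, relabel all points with each run's centroids,
    and average the pairwise co-membership agreement between runs."""
    r = np.random.default_rng(seed)
    labelings = []
    for b in range(n_boot):
        idx = r.choice(len(X), len(X), replace=True)
        labelings.append(assign(X, kmeans_centroids(X[idx], k, seed=b)))
    iu = np.triu_indices(len(X), k=1)
    scores = []
    for a in range(n_boot):
        for b in range(a + 1, n_boot):
            sa = (labelings[a][:, None] == labelings[a][None, :])[iu]
            sb = (labelings[b][:, None] == labelings[b][None, :])[iu]
            scores.append(np.mean(sa == sb))
    return float(np.mean(scores))

# Two well-separated groups of "trials": the index should be near 1 for k = 2.
r = np.random.default_rng(1)
X = np.vstack([r.normal(0.0, 0.3, (60, 2)), r.normal(5.0, 0.3, (60, 2))])
si = stability_index(X, 2)
```

A stable k gives consistently reproducible partitions across resamples, which is the signal the SI is meant to capture.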
Abstract:
The k-means cluster technique is used to examine 43 yr of daily winter Northern Hemisphere (NH) polar stratospheric data from the 40-yr ECMWF Re-Analysis (ERA-40). The results show that the NH winter stratosphere exists in two natural, well-separated states. In total, 10% of the analyzed days exhibit a warm disturbed state that is typical of sudden stratospheric warming events. The remaining 90% of the days are in a state typical of a colder undisturbed vortex. These states are determined objectively, with no preconceived notion of the groups. The two stratospheric states are described and compared with alternative indicators of the polar winter flow, such as the northern annular mode. It is shown that the zonally averaged zonal winds in the polar upper stratosphere at 7 hPa can best distinguish between the two states, using a threshold value of 4 m s⁻¹, which is remarkably close to the standard WMO criterion for major warming events. The analysis also determines that there are no further divisions within the warm state, indicating that there is no well-designated threshold between major and minor warmings, nor between split and displaced vortex events. These different manifestations are simply members of a continuum of warming events.
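The two-state separation, and the way a single wind threshold falls out of it, can be illustrated with a toy one-dimensional k-means. The synthetic wind values below are arbitrary and do not reproduce the paper's 4 m s⁻¹ threshold; only the mechanism is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic zonal-mean zonal winds at 7 hPa: 90 % undisturbed westerly days,
# 10 % disturbed, warming-like days with weak or reversed flow (m/s).
u = np.concatenate([rng.normal(25.0, 6.0, 900), rng.normal(-10.0, 5.0, 100)])

# One-dimensional k-means with k = 2 (Lloyd's algorithm).
c = np.array([u.min(), u.max()])          # initialise at the data extremes
for _ in range(50):
    labels = np.abs(u[:, None] - c[None, :]).argmin(axis=1)
    c = np.array([u[labels == 0].mean(), u[labels == 1].mean()])
labels = np.abs(u[:, None] - c[None, :]).argmin(axis=1)   # final assignment

# The nearest-centroid decision boundary lies midway between the two centroids,
# which is how a single wind threshold separating the states emerges.
threshold = c.mean()
disturbed_fraction = float(np.mean(labels == c.argmin()))
```

With two clusters in one dimension, classification reduces to a comparison against one number, mirroring how the objective clustering recovers a WMO-like wind criterion.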
Abstract:
Extratropical transition (ET) has eluded objective identification since the realisation of its existence in the 1970s. Recent advances in numerical models have provided data of higher resolution than previously available. In conjunction with this, an objective characterisation of the structure of a storm has now become widely accepted in the literature. Here we present a method of combining these two advances to provide an objective definition of ET. The approach involves applying K-means clustering to isolate different life-cycle stages of cyclones and then analysing the progression through these stages. This methodology is then tested by applying it to five recent years of European Centre for Medium-Range Weather Forecasts operational analyses. It is found that this method is able to determine the general characteristics of ET in the Northern Hemisphere. Between 2008 and 2012, 54% (±7; 32 of 59) of Northern Hemisphere tropical storms are estimated to undergo ET. There is great variability across basins and times of year. To fully capture all instances of ET, it is necessary to introduce and characterise multiple pathways through transition. Only one of the three required transition types has previously been well studied. A brief description of the other transition types is given, along with illustrative storms, to assist further study.
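The stage-clustering idea can be sketched as follows. The 2-D "structure parameters", the two regimes, and k = 2 are all hypothetical stand-ins chosen for brevity; the abstract does not specify the actual phase-space variables or number of stages.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical 2-D cyclone-structure parameters per snapshot (e.g. core symmetry
# vs. thermal-core warmth), with a tropical and an extratropical regime.
tropical = rng.normal([1.0, 1.0], 0.15, size=(200, 2))
extratropical = rng.normal([-1.0, -1.0], 0.15, size=(200, 2))
snapshots = np.vstack([tropical, extratropical])

# K-means (k = 2) to isolate life-cycle stages; deterministic init, one point per regime.
C = snapshots[[0, -1]].astype(float)
for _ in range(30):
    lab = np.linalg.norm(snapshots[:, None] - C[None], axis=2).argmin(axis=1)
    C = np.array([snapshots[lab == j].mean(axis=0) for j in (0, 1)])

def undergoes_et(track, C):
    """Label each snapshot of one storm's track with its nearest stage cluster and
    flag the storm when the sequence ends in a different stage than it began."""
    stages = np.linalg.norm(track[:, None] - C[None], axis=2).argmin(axis=1)
    return bool(stages[0] != stages[-1])

transitioning = np.linspace([1.0, 1.0], [-1.0, -1.0], 12)   # drifts between regimes
steady = np.full((12, 2), 1.0)                              # stays tropical throughout
```

Analysing the label sequence of each track, rather than single snapshots, is what turns a clustering into a definition of transition.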
Abstract:
Precipitation over western Europe (WE) is projected to increase (decrease) roughly poleward (equatorward) of 50°N during the 21st century. These changes are generally attributed to alterations in the regional large-scale circulation, e.g., the jet stream, cyclone activity, and blocking frequencies. A novel weather typing within the sector (30°W–10°E, 25–70°N) is used for a more comprehensive dynamical interpretation of the precipitation changes. A k-means clustering of daily mean sea level pressure was undertaken for the ERA-Interim reanalysis (1979–2014). Eight weather types are identified: S1, S2, S3 (summertime types), W1, W2, W3 (wintertime types), B1, and B2 (blocking-like types). Their distinctive dynamical characteristics allow the main large-scale precipitation-driving mechanisms to be identified. Simulations with 22 Coupled Model Intercomparison Project phase 5 (CMIP5) models for recent climate conditions show biases in reproducing the observed seasonality of the weather types. In particular, an overestimation of the frequencies of weather types associated with zonal airflow is identified. Considering projections under the Representative Concentration Pathway 8.5 (RCP8.5) scenario over 2071–2100, the frequencies of the three driest types (S1, B2, and W3) are projected to increase (mainly S1, +4%) at the expense of the rainiest types, particularly W1 (−3%). These changes explain most of the projected precipitation changes over WE. However, a weather-type-independent background signal is identified (an increase/decrease in precipitation over northern/southern WE), suggesting modifications in precipitation-generating processes and/or model inability to accurately simulate these processes. Despite these caveats, which must be duly taken into account when interpreting the precipitation scenarios for WE, our approach permits a better understanding of the projected precipitation trends over WE.
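The comparison of weather-type frequencies between a reference and a scenario period, given fixed type centroids, can be sketched like this. Centroids, occurrence probabilities, and the ±4 % shift are entirely synthetic, chosen only to mimic the kind of frequency change reported above.

```python
import numpy as np

rng = np.random.default_rng(2)
k, d = 8, 8
centroids = 6.0 * np.eye(k, d)            # eight well-separated, hypothetical type centroids

def classify(days, centroids):
    """Assign each day to its nearest weather-type centroid (Euclidean distance)."""
    return np.linalg.norm(days[:, None] - centroids[None], axis=2).argmin(axis=1)

def frequencies(labels, k):
    """Relative frequency of each weather type."""
    return np.bincount(labels, minlength=k) / len(labels)

# Hypothetical occurrence probabilities: the scenario shifts 4 % of days from a
# "rainy" type (index 3, W1-like) to a "dry" type (index 0, S1-like).
p_ref = np.full(k, 1 / k)
p_scn = p_ref.copy()
p_scn[0] += 0.04
p_scn[3] -= 0.04
ref_days = centroids[rng.choice(k, 4000, p=p_ref)] + 0.3 * rng.normal(size=(4000, d))
scn_days = centroids[rng.choice(k, 4000, p=p_scn)] + 0.3 * rng.normal(size=(4000, d))

# Classify both periods against the SAME centroids, then difference the frequencies.
delta = frequencies(classify(scn_days, centroids), k) - frequencies(classify(ref_days, centroids), k)
```

Keeping the centroids fixed across periods is what makes the frequency distributions directly comparable, as in the abstract's reanalysis-versus-projection setup.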
Abstract:
An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5° by 0.5° latitude-longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion-compensated interpolation based on the estimation of motion vectors with a hierarchical block matching scheme. Examples are shown of the various stages in the process, as well as of the usefulness of this type of data in GCM validation.
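The kernel-weighted gridding with zenith-angle downweighting can be sketched as follows. A simplified flat-Earth Gaussian kernel stands in for the paper's hierarchical spherical kernel estimators, and the kernel width and cosine weighting are assumptions, not the paper's choices.

```python
import numpy as np

def grid_brightness(lat, lon, tb, zenith, grid_lat, grid_lon, sigma_km=75.0):
    """Kernel-weighted averaging of pixel brightness temperatures onto a lat-lon
    grid, downweighting limb pixels by satellite zenith angle."""
    R = 6371.0  # Earth radius, km
    out = np.full((len(grid_lat), len(grid_lon)), np.nan)
    for i, gla in enumerate(grid_lat):
        for j, glo in enumerate(grid_lon):
            # Approximate great-circle offsets near the grid point (flat-Earth).
            dx = np.radians(lon - glo) * R * np.cos(np.radians(gla))
            dy = np.radians(lat - gla) * R
            w = np.exp(-(dx**2 + dy**2) / (2.0 * sigma_km**2))   # distance kernel
            w = w * np.cos(np.radians(zenith))                   # near-nadir weighted up
            if w.sum() > 1e-12:
                out[i, j] = np.sum(w * tb) / w.sum()
    return out

# Pixels scattered around 0°N, 0°E, all at 280 K, with mixed viewing angles:
rng = np.random.default_rng(5)
lat = rng.uniform(-0.5, 0.5, 500)
lon = rng.uniform(-0.5, 0.5, 500)
tb = np.full(500, 280.0)
zen = rng.uniform(0.0, 60.0, 500)
grid = grid_brightness(lat, lon, tb, zen, np.array([0.0]), np.array([0.0]))
```

Because the weights are normalised, a field that is constant across the contributing pixels grids back to that constant regardless of the zenith-angle weighting.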
Abstract:
The North Pacific and Bering Sea regions represent loci of cyclogenesis and storm track activity. In this paper, climatological properties of extratropical storms in the North Pacific/Bering Sea are presented, based upon aggregate statistics of individual storm tracks calculated by means of a feature-tracking algorithm run on NCEP–NCAR reanalysis data from 1948/49 to 2008, provided by the NOAA/Earth System Research Laboratory and the Cooperative Institute for Research in Environmental Sciences, Climate Diagnostics Center. Storm identification is based on the 850-hPa relative vorticity field (ζ) instead of the often-used mean sea level pressure; ζ is a prognostic field, a good indicator of synoptic-scale dynamics, and is directly related to the wind speed. Emphasis extends beyond winter to provide detailed consideration of all seasons. Results show that the interseasonal variability is not as large during the spring and autumn seasons. Most of the storm variables (genesis, intensity, track density) exhibited a pattern of maxima oriented along a zonal axis. From season to season this axis underwent a north–south shift and, in some cases, a rotation to the northeast; this was determined to be a result of zonal heating variations and midtropospheric moisture patterns. Barotropic processes have an influence in shaping the downstream end of storm tracks and, together with the blocking influence of the coastal orography of northwest North America, result in high lysis concentrations, effectively making the Gulf of Alaska the “graveyard” of Pacific storms. Summer storms tended to be the longest in duration. Temporal trends tended to be weak over the study area. SST did not emerge as a major cyclogenesis control in the Gulf of Alaska.
Abstract:
We advocate the use of systolic design techniques to create custom hardware for Custom Computing Machines. We have developed a hardware genetic algorithm based on systolic arrays to illustrate the feasibility of the approach. The architecture is independent of the lengths of chromosomes used and can be scaled in size to accommodate different population sizes. An FPGA prototype design can process 16 million genes per second.
Abstract:
The measurement of the impact of technical change has received significant attention in the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article, Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one input or another. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the changes in quantities, which they claim is valid only under Hicks neutrality. Recent advances in the measurement of product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights, and the resulting TFP indices are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
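The share-weighted growth accounting index underlying this approach can be written in a standard Törnqvist form (generic notation, not necessarily the paper's):

\[
\ln\frac{\mathrm{TFP}_t}{\mathrm{TFP}_{t-1}}
= \sum_j \tfrac{1}{2}\left(r_{j,t}+r_{j,t-1}\right)\ln\frac{y_{j,t}}{y_{j,t-1}}
\;-\; \sum_i \tfrac{1}{2}\left(s_{i,t}+s_{i,t-1}\right)\ln\frac{x_{i,t}}{x_{i,t-1}}
\]

where the \(y_j\) are output quantities with revenue shares \(r_j\) and the \(x_i\) are input quantities with cost shares \(s_i\). The correction discussed above amounts to replacing the observed shares with their bias-corrected counterparts before aggregation.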
Abstract:
The convergence speed of the standard Least Mean Square (LMS) adaptive array may be degraded in mobile communication environments. Various variable step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step-size parameter of the standard LMS update. Simulation results show that the proposed algorithm is simpler than, and yields better performance than, conventional variable step-size LMS algorithms.
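The accumulated-error idea can be sketched as follows, in a system-identification setting rather than an adaptive array. The abstract does not give the exact update rule, so the exponential accumulation, the gain `gamma`, the forgetting factor `alpha`, and the clipping bounds below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 4
w_true = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system the filter must identify
x = rng.normal(size=N)                     # input signal
d = np.convolve(x, w_true)[:N] + 0.01 * rng.normal(size=N)   # desired signal + noise

w = np.zeros(M)                 # adaptive filter weights
acc = 0.0                       # accumulated instantaneous (squared) error
alpha, gamma = 0.95, 0.01       # forgetting factor / gain (assumed, not from the paper)
mu_min, mu_max = 1e-4, 0.1      # keep the step size in a stable range
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]                 # current tap-input vector
    e = d[n] - w @ u                             # instantaneous error
    acc = alpha * acc + e * e                    # accumulate the error energy
    mu = np.clip(gamma * acc, mu_min, mu_max)    # large recent error -> larger step
    w = w + mu * e * u                           # standard LMS update, variable step
```

The step size is thus large while the accumulated error is high (fast convergence) and shrinks as the filter converges (low steady-state error), which is the trade-off the abstract describes.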