64 results for NCHS data brief (Series)


Relevance:

30.00%

Publisher:

Abstract:

The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach thanks to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented such systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of these technical uncertainties. Nevertheless, little research has verified the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most inherent factor among the uncertainties of SHM-oriented WSNs. The effects of this factor on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques are then investigated for the case of merging data from multiple sensor setups. The two OMA families selected for this investigation, Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), have both been widely applied over the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model serve as benchmark data after a certain level of noise is added to account for the greater presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations generate multiple DSE-corrupted datasets that facilitate statistical analyses. The results show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, combining the preferred OMA techniques, and using channel projection in the time-domain OMA technique, are recommended to cope with DSE.
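Since the abstract singles out FDD as one of the two OMA families studied, the following minimal sketch shows the core of that technique: build the cross-power spectral density (CPSD) matrix of the acceleration channels and take its SVD at each frequency line. This is a generic textbook FDD, not the paper's implementation; the function name, interface, and Welch parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import csd

def fdd_singular_values(acc, fs, nperseg=1024):
    """Core of Frequency Domain Decomposition (FDD): SVD of the
    cross-power spectral density matrix at each frequency line.
    Peaks in the first singular value s1 indicate modal frequencies;
    the matching singular vector approximates the mode shape.
    acc: (n_samples, n_channels) accelerations; fs: sampling rate (Hz).
    """
    n_ch = acc.shape[1]
    # Frequency grid from one auto-spectrum, then fill the CPSD matrix.
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    # SVD of G at each frequency line.
    s1 = np.empty(len(f))
    shapes = np.empty((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]
        shapes[k] = U[:, 0]
    return f, s1, shapes
```

Peak picking on s1 over frequency then yields the natural frequencies. Because DSE between channels enters G through the phase of the cross terms, a setup like this is where the synchronization effects investigated in the paper would manifest.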

Relevance:

30.00%

Publisher:

Abstract:

Notwithstanding the problems with identifying audiences (cf. Hartley, 1987) or with sampling them (cf. Turner, 2005), we contend that by using social media it is at least possible to gain an understanding of the habits of those who choose to engage with content through social media. In this chapter, we broadly outline the ways in which networks such as Twitter and Facebook can stand as proxies for audiences in a number of scenarios, and enable content creators, networks, and researchers to understand the ways in which audiences come into existence, change over time, and engage with content. Beginning with the classic television audience, we consider the evolution of metrics from baseline volume metrics to the more sophisticated telemetrics that are the focus of our current work. We discuss the evolution of these metrics from principles developed in the field of sabermetrics, and highlight their effectiveness as both a predictor and a baseline for producers and networks to measure the success of their social media campaigns. Moving beyond the evaluation of audience engagement, we then consider the audiences themselves. Building on Hartley's argument that audiences are imagined constructs (1987, p. 125), we demonstrate the continual shift of Australian television audiences, from episode to episode and series to series, showing through our map of the Australian Twittersphere (Bruns, Burgess & Highfield, 2014) both the variation among those who directly engage with television content and those who are exposed to it through their social media networks. By exploring overlaps between sporting events (such as the NRL and AFL Grand Finals), reality TV (such as Big Brother, My Kitchen Rules & Biggest Loser), soaps (e.g. Bold & The Beautiful, Home & Away), and current affairs programming (e.g. morning television & A Current Affair), we then discuss to what extent it is possible to profile and categorize Australian television audiences. Finally, we move beyond television audiences to consider audiences around social media platforms themselves. Building on our map of the Australian Twittersphere (Bruns, Burgess & Highfield, 2014) and a pool of 5000 active Australian accounts, we discuss the interconnectedness of audiences around particular subjects, and how specific topics spread throughout the Twitter userbase. Also, by using Twitter as a proxy, we consider the careers of a number of popular YouTubers, utilizing a method we refer to as Twitter Accession charts (Bruns & Woodford, 2014) to identify their growth curves and relate them to specific events in each YouTuber's career, be they viral videos or collaborations, to discuss how audiences form around specific content creators.

Relevance:

30.00%

Publisher:

Abstract:

A new hypothesis test for classifying stationary time series, based on bias-adjusted estimators of the fitted autoregressive model, is proposed. It is shown theoretically that the proposed test has desirable properties. Simulation results show that when time series are short, the size and power estimates of the proposed test are reasonably good, so the test is reliable for discriminating between short time series. As the length of the series increases, the performance of the test improves, but the benefit of bias adjustment diminishes. The proposed test is applied to two real data sets: the annual real GDP per capita of six European countries, and the quarterly real GDP per capita of five European countries. The results demonstrate that the test performs reasonably well in classifying relatively short time series.
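The abstract does not give the test statistic, so the sketch below is only a rough illustration of the idea in Python: the restriction to AR(1), Kendall's first-order bias correction, and the asymptotic-variance z-statistic are all assumptions made for the example, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

def ar1_bias_adjusted(x):
    """Bias-adjusted AR(1) coefficient: least-squares estimate plus
    Kendall's first-order correction (1 + 3*rho)/n. This particular
    correction is an illustrative choice, not necessarily the paper's."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    rho = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return rho + (1.0 + 3.0 * rho) / n

def same_process_test(x, y):
    """Test that two series share the same AR(1) coefficient, using
    bias-adjusted estimates and the asymptotic variance (1 - rho^2)/n;
    a stand-in for the paper's statistic. Returns (z, two-sided p)."""
    r1, r2 = ar1_bias_adjusted(x), ar1_bias_adjusted(y)
    v = (1.0 - r1 ** 2) / len(x) + (1.0 - r2 ** 2) / len(y)
    z = (r1 - r2) / np.sqrt(v)
    return z, 2.0 * norm.sf(abs(z))
```

For short series the raw least-squares estimate of rho is noticeably biased toward zero, which is why a correction of this kind matters most in exactly the short-series regime the abstract targets.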

Relevance:

30.00%

Publisher:

Abstract:

Time series classification has been explored extensively in many fields of study. Most methods are based on historical or current information extracted from the data. However, if interest lies in a specific future time period, methods that relate directly to forecasts of the time series are far more appropriate. An approach to time series classification is proposed based on a polarization measure of the forecast densities of the time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, with a stationarity correction applied when necessary. Kernel estimators are then employed to approximate the forecast densities, and the discrepancy between the forecast densities of each pair of time series is estimated by a polarization measure, which evaluates the extent to which the two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
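As a rough Python illustration of the pipeline described (AR fit, bootstrap forecast replicates, kernel densities, overlap), the sketch below uses a plain residual bootstrap and an integral-of-minimum overlap: the paper's bias correction, stationarity correction, and exact polarization measure are not specified in the abstract, so those pieces are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def forecast_replicates(x, p=1, h=1, B=500, seed=None):
    """Bootstrap h-step-ahead forecast replicates from an OLS AR(p)
    fit by resampling residuals. The paper's bias correction and
    stationarity correction are omitted here for brevity."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Regress x_t on an intercept and lags x_{t-1}, ..., x_{t-p}.
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    reps = np.empty(B)
    for b in range(B):
        hist = list(x[-p:])
        for _ in range(h):
            nxt = beta[0] + sum(beta[k + 1] * hist[-k - 1] for k in range(p))
            hist.append(nxt + rng.choice(resid))
        reps[b] = hist[-1]
    return reps

def density_overlap(r1, r2, grid_size=512):
    """Overlap of two kernel-estimated forecast densities,
    approximating the integral of min(f, g): near 1 for matching
    forecasts, near 0 for well-separated ones (a simple
    polarization-style measure; the paper's may differ)."""
    f, g = gaussian_kde(r1), gaussian_kde(r2)
    lo, hi = min(r1.min(), r2.min()), max(r1.max(), r2.max())
    pad = 0.2 * (hi - lo)
    t = np.linspace(lo - pad, hi + pad, grid_size)
    return np.minimum(f(t), g(t)).sum() * (t[1] - t[0])
```

Pairwise overlaps computed this way could then feed a discriminant rule for the supervised case or a standard clustering method for the unsupervised case, matching the two uses described in the abstract.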