197 results for Statistical peak moments


Relevance: 20.00%

Abstract:

More evenly spread demand for public transport throughout the day can reduce a transit service provider's total asset and labour costs. A plausible peak-spreading strategy is to increase peak fares and/or reduce off-peak fares. This paper reviews relevant empirical studies for urban rail systems, as rail transit plays a key role in Australian urban passenger transport and experiences severe peak loading variability. The literature is categorised into four groups: a) passenger opinions on willingness to change travel times, b) valuations of displacement time using stated preference techniques, c) simulations of peak spreading based on trip scheduling models, and d) real-world cases of peak spreading using differential fares. Policy prescriptions are advised to take into account the impacts of travellers' time flexibility and the joint effects of mode shifting and peak spreading. Although it focuses on urban rail, the arguments in this paper are relevant to public transport in general and of value to researchers and practitioners.

Relevance: 20.00%

Abstract:

Statistical methodology was applied to a survey of time-course incidence of four viruses (alfalfa mosaic virus, clover yellow vein virus, subterranean clover mottle virus and subterranean clover red leaf virus) in improved pastures in southern regions of Australia. -from Authors

Relevance: 20.00%

Abstract:

The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, enabling the use of a higher-dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Despite these advantages, the method places a strict requirement on its input: it assumes the training data to be multivariate normal, a condition that is not always satisfied, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setup for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to benchmark structural data from the field.
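A minimal sketch of the two ingredients this abstract describes, assuming Gaussian resampling as the Monte Carlo generator and the covariance condition number as a stand-in data condition index (the paper's actual indices and controls are not reproduced here); all function and parameter names are illustrative:

```python
import numpy as np

def mahalanobis_sq(X_train, x):
    """Mahalanobis squared distance of feature vector x from the training cloud."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    diff = x - mu
    return float(diff @ np.linalg.solve(cov, diff))

def generate_until_converged(X_seed, batch=200, tol=1e-3, max_batches=50, seed=None):
    """Controlled Monte Carlo data generation, loosely sketched: draw synthetic
    samples from a Gaussian fitted to the scarce seed data, and stop once a
    simple data condition index (here the condition number of the sample
    covariance) has converged, avoiding unnecessarily excessive data."""
    rng = np.random.default_rng(seed)
    mu = X_seed.mean(axis=0)
    cov = np.cov(X_seed, rowvar=False)
    X, prev = X_seed.copy(), None
    for _ in range(max_batches):
        X = np.vstack([X, rng.multivariate_normal(mu, cov, size=batch)])
        index = np.linalg.cond(np.cov(X, rowvar=False))
        if prev is not None and abs(index - prev) / prev < tol:
            break
        prev = index
    return X

# Usage: augment a short training record, then score a test feature vector.
X_seed = np.random.default_rng(0).normal(size=(30, 8))   # scarce training data
X_train = generate_until_converged(X_seed)
print(X_train.shape, mahalanobis_sq(X_train, X_seed[0]))
```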

Relevance: 20.00%

Abstract:

Electricity network investment and asset management require accurate estimation of future demand for energy consumption within specified service areas. For this purpose, simple models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict electricity consumption in the residential sector at the Census Collection District (CCD) level over the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74% of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20% is used to independently test the accuracy of the model's predictions against actual consumption. In 90% of cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of actual values, with an overall state-wide error of -1.15%. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas that are most likely to see increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; having such a model gives a quantifiable method for selecting the 'most' appropriate system when a review or upgrade of the network infrastructure is required.
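As a rough illustration of the fit/validate split described above, here is a hedged sketch using ordinary least-squares regression; the predictor set and synthetic stand-in data are assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Stand-in for the merged ABS/utility table: one row per CCD, census-derived
# predictors plus observed consumption in kWh per dwelling per day.
# (Column choice is hypothetical; the paper's exact predictors are not listed.)
rng = np.random.default_rng(0)
n = 9000                                    # roughly 74% of 12,000 CCDs
X = rng.normal(size=(n, 4))                 # e.g. household size, rooms, dwelling type
y = 18 + X @ np.array([2.0, 1.2, -0.8, 0.5]) + rng.normal(0, 2.5, n)

# 80% to establish the regression model, 20% held out for independent testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)

within_5 = np.mean(np.abs(pred - y_te) <= 5.0)    # share within 5 kWh/dwelling/day
overall = (pred.sum() - y_te.sum()) / y_te.sum()  # aggregate over/under-prediction
print(f"within 5 kWh: {within_5:.1%}; overall error: {overall:+.2%}")
```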

Relevance: 20.00%

Abstract:

A pilot experiment was performed using the WOMBAT powder diffraction instrument at ANSTO in which the first neutron diffraction peak (Q0) was measured for D2O flowing in a 2 mm internal diameter aluminium tube. Measurements of Q0 were made at -9, 4.3, 6.9, 12, 18.2 and 21.5 °C. The D2O was circulated using a siphon, with water in the lower reservoir returned to the upper reservoir by a small pump. This enabled stable flow to be maintained for several hours: for example, if the pump flow increased slightly, the upper reservoir level rose, increasing the siphon flow until it matched the return flow. A neutron wavelength of 2.4 Å was used and data were integrated over 60 minutes for each temperature. A jet of nitrogen from a liquid N2 Dewar was directed over the aluminium tube to vary the water temperature. After collection of the data, the d-spacing of the aluminium peaks was used to calculate the temperature of the aluminium within the neutron beam, which was taken as an accurate measure of the water temperature within the beam. SigmaPlot version 12.3 was used to fit a five-parameter Weibull peak to the first neutron diffraction peak. The values of Q0 obtained in this experiment increased with temperature, consistent with data in the literature [1], but were consistently higher than published values for bulk D2O. For example, at 21.5 °C we obtained a value of 2.008 Å-1 for Q0, compared to a literature value of 1.988 Å-1 for bulk D2O at 20 °C, a difference of 1%. Further experiments are required to determine whether this difference is real or artifactual.
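For readers wanting to reproduce the peak-fitting step outside SigmaPlot, a sketch with scipy follows; the functional form is one published statement of SigmaPlot's 'Weibull, 5 Parameter' peak equation and should be treated as an assumption, and the data here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_peak(q, a, b, c, q0, y0):
    """Five-parameter Weibull peak; q0 is the peak position (reported as Q0).
    This form is assumed to match SigmaPlot's 'Weibull, 5 Parameter' equation."""
    k = ((c - 1.0) / c) ** (1.0 / c)
    u = np.abs((q - q0) / b + k)
    return (y0 + a * ((c - 1.0) / c) ** ((1.0 - c) / c)
            * u ** (c - 1.0) * np.exp(-u**c + (c - 1.0) / c))

# Synthetic stand-in for the integrated WOMBAT pattern around the first peak.
rng = np.random.default_rng(1)
q = np.linspace(1.2, 3.0, 200)                        # Å^-1
y = weibull_peak(q, 1.0, 0.6, 2.5, 2.008, 0.1) + rng.normal(0, 0.01, q.size)

popt, _ = curve_fit(weibull_peak, q, y, p0=[1.0, 0.5, 2.0, 2.0, 0.0],
                    bounds=([0, 0.05, 1.05, 1.5, -1], [10, 5, 10, 2.5, 1]))
print(f"fitted Q0 = {popt[3]:.3f} Å^-1")
```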

Relevance: 20.00%

Abstract:

X-ray diffraction structure functions for water flowing in a 1.5 mm diameter siphon in the temperature range 4-63 °C were obtained using a 20 keV beam at the Australian Synchrotron. These functions were compared with structure functions obtained at the Advanced Light Source for a 0.5 mm thick sample of water in the temperature range 1-77 °C irradiated with an 11 keV beam. The two sets of structure functions are similar, but there are subtle differences in the shape and relative position of the two functions, suggesting possible differences between the structure of bulk and siphon water. In addition, the first structural peak (Q0) for water in a siphon showed evidence of a step-wise increase in Q0 with increasing temperature rather than a smoothly varying increase. More experiments are required to investigate this apparent difference.

Relevance: 20.00%

Abstract:

For clinical use of electrocardiogram (ECG) signal analysis, it is important to detect not only the centres of the P wave, the QRS complex and the T wave, but also the time intervals between them, such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, and the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and beat to beat. To address this variability we propose the use of Markov chain Monte Carlo statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal in which details such as wave morphology and noise levels are variable.
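To make the proposed approach concrete, here is a toy Metropolis-Hastings sketch: a synthetic beat is built from three Gaussian waves, and the sampler recovers the R-wave parameters with the other waves held fixed at their true values. This is purely illustrative and is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-beat "ECG": three Gaussian waves (P-, R- and T-like)
# plus noise. Purely illustrative, not a physiological model.
t = np.linspace(0.0, 1.0, 500)
def gauss(t, amp, mu, sig):
    return amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)
truth = gauss(t, 0.15, 0.2, 0.025) + gauss(t, 1.0, 0.5, 0.012) + gauss(t, 0.3, 0.75, 0.04)
data = truth + rng.normal(0.0, 0.05, t.size)

def log_like(theta):
    """Gaussian-noise log-likelihood for the R-wave parameters
    (amplitude, centre, width); the P and T waves are held fixed."""
    amp, mu, sig = theta
    if sig <= 0:
        return -np.inf
    model = gauss(t, 0.15, 0.2, 0.025) + gauss(t, amp, mu, sig) + gauss(t, 0.3, 0.75, 0.04)
    return -0.5 * np.sum((data - model) ** 2) / 0.05**2

# Random-walk Metropolis-Hastings over the R-wave parameters.
theta = np.array([0.8, 0.45, 0.02])       # crude starting point
step = np.array([0.02, 0.005, 0.002])
samples, ll = [], log_like(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, step)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
        theta, ll = prop, ll_prop
    samples.append(theta.copy())
post = np.array(samples[1000:])            # discard burn-in
print("R-wave centre ~= %.3f s (truth 0.5)" % post[:, 1].mean())
```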

Relevance: 20.00%

Abstract:

This chapter addresses data modelling as a means of promoting statistical literacy in the early grades. Consideration is first given to the importance of increasing young children's exposure to statistical reasoning experiences and how data modelling can be a rich means of doing so. Selected components of data modelling are then reviewed, followed by a report on some findings from the third year of a three-year longitudinal study across grades one through three.

Relevance: 20.00%

Abstract:

At NDSS 2012, Yan et al. analyzed the security of several challenge-response user authentication protocols against passive observers, and proposed a generic counting-based statistical attack to recover the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects when the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they did not give details as to how this can be achieved, beyond a few suggestions. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method for theoretically estimating a lower bound on the number of sessions for which a protocol can safely be used against the attack. Our results include: 1) two proposed fixes that make counting-based protocols practically safe against the attack at the cost of usability; 2) the observation that the attack can also be used on non-counting-based protocols as long as challenge generation is contrived; and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to the attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
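A toy version of the counting attack may help: in this sketch the "protocol" answers each random challenge with the number of pass objects it contains, so a pass object raises the mean response of the challenges containing it by roughly one relative to a decoy. This is a deliberately simplified stand-in, not Foxtail itself:

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, CH = 100, 5, 20          # total objects, pass objects, challenge size
secret = set(rng.choice(N, size=K, replace=False).tolist())

def session():
    """One observed round of a toy counting-based protocol: the challenge is
    a random subset of objects, the response is its number of pass objects."""
    ch = rng.choice(N, size=CH, replace=False)
    return ch, sum(int(o in secret) for o in ch)

# Counting attack: tally each object's co-occurring responses. A pass object
# contributes +1 to every challenge containing it, so its mean response is
# systematically higher than a decoy's; rank objects and take the top K.
sessions = [session() for _ in range(300)]
score, seen = np.zeros(N), np.zeros(N)
for ch, r in sessions:
    score[ch] += r
    seen[ch] += 1
mean_resp = np.divide(score, seen, out=np.zeros(N), where=seen > 0)
guess = set(np.argsort(mean_resp)[-K:].tolist())
print("recovered", len(guess & secret), "of", K, "pass objects")
```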

Relevance: 20.00%

Abstract:

This paper provides a contextual reflection for understanding best-practice teaching for first-year design students. The outcome (job) focused approach to higher education has led to some unanticipated collateral damage for students and, in the case we discuss, has altered students' expectations of course delivery, with specific implications and challenges for design educators. This tendency in educational delivery systems is further compounded by the distinct characteristics of Generation Y students within a classroom context. It is our belief that foundational design education must focus more on process than outcomes. Through this research with first-year design students we analyse and raise questions about the curriculum for a Design and Creative Thinking course, in which students not only benefit from learning the theories and processes of design thinking, conceptualisation and creativity, but are also encouraged to see these as essential tools for their education and development as designers. This study considers the challenges within a design environment; specifically, we address the need for process-based learning in contrast to the outcome-focused approach taken by most students. With this approach, students simultaneously learn to be designers and rethink their approach to "doing design".

Relevance: 20.00%

Abstract:

The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time. This gives a measure of the rotting rate, R, of the cotton strips, and R is in turn a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within- and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of time ranging up to 6 weeks. This enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for greater than 90% of the total variation within each treatment combination, which supports summarising the decomposition process by the single parameter R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the derivative of the decomposition rate R with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust to the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of the original.
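The variance argument in the middle of the abstract is the standard delta method; writing S for the measured tensile strength and R = g(S) for the fitted rotting rate (labels assumed for exposition), it reads:

```latex
\operatorname{Var}(R) \;\approx\; \left( \frac{\partial R}{\partial S} \right)^{2} \operatorname{Var}(S)
```

Minimising this expression over burial time is what yields the recommendation to retrieve strips once their strength has fallen to roughly 2/3 of the original.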

Relevance: 20.00%

Abstract:

This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to help the Mahalanobis squared distance–based damage identification method cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. Real vibration datasets from two actual civil engineering structures exhibiting such data (and identification) problems are selected as the test objects, and are shown to require enhancement to consolidate their data condition. By utilising the robust probability measures of the data condition indices in controlled Monte Carlo data generation, together with statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme adaptable to any type of input data with any (original) distributional condition.
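The paper's specific data condition indices are not reproduced here, but multinormality checks of this kind are commonly based on Mardia's multivariate skewness and kurtosis; the sketch below uses those as stand-in indices for the kind of before/after comparison the validation relies on:

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's multivariate skewness/kurtosis p-values, used here as stand-in
    data condition indices (the paper's exact indices are not specified)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    D = Xc @ S_inv @ Xc.T                   # generalized inner products
    b1 = (D ** 3).sum() / n**2              # multivariate skewness
    b2 = (np.diag(D) ** 2).sum() / n        # multivariate kurtosis
    # Approximate tests: skewness ~ chi-square, kurtosis ~ normal under MVN.
    p_skew = stats.chi2.sf(n * b1 / 6.0, p * (p + 1) * (p + 2) // 6)
    p_kurt = 2 * stats.norm.sf(abs((b2 - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)))
    return p_skew, p_kurt

# Example: a short, skewed record fails both tests, while Gaussian synthetic
# data of the same size passes them.
rng = np.random.default_rng(3)
raw = rng.lognormal(size=(40, 4))
print("raw      :", mardia(raw))
print("synthetic:", mardia(rng.multivariate_normal(np.zeros(4), np.eye(4), 40)))
```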

Relevance: 20.00%

Abstract:

A catchment-scale multivariate statistical analysis of hydrochemistry enabled assessment of interactions between alluvial groundwater and Cressbrook Creek, an intermittent drainage system in southeast Queensland, Australia. Hierarchical cluster analyses and principal component analysis were applied to time-series data to evaluate the hydrochemical evolution of groundwater during periods of extreme drought and severe flooding. A simple three-dimensional geological model was developed to conceptualise the catchment morphology and the stratigraphic framework of the alluvium. The alluvium forms a two-layer system with a basal coarse-grained layer overlain by a clay-rich low-permeability unit. In the upper and middle catchment, alluvial groundwater is chemically similar to streamwater, particularly near the creek (reflected by high HCO3/Cl and K/Na ratios and low salinities), indicating a high degree of connectivity. In the lower catchment, groundwater is more saline with lower HCO3/Cl and K/Na ratios, notably during dry periods. Groundwater salinity substantially decreased following severe flooding in 2011, notably in the lower catchment, confirming that flooding is an important mechanism for both recharge and maintaining groundwater quality. The integrated approach used in this study enabled effective interpretation of hydrological processes and can be applied to a variety of hydrological settings to synthesise and evaluate large hydrochemical datasets.
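A compact sketch of the statistical workflow (standardise, PCA, Ward-linkage hierarchical clustering); the feature count, cluster count and random stand-in data are assumptions for illustration, not the paper's dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical hydrochemistry table: one row per sample, columns would be
# major ions and field parameters (e.g. Na, K, Ca, Mg, Cl, HCO3, SO4, EC).
X = np.random.default_rng(7).normal(size=(60, 8))   # stand-in time-series data

Z = StandardScaler().fit_transform(X)               # put ions on a common scale

# Principal component analysis: dominant modes of hydrochemical variation.
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)
print("variance explained:", pca.explained_variance_ratio_)

# Hierarchical cluster analysis (Ward linkage) to group water types.
clusters = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters)[1:])
```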

Relevance: 20.00%

Abstract:

INTRODUCTION
Calculating segmental (vertebral level-by-level) torso masses in Adolescent Idiopathic Scoliosis (AIS) patients allows the gravitational loading on the scoliotic spine during relaxed standing to be estimated.

METHODS
Existing low-dose CT scans were used to calculate vertebral level-by-level torso masses and joint moments occurring in the spine for a group of female AIS patients with right-sided thoracic curves. Image processing software, ImageJ (v1.45, NIH, USA), was used to reconstruct the torso segments and subsequently measure the torso volume and mass corresponding to each vertebral level. Body segment masses for the head, neck and arms were taken from published anthropometric data. The intervertebral joint moment at each vertebral level was found by summing the torso segment masses above the joint in question, each multiplied by its perpendicular distance to the centre of the disc.

RESULTS AND DISCUSSION
Twenty patients were included in this study, with a mean age of 15.0±2.7 years and a mean Cobb angle of 52±5.9°. The mean total trunk mass, as a percentage of total body mass, was 27.8% (SD 0.5%). Mean segmental torso mass increased inferiorly, from 0.6 kg at T1 to 1.5 kg at L5. The coronal plane joint moments during relaxed standing were typically 5-7 Nm at the apex of the curve (Figure 1), with the highest apical joint moment being 7 Nm. CT scans were performed in the supine position, and curve magnitudes are known to be 7-10° smaller than those measured in standing [1]; joint moments produced by gravity will therefore be greater than those calculated here.

CONCLUSIONS
Coronal plane joint moments as high as 7 Nm can occur during relaxed standing in scoliosis patients, which may help to explain the mechanics of AIS progression. The body mass distributions calculated in this study can be used to estimate joint moments derived using other imaging modalities such as MRI, and subsequently to determine whether a relationship exists between joint moments and progressive vertebral deformity.
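The joint-moment calculation in METHODS reduces to a weighted sum, sketched below with invented numbers (the paper's segment masses and lever arms are not reproduced here):

```python
G = 9.81  # gravitational acceleration, m/s^2

def coronal_joint_moment(masses_kg, lever_arms_m):
    """Coronal-plane moment at one intervertebral joint: the sum, over all
    torso segments above the joint, of segment weight times its perpendicular
    (lateral) distance from the centre of the disc."""
    return sum(m * G * d for m, d in zip(masses_kg, lever_arms_m))

# Illustrative values only: six segments above a mid-thoracic joint, with
# masses near the reported 0.6-1.5 kg range and lateral offsets of a few cm.
masses = [0.6, 0.7, 0.8, 0.9, 1.0, 1.1]            # kg
arms = [0.02, 0.04, 0.08, 0.12, 0.10, 0.06]        # m, toward the curve convexity
print(f"joint moment ~ {coronal_joint_moment(masses, arms):.1f} Nm")
```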