109 results for Mean deviation

in Deakin Research Online - Australia


Relevance:

60.00%

Abstract:

PURPOSE. To investigate the on-road driving performance of patients with glaucoma.

METHODS. The sample comprised 20 patients with glaucoma and 20 subjects with normal vision, all licensed drivers, matched for age and sex. Driving performance was tested over a 10-km route incorporating 55 standardized maneuvers and skills through residential and business districts of Halifax, Nova Scotia, Canada. Testing was conducted by a professional driving instructor and assessed by an occupational therapist certified in driver rehabilitation, masked to participant group membership and level of vision. Main outcome measures were total number of satisfactory maneuvers and skills, overall rating, and incidence of at-fault critical interventions (application of the dual brake and/or steering override by the driving instructor to prevent a potentially unsafe maneuver). Measures of visual function included visual acuity, contrast sensitivity, and visual fields (Humphrey Field Analyzer; Carl Zeiss Meditec, Inc., Dublin, CA; mean deviation [MD] and binocular Esterman points).

RESULTS. There was no significant difference between patients with glaucoma (mean MD = −1.7 dB [SD 2.2] and −6.5 dB [SD 4.9], better and worse eyes, respectively) and control subjects in total satisfactory maneuvers and skills (P = 0.65), or overall rating (P = 0.60). However, 12 (60%) patients with glaucoma had one or more at-fault critical interventions, compared with 4 (20%) control subjects (odds ratio = 6.00, 95% CI, 1.46–24.69; higher still after adjustment for age, sex, medications and driving exposure), the predominant reason being failure to see and yield to a pedestrian. In the glaucoma group, worse-eye MD was associated with the overall rating of driving (r = 0.66, P = 0.002).

CONCLUSIONS. This sample of patients with glaucoma with slight to moderate visual field impairment performed many real-world driving maneuvers safely. However, they were six times as likely as subjects with normal vision to have a driving instructor intervene for reasons suggesting difficulty with detection of peripheral obstacles and hazards and reaction to unexpected events.

Relevance:

60.00%

Abstract:

PURPOSE. To investigate the risk of falls and motor vehicle collisions (MVCs) in patients with glaucoma.

METHODS. The sample comprised 48 patients with glaucoma (mean visual field mean deviation [MD] in the better eye = −3.9 dB; 5.1 dB SD) and 47 age-matched normal control subjects, who were recruited from a university-based hospital eye care clinic and are enrolled in an ongoing prospective study of risk factors for falls, risk factors for MVCs, and on-road driving performance in glaucoma. Main outcome measures at baseline were previous self-reported falls and MVCs, and police-reported MVCs. Demographic and medical data were obtained. In addition, functional independence in daily living, physical activity level and balance were assessed. Clinical vision measures included visual acuity, contrast sensitivity, standard automated perimetry, useful field of view (UFOV), and stereopsis. Analyses of falls and MVCs were adjusted to account for the possible confounding effects of demographic characteristics, medications, and visual field impairment. MVC analyses were also adjusted for kilometers driven per week.

RESULTS. There were no significant differences between patients with glaucoma and control subjects with respect to number of systemic medical conditions, body mass index, functional independence, and physical activity level (P > 0.10). At baseline, 40 (83%) patients with glaucoma and 44 (94%) control subjects were driving. Compared with control subjects, patients with glaucoma were over three times more likely to have fallen in the previous year (adjusted odds ratio [OR] = 3.71; 95% CI, 1.14–12.05), over six times more likely to have been involved in one or more MVCs in the previous 5 years (adjusted OR = 6.62; 95% CI, 1.40–31.23), and more likely to have been at fault (adjusted OR = 12.44; 95% CI, 1.08–143.99). The strongest risk factor for MVCs in patients with glaucoma was impaired UFOV selective attention (adjusted OR = 10.29; 95% CI, 1.10–96.62; for selective attention >350 ms compared with ≤350 ms).

CONCLUSIONS. There is an increased risk of falls and MVCs in patients with glaucoma.

Relevance:

40.00%

Abstract:

Least-mean-square-type (LMS-type) algorithms are known as simple and effective adaptation algorithms. However, LMS-type algorithms involve a trade-off between convergence rate and steady-state performance. In this paper, we investigate a new variable step-size approach to achieve a fast convergence rate and low steady-state misadjustment. By approximating the optimal step-size that minimizes the mean-square deviation, we derive variable step-sizes for both the time-domain normalized LMS (NLMS) algorithm and the transform-domain LMS (TDLMS) algorithm. The proposed variable step-sizes are simple quotient forms of filtered versions of the quadratic error and are very effective for the NLMS and TDLMS algorithms. Computer simulations in the framework of adaptive system modeling demonstrate superior performance compared to existing popular variable step-size approaches for the NLMS and TDLMS algorithms. © 2014 Springer Science+Business Media New York.
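The abstract describes the variable step-sizes only as quotient forms of filtered versions of the quadratic error, without giving the exact expressions, so the sketch below uses a generic smoothed-error-power quotient as a stand-in rule inside a standard NLMS update; the function name nlms_variable_step, the smoothing factor beta, and the rule mu = mu_max * p_e / (p_e + p_d) are illustrative assumptions, not the formula derived in the paper.

```python
import numpy as np

def nlms_variable_step(x, d, num_taps=16, mu_max=1.0, beta=0.99, eps=1e-8):
    """NLMS adaptive filter with an illustrative variable step-size.

    The step-size rule (a smoothed-error-power quotient) is an assumption
    for illustration only, not the rule derived in the paper.
    """
    w = np.zeros(num_taps)      # adaptive filter weights
    p_e, p_d = 0.0, eps         # filtered squared error / desired-signal power
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # tap-delay input vector x[n], ..., x[n-L+1]
        y[n] = w @ u                          # filter output
        e[n] = d[n] - y[n]                    # a priori error
        # Recursive (filtered) power estimates.
        p_e = beta * p_e + (1 - beta) * e[n] ** 2
        p_d = beta * p_d + (1 - beta) * d[n] ** 2
        # Quotient-form variable step-size: large while the error power is
        # high relative to the signal power, small near steady state.
        mu = mu_max * p_e / (p_e + p_d)
        w += mu * e[n] * u / (u @ u + eps)    # normalized LMS update
    return w, y, e

# Usage in the adaptive system-modeling framework: identify an unknown FIR system.
rng = np.random.default_rng(0)
h_true = rng.standard_normal(16)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_est, _, _ = nlms_variable_step(x, d)
```

Even this stand-in rule shows the intended behaviour: the step-size stays large while the filtered error power dominates (fast initial convergence) and shrinks as the error power falls relative to the signal power (low steady-state misadjustment).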

Relevance:

30.00%

Abstract:

A retrospective assessment of exposure to benzene was carried out for a nested case-control study of lympho-haematopoietic cancers, including leukaemia, in the Australian petroleum industry. Each job or task in the industry was assigned a Base Estimate (BE) of exposure derived from task-based personal exposure assessments carried out by the company occupational hygienists. The BEs corresponded to the estimated arithmetic mean exposure to benzene for each job or task and were used in a deterministic algorithm to estimate the exposure of subjects in the study. Nearly all of the data sets underlying the BEs were found to contain some values below the limit of detection (LOD) of the sampling and analytical methods, and some were very heavily censored; up to 95% of the data were below the LOD in some data sets. It was necessary, therefore, to use a method of calculating the arithmetic mean exposures that took into account the censored data. Three different methods were employed in an attempt to select the most appropriate method for the particular data in the study. A common method is to replace the missing (censored) values with half the detection limit. This method has been recommended for data sets where much of the data are below the limit of detection or where the data are highly skewed, with a geometric standard deviation of 3 or more. Another method, involving replacing the censored data with the limit of detection divided by the square root of 2, has been recommended when relatively few data are below the detection limit or where data are not highly skewed. A third method that was examined is Cohen's method, which involves mathematical extrapolation of the left-hand tail of the distribution, based on the distribution of the uncensored data, and calculation of the maximum likelihood estimate of the arithmetic mean. When these three methods were applied to the data in this study, it was found that the first two simple methods gave similar results in most cases. Cohen's method, on the other hand, gave results that were generally, but not always, higher than the simpler methods and in some cases gave extremely high and even implausible estimates of the mean. It appears that if the data deviate substantially from a simple log-normal distribution, particularly if high outliers are present, then Cohen's method produces erratic and unreliable estimates. After examining these results, and both the distributions and proportions of censored data, it was decided that the half limit of detection method was most suitable in this particular study.
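As a concrete illustration of the three approaches described above, the sketch below computes arithmetic-mean estimates for a left-censored data set using the LOD/2 and LOD/√2 substitutions and, as a stand-in for Cohen's tabulated maximum-likelihood procedure, a censored-lognormal MLE fitted numerically; the function name censored_mean_estimates and the example values are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

def censored_mean_estimates(detects, n_censored, lod):
    """Arithmetic-mean estimates for a left-censored exposure data set.

    `detects` are measured values above the LOD, `n_censored` is the number
    of non-detects, `lod` is the limit of detection. The first two estimates
    use the simple substitution rules discussed in the text; the third is a
    censored-lognormal MLE standing in for Cohen's tabulated method.
    """
    detects = np.asarray(detects, dtype=float)

    def substituted_mean(fill):
        # Replace every non-detect with the same fill value, then average.
        return np.mean(np.concatenate([detects, np.full(n_censored, fill)]))

    half_lod = substituted_mean(lod / 2)
    lod_sqrt2 = substituted_mean(lod / np.sqrt(2))

    # Censored-lognormal log-likelihood (up to a constant Jacobian term):
    # detects contribute their log-density, non-detects contribute the
    # probability of falling below the LOD.
    def neg_log_lik(params):
        mu, sigma = params[0], np.exp(params[1])   # exp keeps sigma positive
        ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
        ll += n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
        return -ll

    res = optimize.minimize(neg_log_lik, x0=[np.log(detects).mean(), 0.0])
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    mle_mean = np.exp(mu_hat + sigma_hat ** 2 / 2)  # lognormal arithmetic mean

    return {"LOD/2": half_lod, "LOD/sqrt(2)": lod_sqrt2, "censored MLE": mle_mean}

# Hypothetical example: 8 detects and 12 non-detects below an LOD of 0.1 ppm.
print(censored_mean_estimates([0.15, 0.3, 0.12, 0.5, 0.2, 0.11, 0.8, 0.25],
                              n_censored=12, lod=0.1))
```

Comparing the three estimates across data sets with different censoring proportions and skewness is the kind of check that led the authors to prefer the half limit of detection method in this study.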

Relevance:

20.00%

Abstract:

Objective: This study investigated 5-year trends in body weight, overweight and obesity and their association with sociodemographic variables in a large, multi-ethnic community sample of Australian adults. Design: This prospective population study used baseline and 5-year follow-up data from participants in the Melbourne Collaborative Cohort Study (MCCS). Setting: Population study in Melbourne, Australia. Subjects: In total, 12 125 men and 17 674 women aged 35–69 years at baseline. Results: Mean 5-year weight change in this sample was +1.58 (standard deviation (SD) 4.82) kg for men and +2.42 (SD 5.17) kg for women. Younger (35–44 years) men and, in particular, women gained more weight than older adults and were at highest risk of major weight gain (≥5 kg) and becoming overweight. Risk of major weight gain and associations between demographic variables and weight change did not vary greatly by ethnicity. Education level showed complex associations with weight outcomes that differed by sex and ethnicity. Multivariate analyses showed that, among men, higher initial body weight was associated with decreased likelihood of major weight gain, whereas among women, those initially overweight or obese were about 20% more likely to experience major weight gain than underweight or healthy weight women. Conclusions: Findings of widespread weight gain across this entire population sample, and particularly among younger women and women who were already overweight, are a cause for alarm. The prevention of weight gain and obesity across the entire population should be an urgent public health priority. Young-to-mid adulthood appears to be a critical time to intervene to prevent future weight gain.

Relevance:

20.00%

Abstract:

A nuclear exclusion appears in all general insurance policies. Since its introduction to Australia and New Zealand in the 1960s, this exclusion has seen almost no change. So what are the reasons for this article? There are two reasons. First, there has been a misunderstanding on the part of some in the industry about the scope of this exclusion. This results in unnecessary alterations to the policy. The other is that a new wording is emerging in some sections of the market which could be far-reaching in its effect. The purpose of this article is to examine several aspects related to the exclusion. The first section examines the nature and extent of exposures in relation to radiation and nuclear energy and serves as background to understanding the exclusion wording. Section two provides the reasons for the inclusion of the clause and its historical origins. Section three addresses the intended scope of the current exclusion, and the final section examines the scope of a new wording that is appearing and the possible implications that may result.

Relevance:

20.00%

Abstract:

The Soil and Water Assessment Tool (SWAT) is a hydrologic model that was developed to predict the long-term impacts of land use change on the water balance of large catchments. Stochastic models are used to generate the daily rainfall sequences needed to conduct long-term, continuous simulations with SWAT. The objective of this study was to evaluate the performances of three daily rainfall generation models. The models evaluated were the modified Daily and Monthly Mixed (DMMm) model, skewed normal distribution (SKWD) model and modified exponential distribution (EXPD) model. The study area was the Woady Yaloak River catchment (306 km2) located in southwest Victoria, Australia. The models were assessed on their ability to preserve annual, monthly and daily statistical characteristics of the historical rainfall and runoff. The mean annual, monthly, and daily rainfall was preserved satisfactorily by the models. The DMMm model reproduced the standard deviation of annual and monthly rainfall better than the SKWD and EXPD models. Overall, the DMMm model performed marginally better than the SKWD model at reproducing the statistical characteristics of the historical rainfall record at the various time scales. The performance of the EXPD model was found to be inferior to the performances of the DMMm and SKWD models. The models reproduced the mean annual, monthly, and daily runoff relatively well, although the DMMm and SKWD models were found to preserve these statistics marginally better than the EXPD model. None of the models managed to reproduce the standard deviation of annual, monthly, and daily runoff adequately.
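As a sketch of the kind of check applied to each generator, the snippet below (hypothetical helper name rainfall_statistics, pandas assumed) computes the mean and standard deviation of a daily rainfall series at daily, monthly and annual scales, which can then be compared between the historical record and a generated sequence.

```python
import pandas as pd

def rainfall_statistics(daily_rain: pd.Series) -> pd.DataFrame:
    """Mean and standard deviation of rainfall at daily, monthly and annual
    scales; daily_rain is assumed to be indexed by calendar date."""
    idx = daily_rain.index
    monthly = daily_rain.groupby([idx.year, idx.month]).sum()   # monthly totals
    annual = daily_rain.groupby(idx.year).sum()                 # annual totals
    return pd.DataFrame(
        {"mean": [daily_rain.mean(), monthly.mean(), annual.mean()],
         "std": [daily_rain.std(), monthly.std(), annual.std()]},
        index=["daily", "monthly", "annual"],
    )

# Evaluation: compare the statistics of a generated sequence with the
# historical record, e.g.
# rainfall_statistics(generated) - rainfall_statistics(historical)
```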

Relevance:

20.00%

Abstract:

Among the many valuable uses of injury surveillance is the potential to alert health authorities and societies in general to emerging injury trends, facilitating earlier development of prevention measures. Other than road safety, to date, few attempts to forecast injury data have been made, although forecasts have been made of other public health issues. This may in part be due to the complex pattern of variance displayed by injury data. The profile of many injury types displays seasonality and diurnal variance, as well as stochastic variance. The authors undertook development of a simple model to forecast injury into the near term. In recognition of the large numbers of possible predictions, the variable nature of injury profiles and the diversity of dependent variables, it became apparent that manual forecasting was impractical. Therefore, it was decided to evaluate a commercially available forecasting software package for prediction accuracy against actual data for a set of predictions. Injury data for a 4-year period (1996 to 1999) were extracted from the Victorian Emergency Minimum Dataset and were used to develop forecasts for the year 2000, for which data were also held. The forecasts for 2000 were compared to the actual data for 2000 by independent t-tests, and the standard errors of the predictions were modelled by stepwise hierarchical multiple regression using the independent variables of the standard deviation, seasonality, mean monthly frequency and slope of the base data (R = 0.93, R² = 0.86, F(3, 27) = 55.2, p < 0.0001). Significant contributions to the model included the SD (β = 1.60, p < 0.001), mean monthly frequency (β = −0.72, p < 0.002), and the seasonality of the data (β = 0.16, p < 0.02). It was concluded that injury data could be reliably forecast and that commercial software was adequate for the task. Variance in the data was found to be the most important determinant of prediction accuracy. Importantly, automated forecasting may provide a vehicle for identifying emerging trends.
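The study used a commercial forecasting package; as an open-source illustration of the same idea, the sketch below fits a seasonal exponential-smoothing model (the Holt-Winters implementation in statsmodels) to four years of monthly counts and forecasts the following year. The simulated counts are placeholders, not data from the Victorian Emergency Minimum Dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated monthly injury counts for 1996-1999 with a seasonal component.
rng = np.random.default_rng(1)
months = pd.date_range("1996-01", periods=48, freq="MS")
seasonal = 20 * np.sin(2 * np.pi * (months.month - 1) / 12)
counts = pd.Series(200 + seasonal + rng.normal(0, 10, 48), index=months)

# Fit an additive trend + seasonal model on the base period and forecast 2000.
model = ExponentialSmoothing(counts, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast_2000 = model.forecast(12)
```

The 12 monthly forecasts could then be compared with the actual data for 2000, and the forecast errors related to the standard deviation, seasonality, mean monthly frequency and slope of the base series, as in the regression model reported above.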

Relevance:

20.00%

Abstract:

This article challenges the conventional narratives of Australian popular music history, recognising this as an element of a wider cultural history, using the songwriting career of Johnny Young in the late 1960s and early 1970s. In doing so it refers to the actual song content; the ways in which songs and their performers were written about at the time they were released; and the way in which these works have subsequently been regarded and discussed in the conventional historical narrative. It also suggests that Young's own crafted persona, as well as the way he and pop music are typically regarded, have veiled the innovative and radical elements of some of his songs, not only the very well-known 'The real thing', but also hits such as 'Smiley' and 'The star'.

Relevance:

20.00%

Abstract:

This short paper explores the relevance of the scholarship of teaching to advancing Deakin University’s mission and core commitments, to teaching and learning and to its staff. The concept of the scholarship of teaching is defined and a discussion of the relevance of the concept to Deakin is then presented. Some broad guiding principles for implementation are offered.

Relevance:

20.00%

Abstract:

Recent advances in high-throughput experiments and annotations via published literature have provided a wealth of interaction maps of several biomolecular networks, including metabolic, protein-protein, and protein-DNA interaction networks. The architecture of these molecular networks reveals important principles of cellular organization and molecular functions. Analyzing such networks, i.e., discovering dense regions in the network, is an important way to identify protein complexes and functional modules. This task has been formulated as the problem of finding heavy subgraphs, the Heaviest k-Subgraph Problem (k-HSP), which itself is NP-hard. However, any method based on the k-HSP requires the parameter k, and an exact solution of the k-HSP may still end up as a "spurious" heavy subgraph, thus reducing its practicability in analyzing large-scale biological networks. We propose a new formulation, called the rank-HSP, and two dynamical systems to approximate its results. In addition, a novel metric, called the Standard deviation and Mean Ratio (SMR), is proposed for detecting "spurious" heavy subgraphs so that the discovery can be automated by setting a fixed threshold. Empirical results on both simulated graphs and biological networks have demonstrated the efficiency and effectiveness of our proposal.
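The abstract does not spell out how SMR is computed; a natural reading is the ratio of the standard deviation to the mean of the edge weights inside a candidate subgraph, and the sketch below (hypothetical helper smr, networkx assumed) follows that reading.

```python
import networkx as nx
import numpy as np

def smr(graph: nx.Graph, nodes, weight="weight"):
    """Standard deviation to mean ratio of the edge weights inside the
    subgraph induced by `nodes` (an assumed reading of the SMR metric)."""
    weights = [d.get(weight, 1.0)
               for _, _, d in graph.subgraph(nodes).edges(data=True)]
    if not weights:
        return float("inf")   # no internal edges: not a dense module at all
    weights = np.asarray(weights, dtype=float)
    return weights.std() / weights.mean()

# Under this reading, a candidate heavy subgraph with a low SMR has fairly
# uniform internal edge weights, while a high SMR suggests a few heavy edges
# are inflating the total weight ("spurious" heaviness), so a fixed SMR
# threshold can be used to filter candidates automatically.
```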

Relevance:

20.00%

Abstract:

Background
Little evidence exists to describe expected volumes of chest tube (CT) drainage after coronary artery bypass grafting (CABG).

Objectives
The study objective was to map the trajectory of CT drainage volumes from insertion to removal after CABG.

Design
This was a retrospective, descriptive study.

Patients
The study included 239 patients who underwent CABG at a single metropolitan hospital in Melbourne, Australia.

Results
The sample (N = 234), with a mean age of 68.7 years (standard deviation [SD] 9.9), was predominantly male (n = 185, 79.1%). The mean duration of CT insertion was 45.2 hours (SD 26.7), and the mean total drainage volume was 1300.6 mL (SD 763.8). Drainage volumes plateaued at 31 mL per hour by 8 hours after surgery. From 24 to 48 hours, the mean drainage was 21 mL per hour. Drainage volumes varied between genders.

Conclusions
Evidence of similar drainage patterns in other populations is difficult to locate. If the pattern of drainage shown in this study is consistent, experimental intervention studies comparing standard removal time and earlier removal are recommended. If not, prospective collection of relevant preoperative, intraoperative, and postoperative factors across multiple sites is necessary to determine which patient or practice variations influence CT drainage patterns after CABG.