Abstract:
Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalisability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, knowledge of text mining in the injury field is likely to continue to grow and advance.
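The Bayesian category-prediction approach described in this review can be illustrated with a minimal naive Bayes text classifier. The sketch below is a generic example of that family of methods, not the pipeline of any reviewed study; the narratives and category labels are invented for illustration.

```python
# Minimal sketch of a Bayesian text classifier for injury narratives
# (a generic naive Bayes example, not the pipeline of any reviewed study;
# narratives and category labels are invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

narratives = [
    "fell from ladder while painting ceiling",
    "cut finger on box cutter while unpacking stock",
    "slipped on wet floor in kitchen and fell",
]
labels = ["fall", "cut", "fall"]

# Bag-of-words features feeding a multinomial naive Bayes model.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(narratives, labels)

print(model.predict(["tripped over a cable and fell"]))  # -> ['fall']
```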
Abstract:
A lack of detailed and accurate safety records on incidents in Australian work zones prevents a thorough understanding of the relevant risks and hazards. Consequently, it is difficult to select appropriate treatments for improving the safety of roadworkers and motorists alike. This paper outlines the development of a conceptual framework for making informed decisions about safety treatments by: 1) identifying safety issues and hazards in work zones; 2) understanding the attitudes and perceptions of both roadworkers and motorists; 3) reviewing the effectiveness of work zone safety treatments according to existing research; and 4) incorporating local expert opinion on the feasibility and usefulness of the safety treatments. Using data collected through semi-structured interviews with roadwork personnel and online surveys of Queensland drivers, critical safety issues were identified. The effectiveness of treatments for addressing these issues was assessed through a rigorous literature review and consultations with local road authorities. Promising work zone safety treatments include enforcement, portable rumble strips, perceptual measures to imply reduced lane width, automated or remotely operated traffic lights, end-of-queue measures, and more visible and meaningful signage.
Abstract:
As negative employee attitudes towards alcohol and other drug (AOD) policies may have serious consequences for organizations, the present study examined demographic and attitudinal dimensions leading to employees' perceptions of AOD policy effectiveness. Survey responses were obtained from 147 employees in an Australian agricultural organization. Three dimensions of attitudes towards AOD policies were examined: knowledge of policy features, attitudes towards testing, and preventative measures such as job design and organizational involvement in community health. Demographic differences were identified, with males and blue-collar employees reporting significantly more negative attitudes towards the AOD policy. Attitude dimensions were stronger predictors of perceptions of policy effectiveness than demographics, and the strongest predictor was preventative measures. This suggests that organizations should do more than design adequate and fair AOD policies: they should take a more holistic approach to AOD impairment by engaging in workplace design to reduce AOD use and by promoting a consistent health message to employees and the community.
Abstract:
The 2014 federal budget implemented a so-called crackdown on what Minister for Social Services Kevin Andrews calls young people who are content to "sit on the couch at home and pick up a welfare cheque". The crackdown will change access to income support for people under 30 years of age. From January 1, 2015, all young people seeking Newstart Allowance and Youth Allowance for the first time will be required to "demonstrate appropriate job search and participation in employment services support for six months before receiving payments". Upon qualifying, recipients must then spend 25 hours per week in Work for the Dole in order to receive income support for a six-month period. What happens beyond these six months is unclear. What is clear is that these policy changes, together with the Minister's accompanying statements, are informed by a deficit view of disadvantaged youth. It is a view that demonstrates how little politicians know or understand about these young people's past circumstances.
Abstract:
This paper introduces a new methodology for analysing and measuring engagement with television content by users of Twitter. We draw on factors such as the network, viewing audience, and date of broadcast to establish a baseline expectation for the volume of tweets around a television show, and apply techniques from the field of sabermetrics to create neutral volume figures ('weighted tweets') from which these variables are excluded; the resulting metrics provide new insights into television's social media presence. The methodology provides a variety of new measures for analysing the social media strategies of individual television programs, channels and networks, for comparing users' engagement with programs, channels or networks, and for predicting future volumes of tweets.
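As a rough illustration of the normalisation idea described above, the sketch below computes a baseline expectation and a 'weighted tweets' figure as the ratio of observed to expected volume. The factor names, the multiplicative baseline model and the numbers are assumptions made for illustration, not the authors' published metric.

```python
# Hedged sketch: normalise observed tweet volume for a broadcast against a
# baseline expectation built from contextual factors. The factors, baseline
# model and formula below are illustrative assumptions only.
def expected_tweets(network_mean: float, audience_millions: float,
                    weekday_factor: float) -> float:
    """Baseline expectation for tweet volume given contextual factors."""
    return network_mean * audience_millions * weekday_factor

def weighted_tweets(observed: int, expected: float) -> float:
    """Volume figure with contextual variables factored out (1.0 = as expected)."""
    return observed / expected

baseline = expected_tweets(network_mean=2500, audience_millions=1.2, weekday_factor=0.9)
print(round(weighted_tweets(observed=4100, expected=baseline), 2))  # e.g. 1.52
```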
Abstract:
Traditional text classification technology based on machine learning and data mining techniques has made considerable progress. However, drawing an exact decision boundary between relevant and irrelevant objects in binary classification remains difficult because of the uncertainty produced by traditional algorithms. The proposed model, CTTC (Centroid Training for Text Classification), aims to build an uncertainty boundary that absorbs as many indeterminate objects as possible, so as to increase the certainty of the relevant and irrelevant groups, through a centroid clustering and training process. The clustering starts from the two training subsets labelled as relevant or irrelevant respectively, creating two principal centroid vectors by which all the training samples are further separated into three groups: POS, NEG and BND, with all the indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are then trained and optimized through a subsequent iterative multi-learning process, and these vectors collaboratively predict the polarities of incoming objects thereafter. For the assessment of the proposed model, F1 and Accuracy were chosen as the key evaluation measures. We stress the F1 measure because it reflects the overall performance improvement of the final classifier better than Accuracy. A large number of experiments were completed using the proposed model on the Reuters Corpus Volume 1 (RCV1), an important standard dataset in the field. The experimental results show that the proposed model significantly improves binary text classification performance in both F1 and Accuracy compared with three other influential baseline models.
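A minimal sketch of the three-way centroid idea (POS/NEG/BND) follows, assuming cosine similarity to two centroids and a fixed margin; these choices are illustrative assumptions and not the CTTC algorithm as published. F1, the measure the abstract emphasises, is the harmonic mean of precision and recall, F1 = 2PR/(P + R).

```python
# Hedged sketch of three-way centroid classification (POS / NEG / BND);
# the similarity measure and margin are illustrative assumptions, not the
# CTTC algorithm as published.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def three_way_label(doc_vec, pos_centroid, neg_centroid, margin=0.05):
    """Assign POS or NEG when one centroid is clearly closer, else BND."""
    diff = cosine(doc_vec, pos_centroid) - cosine(doc_vec, neg_centroid)
    if diff > margin:
        return "POS"
    if diff < -margin:
        return "NEG"
    return "BND"   # indeterminate objects absorbed into the boundary

pos_c = np.array([0.9, 0.1, 0.0])
neg_c = np.array([0.1, 0.8, 0.2])
print(three_way_label(np.array([0.7, 0.2, 0.1]), pos_c, neg_c))  # POS
print(three_way_label(np.array([0.5, 0.5, 0.1]), pos_c, neg_c))  # BND
```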
Abstract:
- Objective To investigate whether parental disapproval of alcohol use accounts for differences in adolescent alcohol use across regional and urban communities. - Design Secondary data analysis of grade-level stratified data from a random sample of schools. - Setting High schools in Victoria, Australia. - Participants A random sample of 10,273 adolescents from Grades 7 (mean age = 12.51 years), 9 (14.46 years) and 11 (16.42 years). - Main outcome measures The key independent variables were parental disapproval of adolescent alcohol use and regionality (regional/urban), and the dependent variable was alcohol use in the past 30 days. - Results After adjusting for potential confounders, adolescents in regional areas were more likely to have used alcohol in the past 30 days (OR = 1.83, 1.44 and 1.37 for Grades 7, 9 and 11, respectively, P < 0.05), and their parents had a lower level of disapproval of their alcohol use (b = -0.12, -0.15 and -0.19 for Grades 7, 9 and 11, respectively, P < 0.001). Bootstrapping analyses suggested that 8.37%, 23.30% and 39.22% of the effect of regionality on adolescent alcohol use was mediated by parental disapproval of alcohol use for Grades 7, 9 and 11 participants, respectively (P < 0.05). - Conclusions Adolescents in urban areas had a lower risk of alcohol use compared with their regional counterparts, and differences in parental disapproval of alcohol use contributed to this difference.
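Mediated percentages of this kind are typically obtained from a product-of-coefficients decomposition whose indirect effect is tested by bootstrapping; the formulation below is the standard one and is stated as an assumption about the analysis rather than a quotation from it.

```latex
% Product-of-coefficients mediation (standard formulation, assumed here):
%   a : effect of regionality on parental disapproval
%   b : effect of parental disapproval on past-30-day alcohol use
%   c': direct effect of regionality on alcohol use
\[
  \text{total effect } c = c' + ab, \qquad
  \text{indirect (mediated) effect} = ab, \qquad
  \%\ \text{mediated} = \frac{ab}{c}\times 100
\]
```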
Abstract:
The Wechsler and Stanford-Binet scales are among the most commonly used tests of intelligence. In clinical practice, they often seem to be used interchangeably. This paper reports the results of two studies that compared the most recent editions of two Wechsler scales (WPPSI-III and WISC-IV) with the Stanford-Binet Fifth Edition (SB5). The participants in the first study were 36 typically developing 4-year-old children who completed the WPPSI-III and SB5 in counterbalanced order. Although correlations of composite scores ranged from r = .59 to r = .82 and were similar to those reported for earlier versions of the two instruments, more than half the sample had a score discrepancy greater than 10 points across the two instruments. In the second study, the WISC-IV and SB5 were administered to 30 children aged 12-14 years. There was a significant difference between Full Scale IQs on the two measures, with scores being higher on the WISC-IV. Differences between the two verbal scales were also significant and favoured the WISC-IV. There were moderate correlations of Full Scale IQs (r = .58) and Nonverbal IQs (r = .54), but the relationship between the two Verbal scales was not significant. For some children, notable score differences led to different categorisations of their level of intellectual ability. The findings suggest that the Wechsler and Stanford-Binet scales cannot be presumed to be interchangeable. The discussion focuses on how psychologists might reconcile large differences in test scores and the need for caution when interpreting and comparing test results.
Abstract:
Uncorrected refractive error, including astigmatism, is a leading cause of reversible visual impairment. While the ability to perform vision-related daily activities is reduced when people are not optimally corrected, only limited research has investigated the impact of uncorrected astigmatism. Given that the capacity to perform vision-related daily activities involves the integration of a range of visual and cognitive cues, this research examined the impact of simulated astigmatism on visual tasks that also involved cognitive input. The research also examined whether the higher levels of complexity inherent in Chinese characters make them more susceptible to the effects of astigmatism. The effects of different powers of astigmatism, as well as astigmatism at different axes, were investigated in order to determine the minimum level of astigmatism that resulted in a decrement in visual performance.
Abstract:
Enterprise Architecture Management (EAM) is discussed in academia and industry as a vehicle to guide IT implementations, alignment, compliance assessment, and technology management. Still, little is known about how EAM can be used successfully and how positive impact can be realized from it. To determine these factors, we identify EAM success factors and measures through literature reviews and exploratory interviews, and propose a theoretical model that explains key factors and measures of EAM success. We test our model with data collected from a cross-sectional survey of 133 EAM practitioners. In a confirmatory analysis of the model, the results confirm the impact of four distinct EAM success factors, 'EAM product quality', 'EAM infrastructure quality', 'EAM service delivery quality', and 'EAM organizational anchoring', on two important EAM success measures, 'intentions to use EAM' and 'Organizational and Project Benefits'. We found the construct 'EAM organizational anchoring' to be a core focal concept that mediated the effect of success factors such as 'EAM infrastructure quality' and 'EAM service delivery quality' on the success measures. We also found that 'EAM satisfaction' was irrelevant to determining or measuring success. We discuss implications for theory and EAM practice.
Abstract:
The rise of the peer economy poses complex new regulatory challenges for policy-makers. The peer economy, typified by services like Uber and AirBnB, promises substantial productivity gains through the more efficient use of existing resources and a marked reduction in regulatory overheads. These services are rapidly disrupting established markets, but the regulatory trade-offs they present are difficult to evaluate. In this paper, we examine the peer economy through the context of ride-sharing and the ongoing struggle over regulatory legitimacy between the taxi industry and the new entrants Uber and Lyft. We first sketch the outlines of ride-sharing as a complex regulatory problem, showing how questions of efficiency are necessarily bound up in questions about levels of service, controls over pricing, and different approaches to setting, upholding, and enforcing standards. We outline the need for data-driven policy to understand how algorithmic systems work and what effects these might have in the medium to long term on measures of service quality, safety, labour relations, and equality. Finally, we discuss how the competition for legitimacy is not primarily being fought on utilitarian grounds, but is instead carried out within the context of a heated ideological battle between different conceptions of the role of the state and private firms as regulators. These struggles are not, as is often thought, struggles between regulated and unregulated systems. We ultimately argue that the key to understanding these regulatory challenges is to develop better conceptual models of the governance of complex systems by private actors, of the methods available to the state for influencing their actions, and of the important regulatory work carried out by powerful, centralised private firms – both the incumbents of existing markets and the disruptive network operators in the peer economy.
Abstract:
Objective The aim of this systematic review and meta-analysis was to determine the overall effect of resistance training (RT) on measures of muscular strength in people with Parkinson's disease (PD). Methods Controlled trials with a parallel-group design were identified from computerized literature searching and citation tracking performed until August 2014. Two reviewers independently screened for eligibility and assessed the quality of the studies using the Cochrane risk-of-bias tool. For each study, mean differences (MD) or standardized mean differences (SMD) and 95% confidence intervals (CI) were calculated for continuous outcomes based on between-group comparisons using post-intervention data. Subgroup analysis was conducted based on differences in study design. Results Nine studies met the inclusion criteria; all had a moderate to high risk of bias. Pooled data showed that knee extension, knee flexion and leg press strength were significantly greater in PD patients who undertook RT compared to control groups with or without interventions. Subgroups were: RT vs. control-without-intervention, RT vs. control-with-intervention, RT-with-other-form-of-exercise vs. control-without-intervention, and RT-with-other-form-of-exercise vs. control-with-intervention. Pooled subgroup analysis showed that RT combined with aerobic/balance/stretching exercise resulted in significantly greater knee extension, knee flexion and leg press strength compared with no intervention. Compared to treadmill or balance exercise, it resulted in greater knee flexion strength, but not greater knee extension or leg press strength. RT alone resulted in greater knee extension and flexion strength compared to stretching, but not in greater leg press strength compared to no intervention. Discussion Overall, the current evidence suggests that exercise interventions that contain RT may be effective in improving muscular strength in people with PD compared with no exercise. However, depending on the muscle group and/or training dose, RT may not be superior to other exercise types. Interventions that combine RT with other exercise may be most effective. Findings should be interpreted with caution due to the relatively high risk of bias of most studies.
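The between-group effect sizes named in the methods (MD and SMD computed from post-intervention data) have standard definitions, reproduced below for reference; this is the generic formulation, not an extract from the review's protocol.

```latex
% Standard definitions of the between-group effect sizes computed from
% post-intervention data (general formulas, not the review's protocol text):
\[
  \mathrm{MD} = \bar{x}_{\mathrm{RT}} - \bar{x}_{\mathrm{ctrl}}, \qquad
  \mathrm{SMD} = \frac{\bar{x}_{\mathrm{RT}} - \bar{x}_{\mathrm{ctrl}}}{s_{\mathrm{pooled}}}, \qquad
  s_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{RT}}-1)\,s_{\mathrm{RT}}^{2} + (n_{\mathrm{ctrl}}-1)\,s_{\mathrm{ctrl}}^{2}}{n_{\mathrm{RT}} + n_{\mathrm{ctrl}} - 2}}
\]
```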
Abstract:
The human connectome has recently become a popular research topic in neuroscience, and many new algorithms have been applied to analyze brain networks. In particular, network topology measures from graph theory have been adapted to analyze network efficiency and 'small-world' properties. While there has been a surge in the number of papers examining connectivity through graph theory, questions remain about its test-retest reliability (TRT). In particular, the reproducibility of structural connectivity measures has not been assessed. We examined the TRT of global connectivity measures generated from graph theory analyses of 17 young adults who underwent two high-angular resolution diffusion imaging (HARDI) scans approximately 3 months apart. Of the measures assessed, modularity had the highest TRT, and it was stable across a range of sparsities (a thresholding parameter used to define which network edges are retained). These reliability measures underline the need to develop network descriptors that are robust to acquisition parameters.
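One way to reproduce the "modularity across a range of sparsities" step is sketched below with NetworkX. The proportional thresholding rule and the greedy modularity algorithm are common choices assumed here for illustration, not necessarily the study's exact pipeline, and the connectivity matrix is a random toy example.

```python
# Hedged sketch: threshold a connectivity matrix at several sparsities and
# compute modularity for each (common choices, not the study's exact pipeline).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def modularity_at_sparsity(conn: np.ndarray, sparsity: float) -> float:
    """Keep the strongest `sparsity` fraction of edges, then compute modularity."""
    weights = conn[np.triu_indices_from(conn, k=1)]
    cutoff = np.quantile(weights, 1.0 - sparsity)      # keep top fraction of edges
    graph = nx.from_numpy_array(np.where(conn >= cutoff, conn, 0.0))
    graph.remove_edges_from(nx.selfloop_edges(graph))
    communities = greedy_modularity_communities(graph, weight="weight")
    return modularity(graph, communities, weight="weight")

rng = np.random.default_rng(0)
conn = np.abs(rng.normal(size=(30, 30)))
conn = (conn + conn.T) / 2                              # symmetric toy matrix
for s in (0.20, 0.30, 0.40):
    print(s, round(modularity_at_sparsity(conn, s), 3))
```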
Abstract:
Recent advances in diffusion-weighted MRI (DWI) have enabled studies of complex white matter tissue architecture in vivo. To date, the underlying influence of genetic and environmental factors in determining central nervous system connectivity has not been widely studied. In this work, we introduce new scalar connectivity measures based on a computationally efficient fast-marching algorithm for quantitative tractography. We then calculate connectivity maps for a DTI dataset from 92 healthy adult twins and decompose the genetic and environmental contributions to the variance in these metrics using structural equation models. By combining these techniques, we generate the first maps to directly examine genetic and environmental contributions to brain connectivity in humans. Our approach is capable of extracting statistically significant measures of genetic and environmental contributions to neural connectivity.
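Twin-based structural equation models of this kind usually follow the classical ACE decomposition, summarised below; this is the standard formulation, offered as background rather than as the paper's exact model specification.

```latex
% Classical ACE variance decomposition for twin data (standard formulation):
\[
  \sigma^{2}_{\text{trait}} = a^{2} + c^{2} + e^{2}, \qquad
  \operatorname{cov}_{\mathrm{MZ}} = a^{2} + c^{2}, \qquad
  \operatorname{cov}_{\mathrm{DZ}} = \tfrac{1}{2}a^{2} + c^{2}
\]
% a^2: additive genetic variance, c^2: shared environment,
% e^2: unique environment (including measurement error).
```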
Abstract:
A key question in diffusion imaging is how many diffusion-weighted images suffice to provide adequate signal-to-noise ratio (SNR) for studies of fiber integrity. Motion, physiological effects, and scan duration all affect the achievable SNR in real brain images, making theoretical studies and simulations only partially useful. We therefore scanned 50 healthy adults with 105-gradient high-angular resolution diffusion imaging (HARDI) at 4T. From gradient image subsets of varying size (6 ≤ N ≤ 94) that optimized a spherical angular distribution energy, we created SNR plots (versus gradient numbers) for seven common diffusion anisotropy indices: fractional and relative anisotropy (FA, RA), mean diffusivity (MD), volume ratio (VR), geodesic anisotropy (GA), its hyperbolic tangent (tGA), and generalized fractional anisotropy (GFA). SNR, defined in a region of interest in the corpus callosum, was near-maximal with 58, 66, and 62 gradients for MD, FA, and RA, respectively, and with about 55 gradients for GA and tGA. For VR and GFA, SNR increased rapidly with more gradients. SNR was optimized when the ratio of diffusion-sensitized to non-sensitized images was 9.13 for GA and tGA, 10.57 for FA, 9.17 for RA, and 26 for MD and VR. In orientation density functions modeling the HARDI signal as a continuous mixture of tensors, the diffusion profile reconstruction accuracy rose rapidly with additional gradients. These plots may help in making trade-off decisions when designing diffusion imaging protocols.
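For reference, two of the anisotropy indices compared above have standard closed forms in terms of the diffusion tensor eigenvalues; the formulas below are the usual definitions, not anything specific to this study's protocol.

```latex
% Usual closed forms for mean diffusivity and fractional anisotropy in terms
% of the tensor eigenvalues (generic definitions, not study-specific):
\[
  \mathrm{MD} = \bar{\lambda} = \frac{\lambda_{1}+\lambda_{2}+\lambda_{3}}{3}, \qquad
  \mathrm{FA} = \sqrt{\frac{3}{2}}\;
  \frac{\sqrt{(\lambda_{1}-\bar{\lambda})^{2}+(\lambda_{2}-\bar{\lambda})^{2}+(\lambda_{3}-\bar{\lambda})^{2}}}
       {\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}}}
\]
```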