Abstract:
Alexithymia is characterised by deficits in emotional insight and self-reflection that impair the efficacy of psychological treatments. Given the high prevalence of alexithymia in Alcohol Use Disorders, valid assessment tools are critical. The majority of research on the relationship between alexithymia and alcohol dependence has employed the self-administered Toronto Alexithymia Scale (TAS-20). The Observer Alexithymia Scale (OAS) has also been recommended. The aim of the present study was to assess the validity and reliability of the OAS and the TAS-20 in an alcohol-dependent sample. Two hundred and ten alcohol-dependent participants in an outpatient Cognitive Behavioral Treatment program were administered the TAS-20 at assessment and upon treatment completion at 12 weeks. Clinical psychologists provided observer assessment data for a subsample of 159 patients. The findings confirmed acceptable internal consistency, test-retest reliability and scale homogeneity for both the OAS and the TAS-20, except for the low internal consistency of the TAS-20 externally oriented thinking (EOT) scale. The TAS-20 was more strongly associated with alcohol problems than the OAS.
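Internal-consistency figures of the kind reported here are conventionally computed as Cronbach's alpha. A minimal sketch of that statistic, using synthetic Likert-style responses (the data are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))

# Hypothetical 5-point responses for a 20-item scale, 210 respondents,
# generated from a single latent trait plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(210, 1))
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(210, 20))), 1, 5)
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Values above roughly 0.7 are usually read as acceptable internal consistency; a subscale such as the EOT factor failing to reach that threshold is what "low internal consistency" denotes.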
Abstract:
This article is an abbreviated version of a debate between two economists holding somewhat different perspectives on the nature of non-market production in the space of new digital media. While the ostensible focus here is on the role of markets in the innovation of new technologies to create new economic value, this context also serves to highlight the private and public value of digital literacy.
Abstract:
The field was the curation of cross-cultural new media/digital media practices within large-scale exhibition practices in China. The context was improved understanding of the intertwining of the natural and the artificial with respect to landscape and culture, and their consequent effect on our contemporary globalised society. The research highlighted new languages of media art with respect to landscape and the particular dialects underpinning them. The methodology was principally practice-led.

The research brought together over 60 practitioners from local and diasporic Asian, European and Australian cultures for the first time within a Chinese exhibition context. Through pursuing a strong response to both cultural displacement and re-identification, the research forged and documented an enduring commonality within difference, an agenda further concentrated through sensitivities surrounding that year's Beijing Olympics. In contrast to the severe threats posed to the local dialects of many of the world's spoken and written languages, the 'Vernacular Terrain' project evidenced that many local creative 'dialects' of the environment-media art continuum had indeed survived and flourished.

The project was co-funded by the Beijing Film Academy, QUT Precincts, IDAProjects and Platform China Art Institute. A broad range of peer-reviewed grants was won, including from the Australia China Council and the Australian Embassy in China. Through invitations from external curators, much of the work then travelled to other venues, including the Block Gallery at QUT and the outdoor screens at Federation Square, Melbourne. The Vernacular Terrain catalogue featured a comprehensive history of the IDA project from 2000 to 2008 alongside several major essays.
Due to the reputation IDA Projects had established, the team was invited to curate a major exhibition showcasing fifty new media artists, The Vernacular Terrain, at the prestigious Songzhuang Art Museum, Beijing, from December 2007 to January 2008. The exhibition was designed for an extensive, newly opened gallery owned by Li Xianting, one of China's most important art historians. It was not only the gallery's inaugural non-Chinese-curated show but also its first new media exhibition. It included important works by artists such as Peter Greenaway, Michael Roulier, Maleonn and Cui Xiuwen.

Each artist was chosen both for a focus upon their own local environmental concerns and for their specific forms of practice, which included virtual world design, interactive design, video art, real-time and manipulated multiplayer gaming platforms, and web 2.0 practices. The exhibition examined the interconnectivities of cultural dialogue on both a micro and a macro scale, incorporating the local and the global through display methods and design approaches that stitched these diverse practices into a spatial map of meanings and conversations. By examining the context of each artist's practice in relation to the specificity of their own local place and prevailing global contexts, the exhibition sought to uncover a global vernacular. Through pursuing this concentrated anthropological direction, the research identified key themes and concerns of a contextual language clearly underpinned by distinctive local 'dialects', thereby contributing to a profound sense of cross-cultural association. Through augmentation of existing discourse, the exhibition confirmed the enduring relevance and influence of both localised and globalised languages of the landscape-technology continuum.
Abstract:
This special issue of Innovation: Management, Policy & Practice (also released as a book: ISBN 978-1-921348-31-0) explores some empirical and analytic connections between creative industries and innovation policy. Seven papers are presented. The first four are empirical, providing analysis of large and/or detailed data sets on creative industries businesses and occupations to discern their contribution to innovation. The next three papers focus on comparative and historical policy analysis, connecting creative industries policy (broadly considered, including media, arts and cultural policy) with innovation policy. To introduce this special issue I review the arguments connecting the statistical, conceptual and policy neologism of 'creative industries' to: (1) the elements of a national innovation system; and (2) innovation policy. In approaching this connection, two overarching issues arise.
Abstract:
New air traffic automated separation management concepts are constantly under investigation, yet most of the automated separation management algorithms proposed over the last few decades have assumed either perfect communication or exact knowledge of all aircraft locations. In realistic environments these idealized assumptions are not valid, and any communication failure can potentially lead to disastrous outcomes. This paper examines, through simulation studies, the separation performance of several popular algorithms during periods of information loss. The simulations suggest that communication failure can cause the performance of these separation management algorithms to degrade significantly. The paper also describes some preliminary flight tests.
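The separation performance these algorithms are judged on ultimately reduces to pairwise distance checks against a minimum separation standard. A minimal sketch of such a check (the threshold value and aircraft positions are illustrative assumptions, not figures from the paper):

```python
import itertools
import math

MIN_SEPARATION_M = 9260.0  # ~5 nautical miles, a common en-route horizontal standard

def separation_violations(positions):
    """Return pairs of aircraft closer than the minimum horizontal separation.

    positions: dict mapping aircraft id -> (x, y) position in metres.
    """
    violations = []
    for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2):
        if math.dist(pa, pb) < MIN_SEPARATION_M:
            violations.append((a, b))
    return violations

# Hypothetical traffic snapshot: AC2 and AC3 are only 5,000 m apart
snapshot = {"AC1": (0.0, 0.0), "AC2": (20000.0, 0.0), "AC3": (24000.0, 3000.0)}
print(separation_violations(snapshot))  # [('AC2', 'AC3')]
```

Under lost or stale communication, the positions fed into a check like this diverge from the aircraft's true positions, which is precisely how information loss degrades a separation manager's effective performance.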
Abstract:
US state-based data breach notification laws have unveiled serious corporate and government failures regarding the security of personal information. These laws require organisations to notify persons who may be affected by an unauthorized acquisition of their personal information. Safe harbours from notification exist if personal information is encrypted. Three types of safe harbour have been identified in the literature: exemptions, rebuttable presumptions and factors. The underlying assumption of exemptions is that encrypted personal information is secure and therefore unauthorized access does not pose a risk. However, the viability of this assumption is questionable when examined against data breaches involving encrypted information and the demanding practical requirements of effective encryption management. Recent recommendations by the Australian Law Reform Commission (ALRC) would amend the Privacy Act 1988 (Cth) to implement a data breach scheme that includes a different type of safe harbour, factor-based analysis. The authors examine the potential capability of the ALRC's proposed encryption safe harbour in relation to the US experience at the state legislature level.
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of the sensors involved (as opposed to radar). This paper describes the development and evaluation of a vision-based collision detection algorithm suitable for fixed-wing aerial robotics. The system was evaluated using highly realistic vision data of the moments leading up to a collision. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We make use of the enormous potential of graphics processing units to achieve processing rates of 30 Hz for images of size 1024-by-768. Integration into the final platform is currently under way.
Abstract:
This paper proposes a novel automated separation management concept in which onboard decision support is integrated within a centralised air traffic separation management system. The onboard decision support system involves a decentralised separation manager that can overrule air traffic management instructions under certain circumstances. This approach allows the advantages of both centralised and decentralised concepts to be combined (and disadvantages of each separation management approach to be mitigated). Simulation studies are used to illustrate the potential benefits of the combined separation management concept.
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024-by-768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft, where all processing is performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
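The relationship between detection range and warning time is simple arithmetic over an assumed closing speed. A sketch (the 100 m/s closing speed is an illustrative assumption, not a value from the paper):

```python
def warning_time_s(detection_range_m: float, closing_speed_mps: float) -> float:
    """Seconds of advance warning before impact at a constant closing speed."""
    return detection_range_m / closing_speed_mps

# Illustrative head-on geometry: two aircraft each at ~50 m/s
# gives a combined closing speed of ~100 m/s.
closing_mps = 100.0
for range_m in (400.0, 900.0):
    print(f"{range_m:.0f} m -> {warning_time_s(range_m, closing_mps):.1f} s warning")
```

At that assumed speed a 900 m detection yields 9 s of warning, which is why the achievable warning window depends as much on encounter geometry and closing speed as on raw detection range.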
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred to a hospital alcohol and drug service for alcohol-use disorders voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
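Concurrent validity of the kind reported here (bMAST against AUDIT) is typically quantified as a Pearson correlation between the two instruments' total scores. A minimal sketch on synthetic totals generated from a shared latent severity (illustrative data, not the study's):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson product-moment correlation between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical totals for two screening instruments driven by one
# latent drinking-severity trait plus instrument-specific noise
rng = np.random.default_rng(1)
severity = rng.normal(size=500)
bmast_total = 20 + 6 * severity + rng.normal(scale=4, size=500)
audit_total = 18 + 5 * severity + rng.normal(scale=4, size=500)
r = pearson_r(bmast_total, audit_total)
print(round(r, 2))
```

A moderate positive r, as the abstract reports, indicates the two scales rank patients similarly without being interchangeable.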
Abstract:
This paper proposes a security architecture for the basic cross-indexing systems emerging as foundational structures in current health information systems. In these systems, unique identifiers are issued to healthcare providers and consumers. In most cases, such numbering schemes are national in scope and must therefore necessarily be used via an indexing system to identify records contained in pre-existing local, regional or national health information systems. Most large-scale electronic health record systems envisage that such correlation between national healthcare identifiers and pre-existing identifiers will be performed by some centrally administered cross-referencing, or index, system. This paper is concerned with the security architecture for such indexing servers and the manner in which they interface with pre-existing health systems (including both workstations and servers). The paper proposes two structures required to achieve the goal of a national-scale, secure exchange of electronic health information: (a) the employment of high-trust computer systems to perform the indexing function, and (b) the development and deployment of an appropriate high-trust interface module, a Healthcare Interface Processor (HIP), to be integrated into the connected workstations or servers of healthcare service providers. This proposed architecture is specifically oriented toward requirements identified in the Connectivity Architecture for Australia's e-health scheme as outlined by NEHTA and the national e-health strategy released by the Australian Health Ministers.
Abstract:
Confirmatory factor analyses were conducted to evaluate the factorial validity of the Toronto Alexithymia Scale in an alcohol-dependent sample. Several factor models were examined, but all models were rejected given their poor fit. A revision of the TAS-20 in alcohol-dependent populations may be needed.
Communicating with first year students: so many channels, but is anyone listening? A practice report
Abstract:
Communicating with first year students has become a far more complex prospect in the digital age. Media sources across almost endless channels compete for students' limited attention. Getting important messages to students amid so much competing information is difficult for academic and professional divisions of the university alike. Students' preferences among these communication channels are not well understood and are constantly changing with the introduction of new technology. A first year group was surveyed about their use of, and preference for, various sources of information. Students were generally positive about the use of social networking and other new online media, but strongly preferred more established channels for official academic and administrative information. A discussion of the findings and recommendations follows.
Abstract:
Over the years, approaches to obesity prevention and treatment have gone from focusing on genetic and other biological factors, to exploring a diversity of diets and individual behavior modification interventions anchored primarily in the power of the mind, to the recent shift toward societal interventions that design "temptation-proof" physical, social, and economic environments. In spite of repeated calls to action, including those of the World Health Organization (WHO), the pandemic continues to progress. WHO recently projected that if the current lifestyle trend in young and adult populations around the world persists, by 2012 in countries like the USA health care costs may amount to as much as 17.7% of GDP. Most importantly, in large part due to the problems of obesity, today's children may be the first generation ever to have a shorter life expectancy than that of their parents. This work presents the most current research and proposals for addressing the pandemic. Past studies have focused primarily on either genetic or behavioral causes for obesity; however, today's research indicates that a strongly integrated program is the best prospect for success in overcoming obesity. Furthermore, focus on the role of society in establishing an affordable, accessible and sustainable program for implementing these lifestyle changes is vital, particularly for those in economically challenged situations, who are ultimately at the highest risk for obesity. Drawing on studies from both neuroscience and behavioral science to present a comprehensive overview of the challenges and possible solutions, The Brain-to-Society Approach to Obesity Prevention focuses on what is needed in order to sustain a healthy, pleasurable and affordable lifestyle.
Abstract:
Knowledge of the regulation of food intake is crucial to an understanding of body weight and obesity. Strictly speaking, we should refer to the control of food intake whose expression is modulated in the interests of the regulation of body weight. Food intake is controlled, body weight is regulated. However, this semantic distinction only serves to emphasize the importance of food intake. Traditionally food intake has been researched within the homeostatic approach to physiological systems pioneered by Claude Bernard, Walter Cannon and others; and because feeding is a form of behaviour, it forms part of what Curt Richter referred to as the behavioural regulation of body weight (or behavioural homeostasis). This approach views food intake as the vehicle for energy supply whose expression is modulated by a metabolic drive generated in response to a requirement for energy. The idea was that eating behaviour is stimulated and inhibited by internal signalling systems (for the drive and suppression of eating respectively) in order to regulate the internal environment (energy stores, tissue needs).