140 results for Face processing research
Abstract:
Few studies have investigated iatrogenic outcomes from the viewpoint of patient experience. To address this gap, the broad aim of this research is to explore the lived experience of patient harm. Patient harm is defined as major harm to the patient, either psychosocial or physical in nature, resulting from any aspect of health care. Utilising the method of Consensual Qualitative Research (CQR), in-depth interviews are conducted with twenty-four volunteer research participants who self-report having been severely harmed by an invasive medical procedure. A standardised measure of emotional distress, the Impact of Event Scale (IES), is additionally employed for purposes of triangulation. Thematic analysis of transcript data indicates numerous findings, including: (i) difficulties regarding patients' prior understanding of the risks involved with their medical procedure; (ii) the problematic response of the health system post-procedure; (iii) multiple adverse effects upon life functioning; (iv) limited recourse options for patients; and (v) the approach desired in terms of how patient harm should be systemically handled. In addition, IES results indicate a clinically significant level of distress in the sample as a whole. To discuss the findings, a cross-disciplinary approach is adopted that draws upon sociology, medicine, medical anthropology, psychology, philosophy, history, ethics, law, and political theory. Furthermore, an overall explanatory framework is proposed in terms of the master themes of power and trauma. In terms of the theme of power, a postmodernist analysis explores the politics of patient harm, particularly the dynamics surrounding the politics of knowledge (e.g., notions of subjective versus objective knowledge, informed consent, and open disclosure). This analysis suggests that patient care is not the prime function of the health system, which appears more focussed upon serving the interests of those in the upper levels of its hierarchy.
In terms of the master theme of trauma, current understandings of posttraumatic stress disorder (PTSD) are critiqued, and based on data from this research as well as the international literature, a new model of trauma is proposed. This model is based upon the principle of homeostasis observed in biology, whereby within every cell or organism a state of equilibrium is sought and maintained. The proposed model identifies several bio-psychosocial markers of trauma across its three main phases. These trauma markers include: (i) a profound sense of loss; (ii) a lack of perceived control; (iii) passive trauma processing responses; (iv) an identity crisis; (v) a quest to fully understand the trauma event; (vi) a need for social validation of the traumatic experience; and (vii) posttraumatic adaptation with the possibility of positive change. To further explore the master themes of power and trauma, a natural group interview is carried out at a meeting of a patient support group for arachnoiditis. Observations at this meeting, and members' stories in general, support the homeostatic model of trauma, particularly the quest to find answers in the face of distressing experience, as well as the need for social recognition of that experience. In addition, the sociopolitical response to arachnoiditis highlights how public domains of knowledge are largely constructed and controlled by vested interests. Implications of the data overall are discussed in terms of a cultural revolution being needed in health care to position core values around a prime focus upon patients as human beings.
Abstract:
Visual detection of lip-movement activity can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted on visual voice activity detection (VVAD) has neglected variabilities in the visual domain, such as viewpoint variation. In this paper we investigate the effectiveness of visual information from the speaker's frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker's face may not always be frontal.
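A GMM-based VVAD of the kind described above can be sketched in a few lines: one mixture model per class (speech vs. non-speech) is fitted to lip-region feature vectors, and each frame is labelled by the higher likelihood. The feature dimensionality, component count, and synthetic data below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a two-class GMM visual voice activity detector.
# Features stand in for lip-region descriptors (e.g., DCT coefficients);
# all data here are synthetic, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy training data: "speech" frames vary more than "non-speech" frames.
speech_train = rng.normal(0.0, 2.0, size=(500, 10))
silence_train = rng.normal(0.0, 0.5, size=(500, 10))

# One GMM per class.
gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_train)
gmm_silence = GaussianMixture(n_components=4, random_state=0).fit(silence_train)

def is_speech(frame_features):
    """Label a frame by whichever class GMM assigns higher log-likelihood."""
    f = np.atleast_2d(frame_features)
    return gmm_speech.score(f) > gmm_silence.score(f)

print(is_speech(rng.normal(0.0, 2.0, size=10)))
```

In a real system the per-frame decision would typically be smoothed over time (e.g., with a median filter or an HMM) before being fused with the audio-domain detector.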
Abstract:
The processes used in Australian universities for reviewing the ethics of research projects are based on the traditions of research and practice from the medical and health sciences. The national guidelines for ethical conduct in research are heavily based on presumptions that the researcher–participant relationship is similar to a doctor–patient relationship. The National Health and Medical Research Council, Australian Research Council and Australian Vice-Chancellors’ Committee have made a laudable effort to fix this problem by releasing the National Statement on Ethical Conduct in Human Research in 2007, to replace the 1999 National Statement on Ethical Conduct in Research Involving Humans. The new statement better encompasses the needs of the humanities, social sciences and creative industries. However, this paper argues that the revised National Statement and ethical review processes within universities still do not fully encompass the definitions of ‘research’ and the requirements, traditions, codes of practice and standards of the humanities, social sciences and creative industries. The paper argues that scholars within these disciplines often lack the language to articulate their modes of practice and risk management strategies to university-level ethics committees. As a consequence, scholars from these disciplines may find their research is delayed or stymied. The paper focuses on creative industries researchers, and explores the issues that they face in managing the ethical review process, particularly when engaging in practice-based research. Although the focus is on the creative industries, the issues are relevant to most fields in the humanities and social sciences.
Abstract:
Gray's (2000) revised Reinforcement Sensitivity Theory (r-RST) was used to investigate personality effects on information processing biases towards gain-framed and loss-framed anti-speeding messages, and the persuasiveness of these messages. The r-RST postulates that behaviour is regulated by two major motivational systems: a reward system and a punishment system. It was hypothesised that both message processing and persuasiveness would depend upon an individual's sensitivity to reward or punishment. Student drivers (N = 133) were randomly assigned to view one of four anti-speeding messages or no message (control group). Individual processing differences were then measured using a lexical decision task, prior to participants completing a personality and persuasion questionnaire. Results indicated that participants who were more sensitive to reward showed a marginally significant (p = .050) tendency to report higher intentions to comply with the social gain-framed message, and to demonstrate a cognitive processing bias towards this message, compared with those lower in reward sensitivity.
Abstract:
We propose an approach that employs eigen light-fields for face recognition across pose in video. Faces of a subject are collected from video frames and combined based on pose to obtain a set of probe light-fields. These probe data are then projected onto the principal subspace of the eigen light-fields, within which classification takes place. We modify the original light-field projection and find that the modified projection is more robust in the proposed system. Evaluation on the VidTIMIT dataset demonstrates that the eigen light-fields method is able to take advantage of the multiple observations contained in video.
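The projection-and-classify step at the heart of the method can be illustrated with a simplified eigenface-style sketch: gallery vectors define a principal subspace, probes are projected into it, and classification is nearest-neighbour on the coefficients. This omits the pose-dependent assembly of light-fields from video frames; all data and dimensions below are synthetic assumptions.

```python
# Simplified sketch: PCA subspace projection + nearest-neighbour matching.
# The paper's pose-indexed light-field construction is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gallery: 3 subjects, 5 noisy observations each.
dim, per_subject = 64, 5
means = rng.normal(0, 1, size=(3, dim))
gallery = np.vstack([m + 0.1 * rng.normal(size=(per_subject, dim)) for m in means])
labels = np.repeat(np.arange(3), per_subject)

# Principal subspace of the centred gallery (top-k right singular vectors).
mean_face = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean_face, full_matrices=False)
W = Vt[:8]  # keep 8 principal components

def project(x):
    return W @ (x - mean_face)

gallery_coeffs = np.array([project(g) for g in gallery])

def identify(probe):
    """Return the label of the nearest gallery sample in the subspace."""
    d = np.linalg.norm(gallery_coeffs - project(probe), axis=1)
    return labels[np.argmin(d)]

probe = means[1] + 0.1 * rng.normal(size=dim)
print(identify(probe))  # expected: subject 1
```

With multiple probe observations from video, the per-frame decisions (or distances) would be pooled across frames, which is where the multi-observation advantage reported above comes from.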
Abstract:
Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, consumer impact, compensation claims and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project will develop non-destructive testing (NDT) technology using digital imaging and sensing, combined with a differential vacuum technique, to assess the seal integrity of food packages on a high-speed production line. The cost of leaky packages in Australian food industries is estimated at close to AUD $35 million per year. Flexible plastic packages are widely used, and are the least expensive form of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. There are many current NDT methods of testing the seals of flexible packages that are best suited to random sampling and laboratory purposes.
The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their long processing times and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed to arrive at a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with consistently good results. The electrical testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make informed decisions. Each major mechanical and electrical component is detailed through the research and design process. The design procedure works methodically through the various major functions from both mechanical and electrical perspectives. It canvasses alternative ideas for the major components which, although sometimes not practical in this application, show that the researcher has exhausted the available engineering and functional options. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would utilise standard, locally manufactured and distributed, industry-available components.
Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines, and in other areas of the non-food processing industry.
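The vacuum/pressure-decay principle the HSDS builds on reduces to a simple decision rule: draw a vacuum around the package, monitor chamber pressure over the test window, and flag a leak if the pressure rises faster than an acceptance threshold. The threshold, units, and function below are illustrative assumptions, not values from the project.

```python
# Toy illustration of a pressure-decay leak check. A rising chamber
# pressure during the vacuum hold indicates air escaping a leaky seal.
# Threshold and readings are hypothetical, for illustration only.
def is_leaky(pressure_readings_kpa, max_rise_kpa=0.2):
    """Flag a package as leaky if chamber pressure rises beyond a threshold
    between the first and last readings of the vacuum hold period."""
    return (pressure_readings_kpa[-1] - pressure_readings_kpa[0]) > max_rise_kpa

print(is_leaky([10.0, 10.01, 10.02]))  # intact seal: pressure stable
print(is_leaky([10.0, 10.4, 10.9]))    # leaking seal: pressure rising
```

An in-line system would combine such a rule with the imaging and sensing channels per station, and the threshold would be calibrated against known-good and known-leaky reference packages.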
Abstract:
Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip, as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students studying in a Bachelor of Business degree, majoring in marketing, and taking the Digital Promotions subject at a metropolitan university in Queensland, Australia, participated in the study. The samples included students who had and had not experienced the field trip. The numbers of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and lecturer suggested that an innovative learning opportunity had been developed.
However, pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.
Abstract:
The 31st TTRA conference was held in California’s San Fernando Valley, home of Hollywood and Burbank’s movie and television studios. The twin themes of Hollywood and the new millennium promised and delivered “something old, yet something new”. The meeting offered a historical summary, not only of the year in review but also of many features of travel research since the first literature in the field appeared in the 1970s. The millennium theme also set the scene for some stimulating and forward-thinking discussions. The Hollywood location offered an opportunity to ponder the value of movie-induced tourism for Los Angeles, at a time when Hollywood Boulevard was in the midst of a much-needed redevelopment programme. Hollywood Chamber of Commerce speaker Oscar Arslanian acknowledged that the face of the famous district had become tired, and that its ability to continue to attract visitors in the future lay in redeveloping its past heritage. In line with the Hollywood theme, a feature of the conference was a series of six special sessions with “Stars of Travel Research”. These sessions featured Clare Gunn, Stanley Plog, Charles Goeldner, John Hunt, Brent Ritchie, Geoffrey Crouch, Peter Williams, Douglas Frechtling, Turgut Var, Robert Christie-Mill, and John Crotts. Delegates were indeed privileged to hear from many of the pioneers of tourism research. Clare Gunn, Charles Goeldner, Turgut Var and Stanley Plog, for example, traced the history of different aspects of the tourism literature and, in line with the millennium theme, offered some thought-provoking discussion of the future challenges facing tourism. These included: the commoditisation of airlines and destinations; airport and traffic congestion; environmental sustainability responsibility; and the looming burst of the baby-boomer bubble. Included in the conference proceedings are four papers presented by five of the “Stars”.
Brent Ritchie and Geoffrey Crouch discuss the critical success factors for destinations, Clare Gunn shares his concerns about tourism being a smokestack industry, Doug Frechtling provides forecasts of outbound travel from 20 countries, and Charles Goeldner, who has attended all 31 TTRA conferences, reflects on the changes that have taken place in tourism research over 35 years.
Abstract:
The low resolution of images has been one of the major limitations in recognising humans at a distance using their biometric traits, such as the face and iris. Super-resolution has been employed to improve resolution and recognition performance simultaneously; however, the majority of techniques operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by directly super-resolving the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework for conducting super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
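To make the "non-linear Gabor feature domain" concrete, the sketch below extracts a Gabor-based feature vector from an image patch: the patch is convolved with a small bank of oriented Gabor filters and the response magnitudes are concatenated. The kernel size, wavelength, and orientation count are illustrative assumptions, not the paper's filter-bank design, and no super-resolution step is shown.

```python
# Illustrative Gabor feature extraction: filter-bank responses over
# several orientations, magnitudes concatenated into one vector.
# Filter parameters are assumptions, for illustration only.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor filter at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(image, orientations=4):
    """Concatenate filter-response magnitudes over several orientations."""
    feats = []
    for k in range(orientations):
        kernel = gabor_kernel(9, 2.0, k * np.pi / orientations, 4.0)
        resp = convolve2d(image, kernel, mode="same")
        feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

face = np.random.default_rng(2).random((16, 16))  # stand-in face patch
print(gabor_features(face).shape)  # (4 orientations * 16 * 16,) = (1024,)
```

Feature-domain super-resolution would then learn a mapping from such low-resolution feature vectors to their high-resolution counterparts, rather than super-resolving pixels first.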
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). The PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using the PLDA: the closest pair approach and the holistic sets approach. To better model face appearances in video, we also propose a heteroscedastic version of the PLDA, which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest pair approach achieves the best performance.
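The two set-matching strategies compared above can be sketched generically: "closest pair" scores two image sets by their single best-matching pair of frames, while "holistic" compares set-level summaries. In this sketch cosine similarity stands in for the PLDA likelihood-ratio score, and the feature vectors are synthetic; it illustrates the matching strategies only, not PLDA itself.

```python
# Generic sketch of "closest pair" vs. "holistic" image-set matching.
# Cosine similarity is a stand-in for the PLDA-based score.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_pair_similarity(set_a, set_b):
    """Score two image sets by their single best-matching frame pair."""
    return max(cosine(a, b) for a in set_a for b in set_b)

def holistic_similarity(set_a, set_b):
    """Score two image sets by comparing their mean feature vectors."""
    return cosine(np.mean(set_a, axis=0), np.mean(set_b, axis=0))

rng = np.random.default_rng(3)
base = rng.normal(size=32)                               # one "identity"
same = [base + 0.1 * rng.normal(size=32) for _ in range(5)]
other = [rng.normal(size=32) for _ in range(5)]          # a different identity

print(closest_pair_similarity(same, same) > closest_pair_similarity(same, other))
```

The closest-pair strategy is tolerant of outlier frames (blur, extreme pose) in either set, since only the best pair matters, which is one plausible reason it pairs well with the per-subject covariance modelling described above.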
Abstract:
Background: Scientific research is an essential component in guiding improvements in health systems. There are no studies examining Sri Lankan medical research output at the international level. The present study evaluated Sri Lankan research performance in medicine, as reflected by research publication output between 2000 and 2009. Methods: This study was based on Sri Lankan medical research publication data retrieved from SciVerse Scopus® from January 2000 to December 2009. Articles were selected as follows: Affiliation - 'Sri Lanka' or 'Ceylon'; Publication year - 'January 2000 to December 2009'; and Subject area - 'Life and Health Sciences'. The articles identified were classified according to disease, medical speciality, institution, major international collaborators, authors and journals. Results: Sri Lanka's cumulative medical publication output between 2000 and 2009 was 1,740 articles published in 160 different journals. The average annual publication growth rate was 9.1%. The majority of articles were published in 'International' (n = 950, 54.6%) journals. Most articles were descriptive studies (n = 611, 35.1%), letters (n = 345, 19.8%) and case reports (n = 311, 17.9%). The articles were authored by 148 different Sri Lankan authors from 146 different institutions. The three most prolific local institutions were the Universities of Colombo (n = 547), Kelaniya (n = 246) and Peradeniya (n = 222). Eighty-four countries were found to have published collaborative papers with Sri Lankan authors during the last decade. The UK was the largest collaborating partner (n = 263, 15.1%). Malaria (n = 75), diabetes mellitus (n = 55), dengue (n = 53), accidental injuries (n = 42) and lymphatic filariasis (n = 40) were the major diseases studied. The 1,740 publications were cited 9,708 times, an average of 5.6 citations per paper. The most cited paper had 203 citations, while 597 publications had no citations.
The Sri Lankan authors' contribution to global medical research output during the last decade was only 0.086%. Conclusion: The Sri Lankan medical research output during the last decade is only a small fraction of the global research output. There is therefore a need to set up an enabling environment for research, with a proper vision, support, funding and training. In addition, collaborations across the region need to be strengthened to face common regional health challenges. Keywords: Sri Lanka, Medical research, Publication, Analysis
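The reported citation average can be reproduced directly from the figures given in the abstract:

```python
# Sanity check of the reported summary statistic (figures as given above).
total_publications = 1740
total_citations = 9708
average_citations = round(total_citations / total_publications, 1)
print(average_citations)  # → 5.6
```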
Abstract:
Using Gray and McNaughton's (2000) revised Reinforcement Sensitivity Theory (r-RST), we examined the influence of personality on the processing of words presented in gain-framed and loss-framed anti-speeding messages, and how the processing biases associated with personality influenced message acceptance. The r-RST predicts that the nervous system regulates personality and that behaviour depends upon the activation of the Behavioural Activation System (BAS), activated by reward cues, and the Fight-Flight-Freeze System (FFFS), activated by punishment cues. According to r-RST, individuals differ in the sensitivities of their BAS and FFFS (i.e., from weak to strong), which in turn leads to stable patterns of behaviour in the presence of rewards and punishments, respectively. It was hypothesised that individual differences in personality (i.e., the strength of the BAS and the FFFS) would influence the degree of both message processing (as measured by reaction time to previously viewed message words) and message acceptance (measured three ways: by perceived message effectiveness, behavioural intentions, and attitudes). Specifically, it was anticipated that individuals with a stronger BAS would process the words presented in the gain-framed messages faster than those with a weaker BAS, and individuals with a stronger FFFS would process the words presented in the loss-framed messages faster than those with a weaker FFFS. Further, it was expected that greater processing (faster reaction times) would be associated with greater acceptance of that message. Driver licence-holding students (N = 108) were recruited to view one of four anti-speeding messages (i.e., social gain-frame, social loss-frame, physical gain-frame, and physical loss-frame). A computerised lexical decision task assessed participants' subsequent reaction times to message words, as an indicator of the extent of processing of the previously viewed message.
Self-report measures assessed personality and the three message acceptance measures. As predicted, the degree of initial processing of the content of the social gain-framed message mediated the relationship between the reward sensitive trait and message effectiveness. Initial processing of the physical loss-framed message partially mediated the relationship between the punishment sensitive trait and both message effectiveness and behavioural intention ratings. These results show that reward sensitivity and punishment sensitivity traits influence cognitive processing of gain-framed and loss-framed message content, respectively, and subsequently, message effectiveness and behavioural intention ratings. Specifically, a range of road safety messages (i.e., gain-framed and loss-framed messages) could be designed which align with the processing biases associated with personality and which would target those individuals who are sensitive to rewards and those who are sensitive to punishments.
Abstract:
Increased participation in the internet economy is actively encouraged and supported by all levels of government. Research to date clearly shows the positive impacts that increased internet access can bring, particularly for rural Australia. Meanwhile, for the most part, identification of any negative impacts of increased broadband access on existing and potential property uses is avoided. The aim of this article is to identify issues for property use arising as a consequence of increased engagement in the internet economy. The article commences by clarifying what is meant by the term ‘internet economy’ before highlighting current impacts of the internet. It concludes by suggesting potential impacts for property and property uses in the future.
Abstract:
Chronic nursing shortages have placed increasing pressure on many nursing schools to recruit greater numbers of students with the consequence of larger class sizes. Larger class sizes have the potential to lead to student disengagement. This paper describes a case study that examined the strategies used by a group of nursing lecturers to engage students and to overcome passivity in a Bachelor of Nursing programme. A non-participant observer attended 20 tutorials to observe five academics deliver four tutorials each. Academics were interviewed both individually and as a group following the completion of all tutorial observations. All observations, field notes, interviews and focus groups were coded separately and major themes identified. From this analysis two broad categories emerged: getting students involved; and engagement as a struggle. Academics used a wide variety of techniques to interest and involve students. Additionally, academics desired an equal relationship with students. They believed that both they and the students had some power to influence the dynamics of tutorials and that neither party had ultimate power. The findings of this study serve to re-emphasise past literature which suggests that to engage students, the academics must also engage.
Abstract:
Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers; a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.