967 results for Kleisli Category
Abstract:
Cultural issues have become an increasingly important consideration in healthcare. Such cultural issues, however, are under-researched in Australia, especially in palliative care. This study sought to address this gap, exploring the social construction of cultural issues in palliative care by oncology nurses. A grounded theory approach was used. Semi-structured interviews with 7 Australian oncology nurses provided the data for the study. The core category emerging from the study was accommodating cultural needs: nurses actively found ways to accommodate the specific cultural requirements of patients and their families. This process often included compromise and negotiation in which limits were set. In addition, the use of cross-cultural communication strategies emerged from the data as an important feature of cultural care. A series of subcategories were also identified as factors that could influence the process by which nurses would accommodate cultural needs.
Abstract:
Principal Topic: For forward-thinking companies, the environment may represent the "biggest opportunity for enterprise and invention the industrial world has ever seen" (Cairncross 1990). Growing media attention, including the promotion of Al Gore's "An Inconvenient Truth", has increased awareness of environmental and sustainability issues and demand for business processes that reduce the detrimental environmental impacts of global development (Dean & McMullen 2007). The increased demand for more environmentally sensitive products and services represents an opportunity for the development of ventures that seek to satisfy this demand through entrepreneurial action. As a consequence, recent market developments in renewable energy, carbon emissions, fuel cells, green building, and other sectors suggest that opportunities for environmental entrepreneurship are growing in importance (Dean and McMullen 2007) and that this is an increasingly important area of business activity (Schaper 2005). In the last decade in particular, big business has sought to develop a more "sustainability/green friendly" orientation as a response to public pressure and increased government legislation and policy to improve environmental performance (Cohen and Winn 2007). Whilst much of the literature and media is littered with examples of the sustainability practices of large firms, nascent and young sustainability firms have only recently begun generating strong research and policy interest (Shepherd, Kuskova and Patzelt 2009): not only for their potential to generate above-average financial performance and returns owing to the growing popularity of, and demand for, sustainability products and services, but also for their intent to lessen environmental impacts, and to provide a more accurate reflection of the "true cost" of market offerings, taking into account carbon and environmental impacts. More specifically, researchers have suggested that although the previous focus has been on large firms and their impact on the environment, the estimated collective impact of entries and exits of nascent and young firms in development is substantial and could outweigh the combined environmental impact of large companies (Hillary, 2000). Therefore, it may be argued that greater attention should be paid to researching the sustainability practices of nascent and young firms, both for their role in reducing environmental impacts and for their potential for higher financial performance. Whilst this research uses only the first wave of a four-year longitudinal study of nascent and young firms, it can still provide an initial analysis on which to build further research. The aim of this paper, therefore, is to provide an overview of the emerging literature in sustainable entrepreneurship and to present selected preliminary results from the first wave of the data collection, with comparison, where appropriate, of sustainable firms and firms that do not fulfil these criteria. "One of the key challenges in evaluating sustainability entrepreneurship is the lack of agreement in how it is defined" (Schaper, 2005: 10). Some evaluate sustainable entrepreneurs simply as one category of entrepreneurs, with little difference between them and traditional entrepreneurs (Dees, 1998). Other research recognises values-based sustainable enterprises as requiring a unique perspective (Parrish, 2005).
Some see environmental or sustainable entrepreneurship as a subset of social entrepreneurship (Cohen & Winn, 2007; Dean & McMullen, 2007), whilst others see it as a separate, distinct theory (Archer 2009). Following one of the first definitions of sustainability, developed by the Brundtland Commission (1987), we define sustainable entrepreneurship as firms which "seek to meet the needs and aspirations of the present without compromising the ability to meet those of the future". ---------- Methodology/Key Propositions: In this exploratory paper we investigate sustainable entrepreneurship using Cohen et al.'s (2008) framework to identify strategies of nascent and young entrepreneurial firms. We use data from The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE). This study shares its general empirical approach with the PSED studies in the US (Reynolds et al. 1994; Reynolds & Curtin 2008). The overall study uses samples of 727 nascent (not yet operational) firms and another 674 young firms, the latter being in an operational stage but less than four years old. To generate the subsample of sustainability firms, we used content analysis techniques on firm titles, descriptions and product descriptions provided by respondents. Two independent coders used a predefined codebook developed from our review of the sustainability entrepreneurship literature (Cohen et al. 2009) to evaluate the content based on terms such as "sustainable", "eco-friendly", "renewable energy" and "environment", amongst others. Inter-rater reliability was checked and the kappa coefficient was found to be within the acceptable range (0.746). 85 firms fulfilled the criteria given for inclusion in the sustainability cohort. ---------- Results and Implications: The results for this paper are based on Wave One of the CAUSEE survey, which has been completed; the data are available for analysis. It is expected that the findings will assist in beginning to develop an understanding of nascent and young firms that are driven to contribute to a society which is sustainable not just from an economic perspective (Cohen et al. 2008), but from an environmental and social perspective as well. The CAUSEE study provides an opportunity to compare the characteristics of sustainability entrepreneurs with those of entrepreneurial firms without a stated environmental purpose, which constitute the majority of the new firms created each year, using a large-scale, novel longitudinal dataset. The results have implications for government in the design of better conditions for the creation of new businesses, for those who assist sustainability firms in developing better advice programs in line with a better understanding of their needs and requirements, for individuals who may be considering becoming entrepreneurs in high-potential arenas, and for existing entrepreneurs in making better decisions.
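As an illustration of the inter-rater reliability check described above, the following is a minimal Python sketch of Cohen's kappa for two coders' binary "sustainability firm" judgements. The coder labels are invented for illustration and are not the CAUSEE coding data; the function simply implements the standard kappa formula (observed agreement corrected for chance agreement).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each coder's
    marginal label frequencies.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels only: 1 = firm coded as "sustainability", 0 = not.
coder_1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
coder_2 = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(coder_1, coder_2), 3))
```

Kappa is preferred over raw percentage agreement for codebook-based content analysis because it discounts the agreement two coders would reach by chance alone.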
Abstract:
Working with 12 journalism students plus a research assistant, producer/director Romano conducted five community focus groups and discussions with 80 people on the street. These provided the themes, concepts and creative approaches for each program. Each was structured around one of the emergent themes; all programs offered different voices rather than coming to a single conclusion. New Horizons, New Homes aired over three weeks on Radio 4EB and was entered into the 2005 UN Media Peace Award, where it won the Best Radio category ahead of the ABC and SBS. The UN commended the way in which the programs brought together a wide base of research to create a better understanding in the community on this issue. This project did not just improve the accuracy and social inclusiveness of reporting. It applied principles of deliberative democracy in the creation of journalism that enhances citizens’ deliberative potential on complex social issues.
Abstract:
The management of congenital talipes equinovarus (TEV) has received much clinical and research attention within the disciplines of medicine and physiotherapy. However, few articles have been published about the role of the registered nurse in contributing to the optimum health and wellbeing of the child and family presenting for assessment and treatment of the condition. Much of the most intense treatment for TEV occurs in the first few weeks of the child’s life, a time of critical growth and development when the infant is both sensitive and vulnerable to the environment within which it is nurtured. This is also a crucial time for developing a secure attachment to the caregiver, and nurses can assist parents in optimising their infant’s opportunities for a secure attachment relationship. This paper thus provides an overview of the medical and physiotherapy management of TEV and highlights the role nurses have in providing nursing care and psycho-social support to parents of infants with TEV, in areas such as the maintenance of skin integrity and circulation, providing effective pain relief, and optimising growth, development, and a secure attachment relationship. Congenital TEV or 'club foot' is one of the most common congenital orthopaedic anomalies of infants. In Queensland in 2000, in approximately 50,000 births, 244 infants were born with talipes; almost five infants per 1000 births. National statistics are not as specific, with coding providing only 'other lower limb' as the category that would encompass talipes; the 1997 national incidence of such lower limb congenital malformations is reported as 1.7 per 10,000 births [1].
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. This need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads associated with stormwater flows from this area, with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness in land use results in distinctive stormwater quality characteristics different to those of other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or to consider it as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration processes were involved due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature.
However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly due to the fact that only one specific location was considered, namely the Port of Brisbane, unlike in the case of the MUSIC model, where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not totally applicable for the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses. * The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 µm particle size range was predominant in suspended solids for pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 µm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
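The build-up and wash-off processes referred to in the study are commonly represented in SWMM-style models by exponential functions. The Python sketch below illustrates that generic formulation only; the parameter values (b_max, k_b, c1, c2) are placeholders for illustration and are not the measured Port of Brisbane values.

```python
import math

def buildup(days, b_max, k_b):
    """Exponential pollutant build-up (kg/ha) after `days` of dry weather:
    B(t) = b_max * (1 - exp(-k_b * t)), where b_max is the asymptotic
    surface load and k_b the build-up rate constant."""
    return b_max * (1.0 - math.exp(-k_b * days))

def washoff(load, runoff_rate, c1, c2, dt_hours):
    """Exponential wash-off over one time step: the wash-off rate is
    c1 * q**c2 * B, where q is the runoff rate (mm/h) and B the load
    remaining on the surface.  Returns (mass washed off, remaining load)."""
    rate = c1 * (runoff_rate ** c2) * load      # kg/ha per hour
    washed = min(load, rate * dt_hours)
    return washed, load - washed

# Placeholder parameters for illustration only: 7 dry days, then a 2 h event.
load = buildup(days=7, b_max=12.0, k_b=0.4)      # kg/ha available on the surface
for step in range(4):                             # 4 x 0.5 h time steps
    washed, load = washoff(load, runoff_rate=8.0, c1=0.05, c2=1.2, dt_hours=0.5)
    print(f"step {step}: washed {washed:.2f} kg/ha, remaining {load:.2f} kg/ha")
```

Using measured build-up and wash-off parameters in equations of this form is what allows the replication described above to proceed without a separate calibration step.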
Abstract:
In an environment where it has become increasingly difficult to attract consumer attention, marketers have begun to explore alternative forms of marketing communication. One such form that has emerged is product placement, which has more recently appeared in electronic games. Given changes in media consumption and the growth of the games industry, it is not surprising that games are being exploited as a medium for promotional content. Other market developments are also facilitating and encouraging their use, in terms of both the insertion of brand messages into video games and the creation of brand-centred environments, labelled ‘advergames’. However, while there is much speculation concerning the beneficial outcomes for marketers, there remains a lack of academic work in this area and little empirical evidence of the actual effects of this form of promotion on game players. Only a handful of studies are evident in the literature that have explored the influence of game placements on consumers. The majority have studied their effect on brand awareness, largely demonstrating that players can recall placed brands. Further, most research conducted to date has focused on computer and online games, but consoles represent the dominant platform for play (Taub, 2004). Finally, advergames have largely been neglected, particularly those in a console format. Widening the gap in the literature is the fact that insufficient academic attention has been given to product placement as a marketing communication strategy overall, and to games in general. The unique nature of the strategy also makes it difficult to apply existing literature to this context. To address a significant need for information in both the academic and business domains, the current research investigates the effects of brand and product placements in video games and advergames on consumer attitude to the brand and corporate image. It was conducted in two stages. Stage one represents a pilot study. It explored the effects of use-simulated and peripheral placements in video games on players’ and observers’ attitudinal responses, and whether these are influenced by involvement with a product category or skill level in the game. The ability of gamers to recall placed brands was also examined. A laboratory experiment was employed with a small sample of sixty adult subjects drawn from an Australian east-coast university, some of whom were exposed to a console video game on a television set. The major finding of study one is that placements in a video game have no effect on gamers’ attitudes, but they are recalled. For stage two of the research, a field experiment was conducted with a large, random sample of 350 student respondents to investigate the effects on players of brand and product placements in handheld video games and advergames. The constructs of brand attitude and corporate image were again tested, along with several potential confounds. Consistent with the pilot, the results demonstrate that product placement in electronic games has no effect on players’ brand attitudes or corporate image, even when allowing for their involvement with the product category, skill level in the game, or skill level in relation to the medium. Age and gender also have no impact. However, the more interactive a player perceives the game to be, the higher their attitude to the placed brand and corporate image of the brand manufacturer.
In other words, when controlling for perceived interactivity, players experienced more favourable attitudes, but the effect was so weak it probably lacks practical significance. It is suggested that this result can be explained by the existence of excitation transfer, rather than any processing of placed brands. The current research provides strong, empirical evidence that brand and product placements in games do not produce strong attitudinal responses. It appears that the nature of the game medium, game playing experience and product placement impose constraints on gamer motivation, opportunity and ability to process these messages, thereby precluding their impact on attitude to the brand and corporate image. Since this is the first study to investigate the ability of video game and advergame placements to facilitate these deeper consumer responses, further research across different contexts is warranted. Nevertheless, the findings have important theoretical and managerial implications. This investigation makes a number of valuable contributions. First, it is relevant to current marketing practice and presents findings that can help guide promotional strategy decisions. It also presents a comprehensive review of the games industry and associated activities in the marketplace, relevant for marketing practitioners. Theoretically, it contributes new knowledge concerning product placement, including how it should be defined, its classification within the existing communications framework, its dimensions and effects. This is extended to include brand-centred entertainment. The thesis also presents the most comprehensive analysis available in the literature of how placements appear in games. In the consumer behaviour discipline, the research builds on theory concerning attitude formation, through application of MacInnis and Jaworski’s (1989) Integrative Attitude Formation Model. With regards to the games literature, the thesis provides a structured framework for the comparison of games with different media types; it advances understanding of the game medium, its characteristics and the game playing experience; and provides insight into console and handheld games specifically, as well as interactive environments generally. This study is the first to test the effects of interactivity in a game environment, and presents a modified scale that can be used as part of future research. Methodologically, it addresses the limitations of prior research through execution of a field experiment and observation with a large sample, making this the largest study of product placement in games available in the literature. Finally, the current thesis offers comprehensive recommendations that will provide structure and direction for future study in this important field.
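As a rough illustration of the kind of covariate-adjusted comparison described (testing placement exposure against brand attitude while controlling for perceived interactivity), the sketch below fits an ordinary least squares model on simulated data. The variable names, scales and data are assumptions made for illustration; this is not the thesis's dataset or its exact analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data for illustration only -- not the thesis sample.
rng = np.random.default_rng(0)
n = 350
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),            # 1 = played a game containing placements
    "interactivity": rng.normal(4.0, 1.0, n),    # perceived interactivity (e.g. 1-7 scale)
})
# Attitude driven by interactivity, not by exposure (mirrors the reported pattern).
df["brand_attitude"] = 3.5 + 0.3 * df["interactivity"] + rng.normal(0, 1.0, n)

# Brand attitude regressed on placement exposure, with perceived
# interactivity entered as a covariate.
model = smf.ols("brand_attitude ~ C(exposed) + interactivity", data=df).fit()
print(model.summary().tables[1])
```

In a model of this form, a non-significant exposure coefficient alongside a significant interactivity coefficient would correspond to the pattern of results reported above.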
Abstract:
Two perceptions of the marginality of home economics are widespread across educational and other contexts. One is that home economics and those who engage in its pedagogy are inevitably marginalised within patriarchal relations in education and culture. This is because home economics is characterised as women's knowledge, for the private domain of the home. The other perception is that only orthodox epistemological frameworks of inquiry should be used to interrogate this state of affairs. These perceptions have prompted leading theorists in the field to call for non-essentialist approaches to research in order to re-think the thinking that has produced this cul-de-sac positioning of home economics as a body of knowledge and a site of teacher practice. This thesis takes up the challenge of working to locate a space outside the frame of modernist research theory and methods, recognising that this shift in epistemology is necessary to unsettle the idea that home economics is inevitably marginalised. The purpose of the study is to reconfigure how we have come to think about home economics teachers and the profession of home economics as a site of cultural practice, in order to think it otherwise (Lather, 1991). This is done by exploring how the culture of home economics is being contested from within. To do so, the thesis uses a 'posthumanist' approach, which rejects the conception of the individual as a unitary and fixed entity, viewing the individual instead as a subject in process, shaped by desires and language which are not necessarily consciously determined. This posthumanist project focuses attention on pedagogical body subjects as the 'unsaid' of home economics research. It works to transcend the modernist dualism of mind/body, and other binaries central to modernist work, including private/public, male/female, paid/unpaid, and valued/unvalued. In so doing, it refuses the simple margin/centre geometry so characteristic of current perceptions of home economics itself. Three studies make up this work. Studies one and two serve to document the disciplined body of home economics knowledge, the governance of which works towards normalisation of the 'proper' home economics teacher. The analysis of these accounts of home economics teachers by home economics teachers reveals that home economics teachers are 'skilled' yet they 'suffer' for their profession. Further, home economics knowledge is seen to be complicit in reinforcing the traditional roles of masculinity and femininity, thereby reinforcing heterosexual normativity which is central to patriarchal society. The third study looks to four 'atypical' subjects who defy the category of 'proper' and 'normal' home economics teacher. These 'atypical' bodies are 'skilled' but fiercely reject the label of 'suffering'. The discussion of the studies is a feminist poststructural account, using Russo's (1994) notion of the grotesque body, which is emergent from Bakhtin's (1968) theory of the carnivalesque. It draws on the 'shreds' of home economics pedagogy, scrutinising them for their subversive, transformative potential. In this analysis, the giving and taking of pleasure and fun in the home economics classroom presents moments of surprise and of carnival. Foucault's notion of the construction of the ethical individual shows these 'atypical' bodies to be 'immoderate' yet striving hard to be 'continent' body subjects.
This research captures moments of transgression which suggest that transformative moments are already embodied in the pedagogical practices of home economics teachers, and these can be 'seen' when re-looking through postmodernist lenses. Hence, the cultural practices of home economics as inevitably marginalised are being contested from within. Until now, home economics as a lived culture has failed to recognise possibilities for reconstructing its own field beyond the confines of modernity. This research is an example of how to think about home economics teachers and the profession as a reconfigured cultural practice. Future research about home economics as a body of knowledge and a site of teacher practice need not retell a simple story of oppression. Using postmodernist epistemologies is one way to provide opportunities for new ways of looking.
Abstract:
This thesis is a problematisation of the teaching of art to young children. To problematise a domain of social endeavour is, in Michel Foucault's terms, to ask how we come to believe that "something ... can and must be thought" (Foucault, 1985:7). The aim is to document what counts (i.e., what is sayable, thinkable, feelable) as proper art teaching in Queensland at this point of historical time. In this sense, the thesis is a departure from more recognisable research on 'more effective' teaching, including critical studies of art teaching and early childhood teaching. It treats 'good teaching' as an effect of moral training made possible through disciplinary discourses organised around certain epistemic rules at a particular place and time. There are four key tasks accomplished within the thesis. The first is to describe an event which is not easily resolved by means of orthodox theories or explanations, either liberal-humanist or critical ones. The second is to indicate how poststructuralist understandings of the self and social practice enable fresh engagements with uneasy pedagogical moments. What follows this discussion is the documentation of an empirical investigation that was made into texts generated by early childhood teachers, artists and parents about what constitutes 'good practice' in art teaching. Twenty-two participants produced text to tell and re-tell the meaning of 'proper' art education, from different subject positions. Rather than attempting to capture 'typical' representations of art education in the early years, a pool of 'exemplary' teachers, artists and parents was chosen, using "purposeful sampling", and from this pool, three videos were filmed and later discussed by the audience of participants. The fourth aspect of the thesis involves developing a means of analysing these texts in such a way as to allow a 're-description' of the field of art teaching by attempting to foreground the epistemic rules through which such teacher-generated texts come to count as true, i.e., as propriety in art pedagogy. This analysis drew on Donna Haraway's (1995) understanding of 'ironic' categorisation to hold the tensions within the propositions inside the categories of analysis rather than setting these up as discursive oppositions. The analysis is therefore ironic in the sense that Richard Rorty (1989) understands the term to apply to social scientific research. Three 'ironic' categories were argued to inform the discursive construction of 'proper' art teaching. It is argued that a teacher should (a) Teach without teaching; (b) Manufacture the natural; and (c) Train for creativity. These ironic categories work to undo modernist assumptions about theory/practice gaps and finding a 'balance' between oppositional binary terms. They were produced through a discourse theoretical reading of the texts generated by the participants in the study, texts that these same individuals use as a means of discipline and self-training as they work to teach properly. In arguing the usefulness of such approaches to empirical data analysis, the thesis challenges early childhood research in arts education, in relation to its capacity to deal with ambiguity and to acknowledge contradiction in the work of teachers and in their explanations for what they do. It works as a challenge at a range of levels - at the level of theorising, of method and of analysis.
In opening up thinking about normalised categories, and questioning traditional Western philosophy and the grand narratives of early childhood art pedagogy, it makes a space for re-thinking art pedagogy as "a game of truth and error" (Foucault, 1985). In doing so, it opens up a space for thinking how art education might be otherwise.
Abstract:
Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge. The arguments are the poor comprehensibility of the learned ANN, and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explanation of the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) provision of explanatory capability by mapping the general conceptual knowledge that is learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology GYAN is developed and evaluated for the task of extracting knowledge from the trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge based reasoner. The performance of GYAN is demonstrated using a number of real world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity, and (2) a concise explanation is given (in terms of rules, facts and predicates activated in a reasoning episode) as to why a particular instance is being classified into a certain category.
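The GYAN methodology is not reproduced here, but the general idea of extracting human-readable rules from a trained network and scoring them for accuracy (against the true labels) and fidelity (against the network's own outputs) can be sketched with a generic decision-tree surrogate. The example below is an illustrative stand-in on a standard toy dataset, not the GYAN algorithm.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Train a small ANN, then fit a surrogate tree to its predictions so that
# the tree's if-then rules mimic the network's decision process.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
ann_pred_tr, ann_pred_te = ann.predict(X_tr), ann.predict(X_te)

rules = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, ann_pred_tr)

print("ANN accuracy:  ", accuracy_score(y_te, ann_pred_te))
print("Rule accuracy: ", accuracy_score(y_te, rules.predict(X_te)))          # vs true labels
print("Rule fidelity: ", accuracy_score(ann_pred_te, rules.predict(X_te)))   # vs ANN outputs
print(export_text(rules))   # human-readable if-then rules
```

The fidelity score here corresponds to the idea of a symbolic interpretation that reproduces the overall behaviour of the network, while accuracy measures how well those rules perform on the original task.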
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization. The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
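A minimal sketch of the pipeline discussed above (a TF-IDF representation, a clustering step evaluated extrinsically against a 'ground truth', and an SVM classifier) is given below. The six-document corpus and its topic labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Tiny illustrative corpus with "ground truth" topic labels (0 = space, 1 = cooking).
docs = [
    "the rocket launch was delayed by weather",
    "astronauts aboard the station ran experiments",
    "the telescope captured images of a distant galaxy",
    "simmer the sauce and season with basil",
    "knead the dough and let it rise for an hour",
    "roast the vegetables with olive oil and garlic",
]
truth = [0, 0, 0, 1, 1, 1]

# Representation: TF-IDF vectors.
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Clustering, then extrinsic evaluation against the ground truth labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("ARI:", adjusted_rand_score(truth, clusters))
print("NMI:", normalized_mutual_info_score(truth, clusters))

# Classification with a linear SVM on the same representation.
svm = LinearSVC().fit(X, truth)
print("predicted topic:", svm.predict(vec.transform(["bake the bread until golden"])))
```

Adjusted Rand Index and normalised mutual information are examples of the extrinsic measures mentioned above: both compare a clustering solution against human-assigned labels, so they inherit whatever subjectivity those labels contain.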
Abstract:
Many randomised controlled trials (RCTs) have been conducted using Piper methysticum (kava); however, no qualitative research exploring the experience of taking kava during a clinical trial has previously been reported. ---------- Patients and methods: A qualitative research component (in the form of semi-structured and open-ended written questions) was incorporated into an RCT to explore the experiences of those participating in a clinical trial of kava. The written questions were provided to participants at weeks 2 and 3 (after randomisation, after each controlled phase). The researcher and participants were blinded as to whether they were taking kava or placebo. Two open-ended questions were posed to elucidate their experiences from taking either kava or placebo. Thematic analysis was undertaken and researcher triangulation employed to ensure analytical rigour. Key themes after the kava phases were a reduction in anxiety and stress, and calming or relaxing mental effects. Other themes related to improvement in sleep and in somatic anxiety symptoms. ---------- Results: Kava use did not cause any serious adverse reactions, although a few respondents reported nausea or other gastrointestinal side effects. This represents the first documented qualitative investigation of the experience of taking kava during a clinical trial. The primary themes involved anxiolytic and calming effects, with only a minor theme reflecting side effects. Our exploratory qualitative data were consistent with the significant quantitative results revealed in the study and provide additional support to suggest the trial results did not exclude any important positive or negative effects (at least as experienced by the trial participants).
Abstract:
Objective: The aim of the present study was to investigate whether parent report of family resilience predicted children’s disaster-induced post-traumatic stress disorder (PTSD) and general emotional symptoms, independent of a broad range of variables including event-related factors, previous child mental illness and social connectedness. ---------- Methods: A total of 568 children (mean age = 10.2 years, SD = 1.3) who attended public primary schools, were screened 3 months after Cyclone Larry devastated the Innisfail region of North Queensland. Measures included parent report on the Family Resilience Measure and Strengths and Difficulties Questionnaire (SDQ)–emotional subscale and child report on the PTSD Reaction Index, measures of event exposure and social connectedness. ---------- Results: Sixty-four students (11.3%) were in the severe–very severe PTSD category and 53 families (28.6%) scored in the poor family resilience range. A lower family resilience score was associated with child emotional problems on the SDQ and longer duration of previous child mental health difficulties, but not disaster-induced child PTSD or child threat perception on either bivariate analysis, or as a main or moderator variable on multivariate analysis (main effect: adjusted odds ratio (ORadj) = 0.57, 95% confidence interval (CI) = 0.13–2.44). Similarly, previous mental illness was not a significant predictor of child PTSD in the multivariate model (ORadj = 0.75, 95%CI = 0.16–3.61). ---------- Conclusion: In this post-disaster sample children with existing mental health problems and those of low-resilience families were not at elevated risk of PTSD. The possibility that the aetiological model of disaster-induced child PTSD may differ from usual child and adolescent conceptualizations is discussed.
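As an illustration of how adjusted odds ratios and confidence intervals of the kind reported above are typically obtained, the sketch below fits a multivariate logistic regression on simulated data and exponentiates the coefficients. The predictors and data are invented for illustration; this is not the cyclone study dataset or its exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data for illustration only -- not the cyclone study sample.
rng = np.random.default_rng(1)
n = 568
df = pd.DataFrame({
    "low_resilience": rng.integers(0, 2, n),
    "prior_mental_illness": rng.integers(0, 2, n),
    "exposure": rng.normal(0, 1, n),
})
# PTSD driven by event exposure only in this simulation.
logit_p = -2.0 + 0.8 * df["exposure"]
df["ptsd"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariate logistic model; exponentiated coefficients are adjusted ORs.
model = smf.logit("ptsd ~ low_resilience + prior_mental_illness + exposure", data=df).fit(disp=0)
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([ors.rename("OR_adj"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

An adjusted odds ratio near 1 with a confidence interval spanning 1, as reported for family resilience and prior mental illness above, indicates no detectable independent association with the outcome once the other covariates are taken into account.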
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet, how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, then it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input was segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the question of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog'), rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').