566 results for Autobiography as Topic
Abstract:
This paper reports on a study that explored the views and attitudes of family members towards the sexual expression of residents with dementia in residential aged care facilities in two Australian states. Recruitment was challenging and only seven family members agreed to an interview on this topic. Data were analysed using a constant comparative method. Families were generally supportive of residents’ rights to sexual expression, but approved of only some types of behaviour. There was an acknowledgement that responding to residents’ sexuality was difficult for staff, and many families believed that they should be kept informed of their relative’s sexual behaviours and, moreover, be involved in decision-making about them. Findings suggest the need for family education and for a larger study to better understand the views and motivations of family carers and how these might impact on the sexual expression of older people with dementia living in residential aged care.
Abstract:
Auto/biographical documentaries ask audiences to take a ‘leap of faith’: they cannot offer any real ‘proof’ of the people and events they claim to document, other than the film-maker’s word that this is what happened. With only memory and history, seen through the distorting lens of time, ‘the authenticity of experience functions as a receding horizon of truth in which memory and testimony are articulated as modes of salvage’. Orchids: My Intersex Adventure follows a salvaging of the film-maker’s own life events and experiences of being born with an intersex condition and, via the filming and editing process, revolves around the core question: who am I? From this transformative creative documentary practice evolves a new way of embodying experience and ‘seeing’, playfully dubbed here the ‘intersex gaze’.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted in response to this changing information. While many of the datasets collected and analysed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimise big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of them need to be placed immediately in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme among these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customised depending on the project stakeholders.
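The content-analysis and user-profiling filtering described in the first paper could be sketched roughly as follows. This is an illustrative toy, not the panel's actual system: the keyword weights, account names, and threshold are all invented for the example.

```python
# Hypothetical sketch of crisis-tweet triage: score each incoming tweet by
# topic/urgency keywords (content analysis) plus a boost for known
# authoritative authors (user profiling), and surface only high scorers.
CRISIS_KEYWORDS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "help": 1.0}
AUTHORITATIVE_USERS = {"qps_media", "bom_au"}  # invented example accounts

def score_tweet(text: str, author: str) -> float:
    """Combine keyword relevance with a simple user-profile boost."""
    words = text.lower().split()
    content_score = sum(CRISIS_KEYWORDS.get(w, 0.0) for w in words)
    authority_boost = 2.0 if author in AUTHORITATIVE_USERS else 0.0
    return content_score + authority_boost

def filter_for_responders(tweets, threshold=3.0):
    """Keep only tweets urgent enough to place in front of responders."""
    return [t for t in tweets if score_tweet(t["text"], t["author"]) >= threshold]

tweets = [
    {"text": "Family trapped on roof please help", "author": "resident42"},
    {"text": "Nice sunset tonight", "author": "resident42"},
]
urgent = filter_for_responders(tweets)  # only the first tweet survives
```

In a live deployment the keyword table itself would be refined iteratively, as the abstract notes, so that each event's surviving tweets feed the next round of collection.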
Abstract:
Satirical poem on social media, literary reviews and memory
Abstract:
Stigmergy is a biological term used when discussing a subset of insect swarm behaviour, describing the apparent organisation seen during insects’ activities. Stigmergy describes a communication mechanism based on environment-mediated signals which trigger responses among the insects. The phenomenon is demonstrated in the behaviour of ants during their food-gathering process, when they follow pheromone trails: the pheromones are a form of environment-mediated communication. What is interesting about this phenomenon is that highly organised societies are achieved without an apparent management structure. Stigmergy is also observed in human environments, both natural and engineered. It is implicit in the Web, where sites provide a virtual environment supporting coordinative contributions. Researchers in varying disciplines appreciate the power of this phenomenon and have studied how to exploit it. As stigmergy becomes more widely researched, we see its definition mutate as papers citing the original work become referenced themselves. Each paper interprets these works in ways very specific to the research being conducted. Our own research aims to better understand what improves the collaborative function of a Web site when exploiting the phenomenon. However, when researching stigmergy to develop our understanding, we discovered the lack of a standardised and abstract model of the phenomenon. Papers frequently cite the same generic descriptions before becoming intimately focused on formal specifications of an algorithm, or on esoteric discussions regarding sub-facets of the topic. None provides a holistic, macro-level view to model and standardise the nomenclature. This paper provides a content analysis of influential literature, documenting the numerous theoretical and experimental papers that have focused on stigmergy. We establish that stigmergy is a phenomenon that transcends the insect world and is more than just a metaphor when applied to the human world.
We present, from our own research, our general theory and abstract model of the semantics of stigma in stigmergy. We hope our model will distil the nuances of the phenomenon into a useful road-map and standardise vocabulary that we see becoming confused and divergent. Furthermore, this paper documents the analysis on which we base our next paper: Special Theory of Stigmergy: A Design Pattern for Web 2.0 Collaboration.
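The pheromone-trail mechanism the abstract uses as its canonical example can be captured in a few lines: agents deposit a signal in a shared environment, the signal evaporates, and later agents bias their choices toward stronger trails, producing coordination with no central controller. This is a generic textbook-style sketch, not the paper's model; path names, reinforcement amounts, and the evaporation rate are illustrative assumptions.

```python
# Minimal stigmergy sketch: two paths, shared pheromone state, evaporation,
# and path-length-dependent reinforcement. Coordination emerges from the
# environment alone, with no agent-to-agent messaging.
import random

random.seed(1)
trails = {"short_path": 1.0, "long_path": 1.0}  # the shared environment
EVAPORATION = 0.9

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    total = sum(trails.values())
    r = random.uniform(0, total)
    for path, level in trails.items():
        r -= level
        if r <= 0:
            return path
    return path  # floating-point fallback: last path

for _ in range(200):
    chosen = choose_path()
    for p in trails:
        trails[p] *= EVAPORATION          # all pheromone evaporates
    # the shorter path is traversed faster, so it is reinforced more per trip
    trails[chosen] += 2.0 if chosen == "short_path" else 1.0
```

The positive feedback loop (stronger trail, more traffic, stronger trail) is exactly the organisation-without-management the abstract highlights.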
Abstract:
Travel time estimation and prediction on motorways has long been a topic of research. Prediction modelling generally assumes that the estimation is perfect; yet no matter how good the prediction modelling is, errors in estimation can significantly deteriorate the accuracy and reliability of the prediction. Models have been proposed to estimate travel time from loop detector data. Generally, detectors are closely spaced (say 500 m) and travel time can be estimated accurately. However, detectors are not always perfect, and even during normal running conditions a few detectors malfunction, increasing the spacing between the functional detectors. Under such conditions, the error in travel time estimation is significantly large and generally unacceptable. This research evaluates the in-practice travel time estimation model during different traffic conditions. It is observed that the existing models fail to accurately estimate travel time with large detector spacing and during congestion shoulder periods. Addressing this issue, an innovative Hybrid model that considers only loop data for travel time estimation is proposed. The model is tested using simulation and is validated with real Bluetooth data from the Pacific Motorway, Brisbane. Results indicate that during non-free-flow conditions and at larger detector spacings the Hybrid model provides significant improvement in the accuracy of travel time estimation.
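To make the estimation problem concrete, the conventional approach the abstract says degrades with spacing can be sketched as a piecewise spot-speed calculation: each loop detector reports a speed, and link travel time is accumulated segment by segment. This is a generic baseline sketch, not the paper's Hybrid model; detector positions and speeds are made-up example values.

```python
# Baseline loop-detector travel time estimate: assume each detector's spot
# speed holds over half of each adjacent segment. The wider the spacing
# between functional detectors, the cruder this assumption becomes.
def estimate_travel_time(detectors):
    """detectors: list of (position_m, speed_m_per_s), sorted by position."""
    total = 0.0
    for (x1, v1), (x2, v2) in zip(detectors, detectors[1:]):
        seg = x2 - x1
        # half the segment at the upstream speed, half at the downstream speed
        total += (seg / 2) / v1 + (seg / 2) / v2
    return total

# 2 km link, detectors every 500 m; a mid-link slowdown to 20 m/s
dets = [(0, 25.0), (500, 25.0), (1000, 20.0), (1500, 20.0), (2000, 25.0)]
t = estimate_travel_time(dets)  # 90.0 seconds
```

If the two middle detectors fail, the same formula interpolates across 1.5 km and misses the slowdown entirely, which is the failure mode the Hybrid model targets.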
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users’ information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. However, in reality users’ interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and has been widely utilised in fields such as machine learning and information retrieval, but its effectiveness in information filtering has not been so well explored. Patterns are generally thought to be more discriminative than single terms for describing documents. However, the enormous number of discovered patterns hinders their effective and efficient use in real applications; selecting the most discriminative and representative patterns from this huge number therefore becomes crucial. To deal with the above-mentioned limitations and problems, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and organised in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate document relevance to the user’s information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
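The matching step can be illustrated with a toy: represent each topic by weighted term patterns, score a document by its best-matching pattern per topic, and keep documents above a threshold. The topics, patterns, and weights below are invented for illustration; in the MPBTM itself they would come from LDA topics and pattern mining over the collection, not be hand-written.

```python
# Toy pattern-based relevance scoring: for each topic, find the patterns
# (term sets) contained in the document and take the best-weighted match,
# then sum the per-topic contributions into a document relevance score.
topics = {
    "topic_finance": {frozenset({"stock", "market"}): 0.8, frozenset({"bank"}): 0.4},
    "topic_sport":   {frozenset({"match", "score"}): 0.7},
}

def relevance(doc_terms, topics):
    """Sum, over topics, the weight of the best pattern matched in the doc."""
    terms = set(doc_terms)
    total = 0.0
    for patterns in topics.values():
        matched = [w for pattern, w in patterns.items() if pattern <= terms]
        if matched:
            total += max(matched)  # only the maximum matched pattern counts
    return total

doc = ["stock", "market", "rally", "bank"]
r = relevance(doc, topics)  # 0.8: finance's best match; no sport match
```

Taking only the maximum matched pattern per topic is what keeps the enormous pattern set from swamping the score, which mirrors the motivation the abstract gives for Maximum Matched Patterns.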
Abstract:
Contact lenses are a successful and popular means of correcting refractive error and are worn by just under 700,000 Australians and approximately 125 million people worldwide. The most serious complication of contact lens wear is microbial keratitis, a potentially sight-threatening corneal infection most often caused by bacteria. Gram-negative bacteria, in particular Pseudomonas species, account for the majority of severe bacterial infections. Pathogens such as fungi or amoebae, which feature less often, are associated with significant morbidity. These unusual pathogens have come into the spotlight in recent times with an apparent association with specific lens cleaning solutions...
Abstract:
How to live sustainably is a topic of local, national and international importance. The Australian National Curriculum (ACARA, 2011) identifies sustainability as a cross-disciplinary strand, obligating teachers to build sustainability into their pedagogical practices. In early childhood education, the Early Years Learning Framework (2009) and, more recently, the National Quality Framework (2011) provide impetus for early childhood education for sustainability (ECEfS). This article discusses ECEfS, but first it addresses climate change, putting it into a sustainability perspective.
Abstract:
With the ever-increasing emphasis on ocular disease recognition in the practice of optometry, and especially on anterior eye disease management and therapeutics, any book addressing such issues is bound to have a captive audience. This second edition of Anterior Eye Disease and Therapeutics A–Z provides succinct yet comprehensive coverage of the topic.
Abstract:
Intrinsic or acquired resistance to chemotherapeutic agents is a common phenomenon and a major challenge in the treatment of cancer patients. Chemoresistance is defined by a complex network of factors including multi-drug resistance proteins, reduced cellular uptake of the drug, enhanced DNA repair, intracellular drug inactivation, and evasion of apoptosis. Pre-clinical models have demonstrated that many chemotherapy drugs, such as platinum-based agents, anthracyclines, and taxanes, promote the activation of the NF-κB pathway. NF-κB is a key transcription factor, playing a role in the development and progression of cancer and chemoresistance through the activation of a multitude of mediators, including anti-apoptotic genes. Consequently, NF-κB has emerged as a promising anti-cancer target. Here, we describe the role of NF-κB in cancer and in the development of resistance, particularly to cisplatin. Additionally, the potential benefits and disadvantages of targeting NF-κB signalling by pharmacological intervention will be addressed.
Abstract:
Genomic instability underlies the transformation of host cells toward malignancy, promotes development of invasion and metastasis and shapes the response of established cancer to treatment. In this review, we discuss recent advances in our understanding of genomic stability in squamous cell carcinoma of the head and neck (HNSCC), with an emphasis on DNA repair pathways. HNSCC is characterized by distinct profiles in genome stability between similarly staged cancers that are reflected in risk, treatment response and outcomes. Defective DNA repair generates chromosomal derangement that can cause subsequent alterations in gene expression, and is a hallmark of progression toward carcinoma. Variable functionality of an increasing spectrum of repair gene polymorphisms is associated with increased cancer risk, while aetiological factors such as human papillomavirus, tobacco and alcohol induce significantly different behaviour in induced malignancy, underpinned by differences in genomic stability. Targeted inhibition of signalling receptors has proven to be a clinically-validated therapy, and protein expression of other DNA repair and signalling molecules associated with cancer behaviour could potentially provide a more refined clinical model for prognosis and treatment prediction. Development and expansion of current genomic stability models is furthering our understanding of HNSCC pathophysiology and uncovering new, promising treatment strategies. © 2013 Glenn Jenkins et al.
Abstract:
Invited presentation on my book Architecture for a Free Subjectivity. In March 1982, Skyline, the serial of the Institute for Architecture and Urban Studies (IAUS), published the landmark interview between Paul Rabinow, an American anthropologist, and Michel Foucault, which would only appear two years later under the title “Space, Knowledge, and Power” in Rabinow’s edited book The Foucault Reader. Foucault said that in the spatialization of knowledge and power beginning in the 18th century, architecture is not a signifier or metaphor for power; it is rather the “technique for practising social organization.” The role of the IAUS in the architectural dissemination of Foucault’s ideas on the subject and space in the North American academy – such as the concept of “heterotopia,” and Foucault’s writing on surveillance and Jeremy Bentham’s Panopticon, subsequently analysed by Georges Teyssot, who was teaching at the Venice School – is well known. Teyssot’s work is part of the historical canalization of Foucauldianism, and of French subjectivity more broadly, along its dizzying path, via Italy, to American architecture schools, where it solidified in the 1980s paradigm that would come to be known as American architecture theory. Foucault had been writing on incarceration and prisons since the 1970s. (In the 1975 lectures he said “architecture was responsible for the invention of madness.”) But this work was not properly incorporated into architectural discussion until the early ’80s. What is not immediately apparent, and what this history suggests to me, is that subjectivity was not a marginal topic within “theory”, but was perhaps a platform and entry point for architecture theory. One of the ideas that I am working on is that “theory” can be viewed, historically, as the making of architectural subjectivity, something that can be traced back to the Frankfurt School critique, which begins with the modern subject...
Abstract:
In this paper I use the case study of Darren, derived from two interviews in a research study of racism in the city of Stoke, UK (Gadd, Dixon and Jefferson 2005; Gadd and Dixon 2011), to explore how best to approach the topic of hate-motivated violence. This entails discussing the relationships among racism (the original object of study), hate-motivated violence (the more general term) and prejudices of various sorts. Because that discussion, I argue, justifies a psychoanalytic starting point, and since violence has become, almost quintessentially, masculine, this leads on to an exploration of what can be learnt from psychoanalysis about the relations among sexuality, masculinity, hatred and violence. This involves brief discussions of some key psychoanalytic terms, but only what is needed to enable sense to be made of my chosen case, which I shall then interrogate using these psychoanalytic ideas, focused on understanding the origins and nature of Darren’s hatred.
Abstract:
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. CO concentrations from the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, have been used to test the effectiveness of this method. Hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models; similarly, daily CO concentrations have been predicted from the same four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. It is concluded that the errors decrease after size reduction, and the coefficient of determination increases from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required lower computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
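The two-stage pipeline (data selection, then prediction) can be sketched with deliberately simple stand-ins: ranking input features by absolute correlation with the CO target plays the role PLS plays as a data-selection tool, and a nearest-neighbour lookup stands in for the SVM predictor. Feature names and all data values below are synthetic; this is not the paper's implementation.

```python
# Hedged sketch of a "select features, then predict" pipeline.
# Stage 1 (stand-in for PLS): keep the k features most correlated with CO.
# Stage 2 (stand-in for SVM): 1-nearest-neighbour on the reduced features.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(X, y, k):
    """Rank feature columns by |correlation| with y; keep the top k indices."""
    cols = list(zip(*X))
    ranked = sorted(range(len(cols)), key=lambda j: -abs(pearson(cols[j], y)))
    return ranked[:k]

def predict_nn(X, y, keep, query):
    """Predict y for `query` from its nearest training row (reduced features)."""
    def dist(row):
        return sum((row[j] - query[j]) ** 2 for j in keep)
    best = min(range(len(X)), key=lambda i: dist(X[i]))
    return y[best]

# synthetic hourly records: [traffic, temperature, humidity] -> CO level
X = [[10, 5, 60], [20, 6, 55], [30, 7, 50], [40, 8, 45]]
y = [1.0, 2.0, 3.0, 4.0]
keep = select_features(X, y, k=2)
co_pred = predict_nn(X, y, keep, query=[32, 7, 49])
```

Dropping weakly informative inputs before fitting is what gives the hybrid both its accuracy gain and its speed gain in the abstract's findings: the predictor works in a smaller, less noisy input space.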