462 results for Social investigation


Relevance: 20.00%

Abstract:

Patients with chest discomfort or other symptoms suggestive of acute coronary syndrome (ACS) are among the most common presentations to Emergency Departments (EDs). While the recognition of patients at high risk of ACS has improved steadily, identifying the majority of chest pain presentations that fall into the low-risk group remains a challenge. Research in this area needs to be transparent, robust, applicable to all hospitals from large tertiary centres to rural and remote sites, and to allow direct comparison between studies with minimal patient spectrum bias. Achieving this requires a standardised research framework using a common language for data definitions. The aim was to create a common framework for a standardised data definitions set that would allow maximum value when extrapolating research findings both within Australasian ED practice and across similar populations worldwide. A comprehensive data definitions set for the investigation of non-traumatic chest pain patients with possible ACS was therefore developed, specifically for use in the ED setting. This standardised data definitions set will facilitate ‘knowledge translation’ by allowing extrapolation of useful findings into the real-life practice of emergency medicine.
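
The abstract does not reproduce the data definitions themselves. Purely as an illustration of what a standardised, machine-readable data element set can look like, here is a minimal sketch with hypothetical field names (not the published definitions):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical, illustrative data elements only; the published definitions set
# specifies the authoritative element names, codes and permitted values.

class PainCharacter(Enum):
    CRUSHING = "crushing"
    PRESSURE = "pressure"
    SHARP = "sharp"
    OTHER = "other"

@dataclass
class ChestPainPresentation:
    presentation_id: str
    age_years: int
    sex: str                                   # e.g. "female", "male"
    onset_minutes_before_arrival: int
    pain_character: PainCharacter
    known_ischaemic_heart_disease: bool
    ecg_ischaemic_changes: bool
    initial_troponin_ng_per_l: Optional[float] = None  # None if not yet resulted

# Shared definitions allow records from different EDs to be pooled directly.
example = ChestPainPresentation(
    presentation_id="ED-0001",
    age_years=58,
    sex="female",
    onset_minutes_before_arrival=90,
    pain_character=PainCharacter.PRESSURE,
    known_ischaemic_heart_disease=False,
    ecg_ischaemic_changes=False,
    initial_troponin_ng_per_l=12.0,
)
```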

Relevance: 20.00%

Abstract:

Understanding the future development of interaction design as it applies to learning and training scenarios is crucial to effective curriculum development and to the appropriate application of social and mobile communication technologies. As Attewell and Saville-Smith (2004) recognised, the use of mobile communication devices to improve literacy and numeracy is an attractive prospect for young people in the age group typical of undergraduate students. Further, with the growing penetration of broadband internet access, the ubiquity of wireless access in educational locations, the rise of ultra-mobile portable computers and the proliferation of social software applications in educational contexts, there is a growing number of channels for facilitating learning. Nevertheless, there has been insufficient consideration of the interaction design issues that determine how effectively such learning is facilitated. This paper contends that mobile and social learning must be designed to accommodate the benefits of these diverse channels for interaction. In addition, suitable testing processes are needed to ensure that participants in mobile and social learning contribute effectively and maximise their learning. Through case studies in mobile and social learning, the paper attempts to demonstrate how considered interaction design techniques can improve the effectiveness of new learning channels.

Relevance: 20.00%

Abstract:

This review responds to Thinking Collaboratively about Peer-Review Process in Journal Article Publication by Kevin K. Kumashiro. Several authors critique and analyse Kumashiro's reflections on the challenges of publishing anti-oppressive research in educational journals.

Relevance: 20.00%

Abstract:

This paper reports on the opportunities for transformational learning experienced by a group of pre-service teachers who were engaged in service-learning as a pedagogical process with a focus on reflection. Critical social theory informed the design of the reflection process, as it enabled a move away from knowledge transmission toward knowledge transformation. The structured reflection log was designed to illustrate the critical social theory expectations of quality learning that teach students to think critically: ideology critique and utopian critique. Butin's lenses and a reflection framework informed by the work of Bain, Ballantyne, Mills and Lester were used in the design of the service-learning reflection log. The reported data provide evidence of transformational learning and highlight how the students critique their world and imagine how they could contribute to a better world in their work as beginning teachers.

Relevance: 20.00%

Abstract:

The literature identifies several models that describe inter-phase mass transfer, which is key to the emission process. While the emission process is complex and these models are more or less successful at predicting mass transfer rates, they identify three key variables for a system involving a liquid phase in contact with an air phase:

• a concentration (or partial pressure) gradient driving force;
• the fluid dynamic characteristics within the liquid and air phases; and
• the chemical properties of the individual components within the system.

In three applied research projects conducted prior to this study, samples collected with two well-known sampling devices resulted in very different odour emission rates, and the differences observed could not be adequately explained. It appeared likely, however, that the sample collection device had artefact effects on the emission of odorants, i.e. the sampling device appeared to alter the mass transfer process. This raised an obvious question: where two different emission rates are reported for a single source, differing only in the selection of sampling device, and a credible explanation for the difference cannot be provided, which emission rate is correct? This research project aimed to identify the factors that determine odour emission rates, the impact that the characteristics of a sampling device may exert on the key mass transfer variables, and ultimately the impact of the sampling device on the emission rate itself. To meet these objectives, a series of targeted reviews and laboratory and field investigations was conducted. Two widely used, representative devices were chosen to investigate the influence of various parameters on the emission process. These investigations provided insight into the odour emission process generally, and into the influence of the sampling device specifically.
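
The abstract names the three key variables without committing to a particular model. As a hedged illustration only, the sketch below uses the classic two-film formulation of inter-phase mass transfer, with made-up parameter values, to show how a driving force, film coefficients (fluid dynamics) and a Henry's law constant (chemical properties) combine into an emission flux:

```python
# Two-film model sketch of liquid-to-air mass transfer (illustrative only;
# the thesis does not commit to this particular model or to these values).

def emission_flux(c_liquid, c_gas, henry_dimensionless, k_liquid, k_gas):
    """Return the emission flux (g/m^2/s) of a volatile component.

    c_liquid            bulk liquid-phase concentration (g/m^3)
    c_gas               bulk gas-phase concentration (g/m^3)
    henry_dimensionless Henry's law constant (gas/liquid ratio at equilibrium)
    k_liquid, k_gas     film mass transfer coefficients (m/s), which depend on
                        the fluid dynamics on each side of the interface
    """
    # Overall coefficient: liquid-film and gas-film resistances in series.
    overall_k = 1.0 / (1.0 / k_liquid + 1.0 / (henry_dimensionless * k_gas))
    # Driving force: departure of the liquid from equilibrium with the gas.
    return overall_k * (c_liquid - c_gas / henry_dimensionless)

# A sampling hood that stills the air above the surface lowers k_gas, which
# for a gas-phase-controlled (low Henry constant) compound lowers the flux.
print(emission_flux(c_liquid=50.0, c_gas=0.1,
                    henry_dimensionless=1e-3, k_liquid=1e-5, k_gas=5e-3))
```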

Relevance: 20.00%

Abstract:

Using a feminist reflexive approach, this paper reports on interviews with single mothers in the Brisbane area about their experiences with food shopping and household food security. Preliminary findings suggest that most experience significant stress around the amount of money they have available for food. As the price of food and other costs of living increase, the only flexible budget item, groceries, is squeezed tighter. All of the women expressed a reluctance to ask strangers at agencies for help, relying instead on the support of family and friends to keep them food secure. Sometimes family and friends had no spare resources to help, or were not aware of the extent to which their friend or relative was struggling. The increased risks of poverty and food insecurity mean that many go without, as feeding the children takes precedence. The quality of their diets is variable, with many reporting that they aim for quantity rather than nutritional balance. Exhaustion and stress from being over-committed across three roles (mother, father and housekeeper) were self-identified as key factors leading to mental health conditions such as depression, burnout and breakdown. Female single-parent households are vulnerable to reductions in welfare benefits as children grow or child support changes. Current policy forces single parents out to work, but many can only manage part-time work for lower wages, are barely able to cope with this extra burden, and often resent the reduction in benefits it brings. Public perceptions, derision and the notions of choice surrounding single parenting leave the cohort divided and silent for fear of reprisals. My investigation raises issues about welfare policy that keeps benefits low, and about patriarchal power in the workplace, both of which can contribute to systemic poverty and to the widening of the gender gap in poverty. So far, analysis suggests that a better support system around community food security, including hands-on home help services, nutritional information, cooking classes, community gardening and other social capital building activities, is needed if these women are to avoid long-term health problems and better care for the next generation.

Relevance: 20.00%

Abstract:

An Approach with Vertical Guidance (APV) is an instrument approach procedure which provides horizontal and vertical guidance to a pilot on approach to landing in reduced visibility conditions. APV approaches can greatly reduce the safety risk to general aviation by improving the pilot's situational awareness. In particular, the incidence of Controlled Flight Into Terrain (CFIT), which has occurred in a number of fatal general aviation air crashes in Australia over the past decade, can be reduced. APV approaches can also improve general aviation operations. If implemented at Australian airports, APV approach procedures are expected to bring a cost saving of millions of dollars to the economy through fewer missed approaches and diversions, and an increased safety benefit.

The provision of accurate horizontal and vertical guidance is achievable using the Global Positioning System (GPS). Because aviation is a safety-of-life application, an aviation-certified GPS receiver must have integrity monitoring or augmentation to ensure that its navigation solution can be trusted. However, the difficulty of the current GPS satellite constellation alone meeting APV integrity requirements, the susceptibility of GPS to jamming or interference, and the potential shortcomings of proposed augmentation solutions for Australia such as the Ground-based Regional Augmentation System (GRAS) justify the investigation of Aircraft Based Augmentation Systems (ABAS) as an alternative integrity solution for general aviation. ABAS augments GPS with other sensors at the aircraft to help it meet the integrity requirements.

Typical ABAS designs assume high-quality inertial sensors to provide an accurate reference trajectory for Kalman filters. Unfortunately, high-quality inertial sensors are too expensive for general aviation. In contrast, this research investigates fusing GPS with a lower-cost Micro-Electro-Mechanical System (MEMS) Inertial Measurement Unit (IMU) and a mathematical model of aircraft dynamics, referred to in this thesis as an Aircraft Dynamic Model (ADM). The use of a model of aircraft dynamics in navigation systems has been studied before and shown to be useful, particularly for aiding inertial coasting or attitude determination. In contrast to those applications, this thesis investigates its use in ABAS.

This thesis presents an ABAS architecture concept which makes use of a MEMS IMU and an ADM, named the General Aviation GPS Integrity System (GAGIS) for convenience. GAGIS includes a GPS, a MEMS IMU, an ADM and a bank of Extended Kalman Filters (EKF), and uses the Normalized Solution Separation (NSS) method for fault detection. The GPS, IMU and ADM information is fused in a tightly-coupled configuration, with frequent GPS updates applied to correct the IMU and ADM. The use of both an IMU and an ADM allows for a number of possible configurations, three of which are investigated in this thesis: a GPS-IMU EKF, a GPS-ADM EKF and a GPS-IMU-ADM EKF. The integrity monitoring performance of these three architectures is compared against each other and against a stand-alone GPS architecture in a series of computer simulations of an APV approach, with typical GPS, IMU, ADM and environmental errors. The simulation results show the GPS integrity monitoring performance achievable by augmenting GPS with an ADM and a low-cost IMU for a general aviation aircraft on an APV approach.

A contribution to research is made in determining whether a low-cost IMU or an ADM can provide improved integrity monitoring performance over stand-alone GPS. It is found that a reduction of approximately 50% in protection levels is possible using the GPS-IMU EKF or GPS-ADM EKF, as well as faster detection of a slowly growing ramp fault on a GPS pseudorange measurement. A second contribution is made in determining how augmenting GPS with an ADM compares with using a low-cost IMU. Comparing the results for the GPS-ADM EKF against the GPS-IMU EKF shows that protection levels for the GPS-ADM EKF were only approximately 2% higher. This indicates that the GPS-ADM EKF may potentially replace the GPS-IMU EKF for integrity monitoring should the IMU ever fail; in this way the ADM may contribute to navigation system robustness and redundancy. To investigate this further, a third contribution is made in determining whether the ADM can function as an IMU replacement to improve navigation system redundancy, by investigating the case of three IMU accelerometers failing. It is found that the failed IMU measurements may be supplemented by the ADM and adequate integrity monitoring performance achieved. Besides treating the IMU and ADM separately, as in the GPS-IMU EKF and GPS-ADM EKF, a fourth contribution is made in investigating the possibility of fusing the IMU and ADM information together to achieve greater performance than either alone. This is investigated using the GPS-IMU-ADM EKF. It is found that the GPS-IMU-ADM EKF can achieve protection levels approximately 3% lower in the horizontal and 6% lower in the vertical than a GPS-IMU EKF; however, this small improvement may not justify the complexity of fusing the IMU with an ADM in practical systems.

Affordable ABAS in general aviation may enhance existing GPS-only fault detection solutions or help overcome outages in augmentation systems such as GRAS. Countries such as Australia, which currently do not have an augmentation solution for general aviation, could especially benefit from the economic savings and safety benefits of satellite navigation-based APV approaches.
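
The abstract names the Normalized Solution Separation (NSS) method but gives no equations. Purely as a hedged sketch of the general solution-separation idea (a snapshot least-squares version rather than the thesis's EKF bank, with placeholder noise weights and threshold), the fault test compares the all-satellite solution with each sub-solution that excludes one satellite:

```python
import numpy as np

# Illustrative sketch of solution-separation fault detection. The thesis uses
# a normalised solution separation test inside a bank of EKFs; the states,
# weights and threshold below are placeholders, not the thesis's values.

def weighted_least_squares(H, z, W):
    """Snapshot position/clock estimate from linearised pseudorange residuals z."""
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def solution_separation_test(H, z, W, threshold=5.0):
    """Flag a possible fault when any 'one satellite removed' sub-solution
    separates too far (in a covariance-normalised sense) from the full solution."""
    n_sats = H.shape[0]
    x_full = weighted_least_squares(H, z, W)
    P_full = np.linalg.inv(H.T @ W @ H)
    alerts = []
    for k in range(n_sats):
        keep = [i for i in range(n_sats) if i != k]
        Hk, zk, Wk = H[keep], z[keep], W[np.ix_(keep, keep)]
        x_sub = weighted_least_squares(Hk, zk, Wk)
        P_sub = np.linalg.inv(Hk.T @ Wk @ Hk)
        d = x_sub - x_full                          # solution separation
        P_d = P_sub - P_full                        # covariance of the separation
        stat = float(d @ np.linalg.pinv(P_d) @ d)   # normalised test statistic
        alerts.append(stat > threshold)
    return alerts

# Minimal synthetic demo: 6 satellites, 4 unknowns (x, y, z, clock bias).
rng = np.random.default_rng(0)
H = np.hstack([rng.normal(size=(6, 3)), np.ones((6, 1))])
z = rng.normal(scale=2.0, size=6)
print(solution_separation_test(H, z, np.eye(6)))
```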

Relevance: 20.00%

Abstract:

Background: Altered mechanical properties of the heel pad have been implicated in the development of plantar heel pain. However, the in vivo properties of the heel pad during gait remain largely unexplored in this cohort. The aim of the current study was to characterise the bulk compressive properties of the heel pad in individuals with and without plantar heel pain while walking.

Methods: The sagittal thickness and axial compressive strain of the heel pad were estimated in vivo from dynamic lateral foot radiographs acquired from nine subjects with unilateral plantar heel pain and an equivalent number of matched controls, while walking at their preferred speed. Compressive stress was derived from simultaneously acquired plantar pressure data. Principal viscoelastic parameters of the heel pad, including peak strain, secant modulus and energy dissipation (hysteresis), were estimated from the resulting stress–strain curves.

Findings: There was no significant difference in loaded and unloaded heel pad thickness, peak stress, peak strain, or secant and tangent modulus between subjects with and without heel pain. However, the fat pad of symptomatic feet had a significantly lower energy dissipation ratio (0.55 ± 0.17 vs. 0.69 ± 0.08) than that of asymptomatic feet (P < .05).

Interpretation: Plantar heel pain is characterised by a reduced energy dissipation ratio of the heel pad when measured in vivo and under physiologically relevant strain rates.
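
The abstract reports the energy dissipation ratio without showing its computation. As a sketch of the standard way such a hysteresis ratio is obtained from a loading/unloading stress–strain loop (synthetic curves, not the study's data):

```python
import numpy as np

# Sketch: energy dissipation (hysteresis) ratio from a loading/unloading
# stress-strain loop. Synthetic curves for illustration only.

def trapezoid_area(y, x):
    """Area under y(x) by the trapezoidal rule."""
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

def energy_dissipation_ratio(strain, stress_loading, stress_unloading):
    """Fraction of the loading energy that is not returned during unloading."""
    e_loading = trapezoid_area(stress_loading, strain)      # energy absorbed
    e_unloading = trapezoid_area(stress_unloading, strain)  # energy returned
    return (e_loading - e_unloading) / e_loading

# Synthetic heel-pad-like curves: stiffening on loading, less energy returned.
strain = np.linspace(0.0, 0.45, 100)
stress_loading = 150e3 * strain ** 2        # Pa
stress_unloading = 0.55 * stress_loading    # returns ~55% of the stored energy
print(energy_dissipation_ratio(strain, stress_loading, stress_unloading))  # ~0.45
```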

Relevance: 20.00%

Abstract:

The homeless have been subject to considerable scrutiny, historically and within current social, political and public discourse. The aetiology of homelessness has been the focus of a large body of economic, sociological, historical and political investigation. Importantly, efforts to conceptualise, explain and measure the phenomenon of homelessness and homeless people have occurred largely within the context of defining “the problem of the homeless” and generating solutions to that ‘problem’. There has been little consideration of how and why homelessness has come to be seen, or understood, as a problem, or how this can change across time and/or place. This alternative stream of research has focused on tracing and analysing the relationship between how people experiencing homelessness have become a matter of government concern and the manner in which homelessness itself has been problematised. With this in mind, this study has analysed the discourses (political, social and economic rationalities and knowledges) which have provided the conditions of possibility for the identification of the homeless and homelessness as a problem needing to be governed, and the means for translating these discourses into the applied domain. The aim of this thesis has been to contribute to current knowledge by developing a genealogy of the conditions and rationalities that have underpinned the problematisation of homelessness and the homeless. The outcome of this analysis has been to open up the opportunity to consider alternative governmental possibilities, arising from exposure of the way in which contemporary problematisations and responses have been influenced by the past. An understanding of this process creates an ability to appreciate the intended and unintended consequences for the future direction of public policy and contemporary research.

Relevance: 20.00%

Abstract:

There is a need in industry for a commodity polyethylene film with controllable degradation properties that will degrade in an environmentally neutral way, for applications such as shopping bags and packaging film. Additives such as starch have been shown to accelerate the degradation of plastic films; however, control of degradation is required so that the film retains its mechanical properties during storage and use, and then degrades when no longer required. By the addition of a photocatalyst it is hoped that the polymer film will break down on exposure to sunlight. Furthermore, it is desired that the polymer film will degrade in the dark after a short initial exposure to sunlight. Research has been undertaken into the photo- and thermo-oxidative degradation processes of 25 µm thick LLDPE (linear low density polyethylene) film containing titania from different manufacturers. Films were aged in a suntest or in an oven at 50 °C, and the formation of oxidation products was followed using IR spectroscopy. Degussa P25, Kronos 1002, and various organic-modified and doped titanias of the Sachtleben Hombitan and Huntsman Tioxide types incorporated into LLDPE films were assessed for photoactivity. Degussa P25 was found to be the most photoactive under UVA and UVC exposure. Surface modification of titania was found to reduce photoactivity. Crystal phase is thought to be among the most important factors when assessing the photoactivity of titania as a photocatalyst for degradation. Pre-irradiating the film containing 3% Degussa P25 titania with UVA or UVC for 24 hours prior to oven aging resulted in embrittlement in ca. 200 days. The multivariate data analysis technique PCA (principal component analysis) was used as an exploratory tool to investigate the IR spectral data. Oxidation products formed in similar relative concentrations across all samples, confirming that titania was catalysing the oxidation of the LLDPE film without changing the oxidation pathway. PCA was also employed to compare rates of degradation in different films, and it enabled the discovery of water vapour trapped inside cavities formed by oxidation around titania particles. Imaging ATR/FTIR spectroscopy with high lateral resolution was used in a novel experiment to examine the heterogeneous nature of the oxidation of a model polymer compound caused by the presence of titania particles. A model polymer containing Degussa P25 titania was solvent cast onto the internal reflection element of the imaging ATR/FTIR instrument and its oxidation under UVC was examined over time. Sensitisation of 5 µm domains by titania resulted in areas of relatively high oxidation product concentration. The suitability of transmission IR with a synchrotron light source for the study of polymer film oxidation was assessed at the Australian Synchrotron in Melbourne, Australia. Challenges such as interference fringes and a poor signal-to-noise ratio need to be addressed before this can become a routine technique.
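
The abstract does not say which software or preprocessing was used for the PCA. The following is a minimal, self-contained sketch, on synthetic spectra, of how exploratory PCA over a matrix of IR spectra can separate samples by extent of oxidation:

```python
import numpy as np
from sklearn.decomposition import PCA

# Exploratory PCA of IR spectra (synthetic example; the thesis does not state
# which software or preprocessing was actually used).

rng = np.random.default_rng(1)
n_spectra, n_wavenumbers = 40, 600
ageing_time = np.linspace(0, 200, n_spectra)    # days of oven ageing
# A single growing band (e.g. carbonyl region) as a Gaussian peak.
carbonyl_band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 350) / 15) ** 2)

# Each row is one spectrum: the band grows with ageing time, plus noise.
spectra = np.outer(ageing_time, carbonyl_band) \
    + rng.normal(0.0, 0.5, (n_spectra, n_wavenumbers))

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

# PC1 should track (up to an arbitrary sign) the accumulation of oxidation
# products with ageing time.
print(pca.explained_variance_ratio_)
print(np.corrcoef(scores[:, 0], ageing_time)[0, 1])
```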

Relevance: 20.00%

Abstract:

An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterised by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterised, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest.

Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of the turbulent diffusivity for the conserved scalar mean to that for the r.m.s. was found to be approximately 1. Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different.

Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence, as has previously been found theoretically by Klimenko (1995). It is also found that the conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favourably. The reaction-dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favourably with those found from experiment, except where the measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow, and investigating both the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
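
The covariance term mentioned above comes from decomposing the mean rate of a second-order reaction. As a sketch only (synthetic, anti-correlated concentration fluctuations rather than the measured data), the following shows why a negative NO–O3 covariance makes the true mean reaction rate lower than the product of the mean concentrations, which is what a closure that neglects the covariance would predict:

```python
import numpy as np

# Sketch of the mean-reaction-rate decomposition for NO + O3 -> products,
# instantaneous rate w = k * c_NO * c_O3. Synthetic fluctuating concentrations
# only; illustrates how a negative reactant covariance lowers the mean rate.

rng = np.random.default_rng(2)
k = 0.4                                    # rate constant, arbitrary units
n = 100_000

# Anti-correlated fluctuations: where the plume reactant (NO) is high, the
# ambient reactant (O3) tends to be low, as in an incompletely mixed plume.
mix = rng.uniform(0.0, 1.0, n)             # crude stand-in for a conserved scalar
c_no = 2.0 * mix + rng.normal(0.0, 0.05, n)
c_o3 = 1.0 * (1.0 - mix) + rng.normal(0.0, 0.05, n)

mean_rate = k * np.mean(c_no * c_o3)                   # true mean rate
product_of_means = k * np.mean(c_no) * np.mean(c_o3)   # covariance neglected
covariance_term = k * np.cov(c_no, c_o3, bias=True)[0, 1]

# The decomposition is exact: mean_rate = product_of_means + covariance_term.
print(mean_rate, product_of_means + covariance_term)
print(covariance_term < 0)                 # negative, so the true rate is lower
```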

Relevance: 20.00%

Abstract:

Principal Topic: "In less than ten years music labels will not exist anymore." (Michael Smelli, former Global COO Sony/BMG, MCA/QUT IMP Business Lab Digital Music Think Tank, 9 May 2009, Brisbane.)

Big music labels such as EMI, Sony BMG and UMG have been responsible for promoting and producing a myriad of stars in the music industry over recent decades. However, the industry structure is under enormous threat in a new, innovative era of digital music. Recent years have seen a dramatic shift in industry power with the emergence of Napster and other file sharing sites, iTunes and other online stores, the iPod and the MP3 revolution. Myspace.com and other social networking sites are connecting entrepreneurial artists with fans and creating online music communities independent of music labels. In 2008 the digital music business internationally grew by around 25% to US$3.7 billion. Digital platforms now account for around 20% of recorded music sales, up from 15% in 2007 (IFPI Digital Music Report 2009), while CD sales have fallen by 40% since their peak. Global digital music sales totalled an estimated US$3 billion in 2007, an increase of 40% on 2006 figures; digital sales accounted for an estimated 15% of the global market, up from 11% in 2006 and zero in 2003. The music industry is more advanced in terms of digital revenues than any other creative or entertainment industry except games; its digital share is more than twice that of newspapers (7%), films (35) or books (2%). All of these shifts present new possibilities for music entrepreneurs to act entrepreneurially and promote their music independently of the major music labels.

Diffusion of innovations has a long tradition in both sociology (e.g. Rogers 1962, 2003) and marketing (Bass 1969; Mahajan et al. 1990). The context of the current project is theoretically interesting in two respects. First, online social networks replace traditional face-to-face word-of-mouth communication. Second, music is a hedonistic product, which strongly influences the nature of interpersonal communications and their diffusion patterns. Both aspects have received very little attention in the diffusion literature to date, and no studies have investigated the influence of both simultaneously. This research project is concerned with the role of social networks in this new music industry landscape, and with how musicians willing to act entrepreneurially may leverage them. The key research question we intend to address is: How do online social network communities affect the nature, pattern and speed with which music diffuses?

Methodology/Key Propositions: We expect the nature and character of diffusion of popular, generic music genres to differ from that of specialised, niche music. To date, only Moe and Fader (2002) and Lee et al. (2003) have investigated diffusion patterns of music, and these studies focus on forecasting weekly sales of music CDs from advance purchase orders before launch, rather than taking a detailed look at diffusion patterns. Consequently, our first research questions are concerned with understanding the nature of online communications within the context of the diffusion of music and artists:

RQ1: What is the nature of fan-to-fan "word of mouth" online communications for music? Do these vary by type of artist and genre of music?

RQ2: What is the nature of artist-to-fan online communications for music? Do these vary by type of artist and genre of music? What types of communication are effective?

Two outcomes from social network theory research are particularly relevant to understanding how music might diffuse through social networks. Weak tie theory (Granovetter, 1973) argues that casual or infrequent contacts within a social network (weak ties) act as a link to unique information which is not normally contained within an entrepreneur's inner circle (strong tie) network. A related argument, structural hole theory (Burt, 1992), posits that it is the absence of direct links (structural holes) between members of a social network which offers similar informational benefits. Although these two theories argue for the informational benefits of casual linkages and diversity within a social network, others acknowledge that a balanced network consisting of a mix of strong and weak ties is perhaps more important overall (Uzzi, 1996). It is anticipated that the network structure of the fan base for different types of artists and genres of music will vary considerably. This leads to our third research question:

RQ3: How does the network structure of online social network communities affect the pattern and speed with which music diffuses?

The current paper is best described as theory elaboration. It reports the first, exploratory phase, designed to develop and elaborate relevant theory (the second phase will be a quantitative study of network structure and diffusion). We intend to develop specific research propositions or hypotheses from the above research questions. To do so we will conduct three focus group discussions with independent musicians and three focus group discussions with fans active in online music communication on social network sites. We will also conduct five case studies of bands that have successfully built fan bases through social networking sites (e.g. myspace.com, facebook.com). The idea is to identify which communication channels they employ and the characteristics of the fan interactions for different genres of music. We intend to interview each of the artists and analyse their online interaction with their fans.

Results and Implications: At the current stage, we have just begun to conduct the focus group discussions. An analysis of the themes from these focus groups will enable us to refine our research questions into testable hypotheses. Ultimately, our research will provide a better understanding of how social networks promote the diffusion of music, and how this varies for different genres of music. Some music entrepreneurs will therefore be able to promote their music more effectively. The results may be further generalised to other industries where online peer-to-peer communication is common, such as other forms of entertainment and consumer technologies.
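
The abstract cites Bass (1969) among the diffusion models it builds on. Purely as an illustrative sketch (not the authors' method, and with made-up parameter values), the discrete-time Bass model shows how an "external influence" coefficient p (e.g. label promotion) and an "internal influence" coefficient q (e.g. fan-to-fan word of mouth) shape the pattern and speed of adoption:

```python
import numpy as np

# Bass (1969) diffusion model sketch; illustrative only, not the study's model.
# p: coefficient of innovation (external influence, e.g. media/label promotion)
# q: coefficient of imitation (internal influence, e.g. fan-to-fan word of mouth)

def bass_adoptions(p, q, market_size, periods):
    """Return the number of new adopters per period (discrete-time Bass model)."""
    cumulative = 0.0
    new_per_period = []
    for _ in range(periods):
        remaining = market_size - cumulative
        adopters = (p + q * cumulative / market_size) * remaining
        new_per_period.append(adopters)
        cumulative += adopters
    return np.array(new_per_period)

# A word-of-mouth-driven niche act (low p, high q) versus a heavily promoted
# mainstream release (high p, lower q): the niche act peaks later.
niche = bass_adoptions(p=0.002, q=0.5, market_size=100_000, periods=30)
mainstream = bass_adoptions(p=0.05, q=0.2, market_size=1_000_000, periods=30)
print(int(niche.argmax()), int(mainstream.argmax()))   # period of peak adoption
```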

Relevance: 20.00%

Abstract:

The connection between social reform and urban management is evident throughout the history of the city. This article maps out how ideas about social reform and social housing were established historically during the development of the nineteenth-century city. The second part examines contemporary shifts in thinking about homelessness through a case study of two Brisbane City Council initiatives: the Homeless Shelter Trial and Footprints along Kurilpa.