469 results for Interdisciplinary Methods
Abstract:
In the expanding literature on creative practice research, art and design are often described as a unified field. They are bracketed together (art-and-design), referred to as interchangeable terms (art/design), and nested together, as if the practices of one domain encompass the other. However, it is possible to establish substantial differences in research approaches. In this chapter we argue that core distinctions arise out of the goals of the research, the intentions invested in the resulting “artefacts” (creative works, products, events), and the knowledge claims made for the research outcomes. Moreover, these fundamental differences give rise to a number of contingent attributes of the research, such as the forming contexts, methodological approaches, and ways of evidencing and reporting new knowledge. We do not strictly ascribe these differences to disciplinary contexts. Rather, we use the terms effective practice research and evocative practice research to describe the spirit of the two distinctive research paradigms we identify. In short, effective practice research (often pursued in design fields) seeks a solution (or resolution) to a problem identified with a particular community, and it produces an artefact that addresses this problem by effecting change (making a situation, product or process more efficient or effective in some way). Evocative practice research (often pursued in the creative arts), on the other hand, is driven by individual preoccupations, cultural concerns or human experience more broadly. It produces artefacts that evoke affect and resonance, and that are poetically irreducible in meaning. We cite recent examples of creative research projects that illustrate the distinctions we identify, and then go on to describe projects that integrate these modes of research. In this way, we map out a creative research spectrum, with distinct poles as well as multiple hybrid possibilities. The hybrid projects we reference are not presented as evidence of an undifferentiated field. Instead, we argue that they integrate research modes in deliberate, purposeful and distinctive ways: employing effective practice research methods in the production of evocative artefacts, or harnessing evocative (as well as effective) research paradigms to effect change.
Abstract:
Porn studies researchers in the humanities have tended to use different research methods from those in the social sciences, and there has been surprisingly little conversation between the two groups about methodology. This article presents a basic introduction to textual analysis and statistical analysis, aiming to give all porn studies researchers a familiarity with these two quite distinct traditions of data analysis. Comparing the two approaches, the article suggests that social science approaches are often strongly reliable, but can sacrifice validity to that end; textual analysis is much less reliable, but has the capacity to be strongly valid. Statistical methods tend to produce a picture of human beings as groups, in terms of what they have in common, whereas humanities approaches often seek out uniqueness. Social science approaches have also asked a more limited range of questions than have the humanities. The article ends with a call to mix up the kinds of research methods that are applied to various objects of study.
Abstract:
This paper examines a doctoral journey of interdisciplinary exploration, explication, examination...and exasperation. In choosing to pursue a practice-led doctorate, I had determined from the outset that ‘writing 100,000 words that only two people ever read’ was not something which interested me. Hence, the oft-asked question of ‘what kind of doctorate’ I was engaged in consistently elicited the response, “a useful one”. To satisfy my own imperatives of authenticity and usefulness, my doctoral research had to clearly demonstrate relevance to, productively inform, engage with, and add value to the wider professional field(s) of practice, the students in the university courses I teach, and the broader community, not just the academic community. Consequently, over the course of my research, the question ‘But what makes it Doctoral?’ consistently resounded and resonated. Answering that question, to the satisfaction not only of the traditionalists asking it but, perhaps surprisingly, of some academic innovators, and more particularly of myself as researcher, revealed academic and political inconsistencies and issues which challenged both the fundamental assumptions and the actuality of practice-led research. This paper examines some of those inconsistencies, issues and challenges, and provides at least one possible answer to the question: ‘But what makes it Doctoral?’
Abstract:
One of the main objectives of law schools, beyond educating students, is to produce viable legal research. The comments in this paper are largely confined to the Australian context, and to examine this topic effectively it is necessary to briefly review the current tertiary research agenda in Australia. This paper argues that there is a need for recognition and support for an expanded legal research framework, along with additional research training for legal academics. There also need to be more effective methods of measuring and recognising quality in legal research, methods that can engender respect in an interdisciplinary context.
Abstract:
Interdisciplinary research is often funded by national government initiatives or large corporate sponsorship, and as such demands periodic reporting on the use of those funds. For reasons of accountability, governance and communication to the taxpayer, the outcomes of the research need to be measured and understood. The interdisciplinary approach to research raises many challenges for impact reporting. This presentation will consider best-practice workflow models and methodologies. Novel methodologies that can be added to the usual metrics of academic publications include analysis of the percentage share of total publications in a subject or keyword field, calculation of the most cited publication in a key phrase category, analysis of who has cited or reviewed the work, and benchmarking of this data against others in the same category. At QUT, interest in how collaborative networking is trending in a research theme has led to the creation of some useful co-authorship graphs that demonstrate the network positions of authors and the strength of their scientific collaborations within a group. The scale of international collaborations is also worth including in the assessment. However, despite all of the tools and techniques available, the most useful way researchers can help themselves and the process is to set up and maintain their researcher identifier and profile.
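A minimal sketch of the co-authorship graph idea, assuming publication records reduce to simple author lists; the sample data and the use of networkx are illustrative assumptions, not details taken from the presentation:

```python
# Minimal sketch: build a weighted co-authorship graph from publication
# author lists and report simple network-position measures. The input
# records and the use of networkx are illustrative assumptions.
from itertools import combinations
import networkx as nx

publications = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author B"],
    ["Author B", "Author D"],
]

G = nx.Graph()
for authors in publications:
    for a, b in combinations(sorted(set(authors)), 2):
        # The edge weight counts how often two authors co-publish,
        # a simple proxy for collaboration strength.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree and betweenness centrality as rough indicators of each
# author's position within the collaboration network.
betweenness = nx.betweenness_centrality(G)
for author in G.nodes:
    print(author, G.degree(author), round(betweenness[author], 3))
```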
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high-resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
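A minimal sketch of what that per-test-case evaluation might look like, assuming each test case provides matched point pairs and a ground-truth fundamental matrix; the error metric (mean symmetric epipolar distance) and the use of OpenCV's RANSAC estimator as the candidate method are illustrative assumptions, not details taken from the paper:

```python
# Minimal sketch of the evaluation step: apply a test two-view geometry
# method to a test case (matched point pairs) and score it against the
# ground-truth fundamental matrix.
import numpy as np
import cv2

def mean_epipolar_distance(F, pts1, pts2):
    """Mean point-to-epipolar-line distance, averaged over both images."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])   # homogeneous coordinates, N x 3
    x2 = np.hstack([pts2, ones])
    l2 = x1 @ F.T                  # epipolar lines in image 2 (F x1)
    l1 = x2 @ F                    # epipolar lines in image 1 (F^T x2)
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return float(np.mean(d1 + d2) / 2)

def evaluate_test_case(pts1, pts2, F_ground_truth):
    """Return (test-method error, ground-truth residual) for one test case."""
    # The "test method" here is OpenCV's RANSAC fundamental-matrix
    # estimator; any candidate two-view geometry method could be slotted in.
    F_est, _ = cv2.findFundamentalMat(
        pts1.astype(np.float32), pts2.astype(np.float32), cv2.FM_RANSAC)
    if F_est is None:
        return None  # estimation failed on this test case
    return (mean_epipolar_distance(F_est, pts1, pts2),
            mean_epipolar_distance(F_ground_truth, pts1, pts2))
```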
Abstract:
Aims: To compare different methods for identifying alcohol involvement in injury-related emergency department presentations by Queensland youth, and to explore the alcohol terminology used in triage text. Methods: Emergency Department Information System data were provided for patients aged 12-24 years with an injury-related diagnosis code presenting to a Queensland emergency department over the 5-year period 2006-2010 (N=348,895). Three approaches were used to estimate alcohol involvement: (1) analysis of coded data, (2) mining of triage text, and (3) estimation using an adaptation of alcohol-attributable fractions (AAF). Cases were identified as ‘alcohol-involved’ by code and text, as well as AAF-weighted. Results: Around 6.4% of these injury presentations overall had some documentation of alcohol involvement, with higher proportions of alcohol involvement documented for 18-24 year olds, females, Indigenous youth, presentations occurring on a Saturday or Sunday, and presentations occurring between midnight and 5 am. The most common alcohol terms identified for all subgroups were generic ones (e.g. ETOH or alcohol), with almost half of the cases where alcohol involvement was documented having a generic alcohol term recorded in the triage text. Conclusions: Emergency department data are a useful source of information for identifying high-risk subgroups to target intervention opportunities, though not a reliable source for incidence or trend estimation in their current unstandardised form. Improving the accuracy and consistency of identifying, documenting and coding alcohol involvement at the point of data capture in the emergency department is the most desirable long-term approach to producing a more solid evidence base to support policy and practice in this field.
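A minimal sketch of the triage-text mining approach (method 2 above), assuming free-text triage notes are searched for alcohol-related terms; the term list is a small illustrative subset, anchored by the generic terms (ETOH, alcohol) the abstract reports as most common:

```python
# Minimal sketch: flag triage notes containing alcohol-related terms.
# The term list is illustrative, not the study's actual dictionary.
import re

ALCOHOL_TERMS = [
    r"\betoh\b", r"\balcohol\b", r"\bintoxicat\w*", r"\bdrunk\b",
]
PATTERN = re.compile("|".join(ALCOHOL_TERMS), re.IGNORECASE)

def alcohol_involved(triage_text: str) -> bool:
    """Return True if the triage free text mentions an alcohol term."""
    return bool(PATTERN.search(triage_text))

print(alcohol_involved("Pt fell from fence, ETOH on board"))  # True
print(alcohol_involved("Sports injury, twisted ankle"))       # False
```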
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable, as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, over a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, so we must be careful to use a suitable approximation for a given system; otherwise, inaccurate predictions could be made.
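For reference, the classical continuum model referred to above is the Fisher-Kolmogorov (Fisher-KPP) equation. In one spatial dimension, writing u(x,t) for cell density, D for diffusivity and λ for the proliferation rate (notation assumed here, not taken from the abstract), it reads

\[
\frac{\partial u}{\partial t} = D \frac{\partial^{2} u}{\partial x^{2}} + \lambda u (1 - u),
\]

whose long-time solutions are travelling wave fronts moving at the constant minimum speed \(c = 2\sqrt{D\lambda}\); this is the constant-speed asymptotic regime that the abstract contrasts with the transient behaviour seen in scratch assays.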
Abstract:
This study reports on action research undertaken at Queensland University of Technology. It evaluates the effectiveness of integrating GIS within the substantive domains of an existing land use planning course in 2011. Using student performance data, learning experience surveys and a questionnaire survey, it also evaluates the impacts of incorporating hybrid instructional methods (e.g., in-class and online instructional videos) in 2012 and 2013. Results show that students (re)iterated the importance of GIS in the course, justifying the integration; that the hybrid methods significantly increased student performance; and that the videos are more suitable as a complement to, rather than a replacement for, in-class activity.
Abstract:
Background & Aims: Nutrition screening and assessment enable early identification of malnourished people and those at risk of malnutrition. Appropriate assessment tools assist with informing and monitoring nutrition interventions. Tool choice needs to be appropriate to the population and setting. Methods: Community-dwelling people with Parkinson’s disease (>18 years) were recruited. Body mass index (BMI) was calculated from weight and height. Participants were classified as underweight according to World Health Organisation (WHO) (≤18.5 kg/m²) and age-specific (<65 years, ≤18.5 kg/m²; ≥65 years, ≤23.5 kg/m²) cut-offs. The Mini-Nutritional Assessment (MNA) screening (MNA-SF) and total assessment scores were calculated. The Patient-Generated Subjective Global Assessment (PG-SGA), including the Subjective Global Assessment (SGA), was performed. The sensitivity, specificity, positive predictive value, negative predictive value and weighted kappa statistic of each of the above compared to the SGA were determined. Results: Median age of the 125 participants was 70.0 (35-92) years. Age-specific BMI cut-offs (Sn 68.4%, Sp 84.0%) performed better than the WHO categories (Sn 15.8%, Sp 99.1%). The MNA-SF performed better (Sn 94.7%, Sp 78.3%) than both BMI categorisations for screening purposes. The MNA had higher specificity but lower sensitivity than the PG-SGA (MNA Sn 84.2%, Sp 87.7%; PG-SGA Sn 100.0%, Sp 69.8%). Conclusions: BMI lacks sensitivity to identify malnourished people with Parkinson’s disease and should be used with caution. The MNA-SF may be a better screening tool in people with Parkinson’s disease. The PG-SGA performed well and may assist with informing and monitoring nutrition interventions. Further research should be conducted to validate screening and assessment tools in Parkinson’s disease.
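For reference, BMI and the diagnostic accuracy measures reported above have their standard definitions in terms of true/false positives (TP, FP) and true/false negatives (TN, FN); these are the conventional formulas, not details specific to this study:

\[
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^{2}}, \qquad
\mathrm{Sn} = \frac{TP}{TP + FN}, \quad
\mathrm{Sp} = \frac{TN}{TN + FP}, \quad
\mathrm{PPV} = \frac{TP}{TP + FP}, \quad
\mathrm{NPV} = \frac{TN}{TN + FN}
\]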
Abstract:
Regenerative medicine includes two techniques, namely tissue engineering and cell-based therapy, for repairing tissue damage efficiently. Most importantly, huge numbers of autologous cells are required for these practices. Nevertheless, primary cells from autologous tissue grow very slowly when cultured in vitro; moreover, they lose their natural characteristics over prolonged culture periods. Transforming growth factor-beta (TGF-β) is a ubiquitous protein found biologically in its latent form, which prevents it from eliciting a response until conversion to its active form. In its active form, TGF-β acts as a proliferative agent in many cell lines of mesenchymal origin in vitro. This article reviews some of the important activation methods of TGF-β (physicochemical, enzyme-mediated, non-specific protein-interaction-mediated, and drug-induced), which may be established as exogenous factors to be used in culture medium to obtain extensive proliferation of primary cells.
Abstract:
The safe working lifetime of a structure in a corrosive or otherwise harsh environment is frequently limited not by the material itself but by the integrity of the coating material. Advanced surface coatings are usually crosslinked organic polymers, such as epoxies and polyurethanes, which must not shrink, crack or degrade when exposed to environmental extremes. While standard test methods for the environmental durability of coatings have been devised, these tests are structured more towards determining the end of life than towards anticipating degradation. We have been developing prognostic tools to anticipate coating failure based on a fundamental understanding of degradation behaviour which, depending on the polymer structure, is mediated through hydrolytic or oxidative processes. Fourier transform infrared spectroscopy (FTIR) is a widely used laboratory technique for the analysis of polymer degradation, and with the development of portable FTIR spectrometers, new opportunities have arisen to measure polymer degradation non-destructively in the field. For IR reflectance sampling, both diffuse (scattered) and specular (direct) reflections can occur. The complexity of these spectra has provided interesting opportunities to study surface chemical and physical changes during paint curing, service abrasion and weathering, but has often required advanced statistical analysis methods such as chemometrics to discern these changes. Results from our studies using this and related techniques, and the technical challenges that have arisen, will be presented.
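As a rough illustration of the kind of chemometric analysis mentioned above, the sketch below applies principal component analysis to simulated FTIR spectra in which a carbonyl-like band near 1720 cm⁻¹ grows with oxidative degradation; the synthetic data and the use of scikit-learn are assumptions for illustration only:

```python
# Minimal sketch: PCA on simulated FTIR spectra to pull out a
# degradation-related trend. Synthetic data for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.linspace(4000, 650, 800)              # mid-IR range, cm^-1
band = np.exp(-((wavenumbers - 1720) / 40) ** 2)       # carbonyl-like band

# Simulated spectra: the carbonyl band grows with oxidative exposure.
exposure = np.linspace(0, 1, 20)
spectra = np.outer(exposure, band) + 0.01 * rng.standard_normal((20, 800))

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

# The first PC score should track exposure, i.e. the degradation trend
# a prognostic tool would monitor.
print(np.corrcoef(scores[:, 0], exposure)[0, 1])
```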
Abstract:
Modern lipidomics relies heavily on mass spectrometry for the structural characterization and quantification of lipids of biological origin. Structural information is gained by tandem mass spectrometry (MS/MS), whereby lipid ions are fragmented to elucidate lipid class, fatty acid chain length, and degree of unsaturation. Unfortunately, however, in most cases double bond position cannot be assigned on the basis of MS/MS data alone, and thus significant structural diversity is hidden from such analyses. For this reason, we have developed two online methods for determining double bond position within unsaturated lipids: ozone electrospray ionization mass spectrometry (OzESI-MS) and ozone-induced dissociation (OzID). Both techniques utilize ozone to cleave C=C double bonds, producing chemically induced fragment ions that locate the position(s) of unsaturation.