453 results for Iterative Methods
Abstract:
Double-stranded RNA (dsRNA) induces an endogenous sequence-specific RNA degradation mechanism in most eukaryotic cells. The mechanism can be harnessed to silence genes in plants by expressing self-complementary single-stranded (hairpin) RNA in which the duplexed region has the same sequence as part of the target gene's mRNA. We describe a number of plasmid vectors for generating hairpin RNAs, including those designed for high-throughput cloning, and provide protocols for their use.
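The hairpin principle lends itself to a short illustration: the cloned insert pairs a fragment of the target mRNA with its reverse complement so that the transcript folds back into a duplex. The following minimal Python sketch shows the idea; the target fragment and the loop/spacer are illustrative placeholders, not sequences from any actual vector.

```python
# Minimal sketch: assembling the insert for a hairpin RNA (hpRNA) construct.
# The sense arm is a fragment of the target mRNA (in DNA form); the antisense
# arm is its reverse complement, so the transcribed RNA folds into a duplex.
# All sequences are illustrative placeholders.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def hairpin_insert(target_fragment: str, loop: str) -> str:
    """Sense arm + spacer loop + antisense arm, as cloned into the vector."""
    return target_fragment + loop + reverse_complement(target_fragment)

target = "ATGGCTAGCTTCAGGAACC"   # fragment of the target gene's mRNA (placeholder)
spacer = "GGATCCAAGCTT"          # placeholder loop/spacer (often an intron in practice)
print(hairpin_insert(target, spacer))
```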
Abstract:
In plant cells, DICER-LIKE4 processes perfectly double-stranded RNA (dsRNA) into short interfering (si)RNAs, and DICER-LIKE1 generates micro (mi)RNAs from primary miRNA transcripts (pri-miRNAs) that form fold-back structures of imperfectly paired dsRNA. Both siRNAs and miRNAs direct the endogenous endonuclease ARGONAUTE1 to cleave complementary target single-stranded RNAs, and either small RNA (sRNA)-directed pathway can be harnessed to silence genes in plants. A routine way of inducing and directing RNA silencing by siRNAs is to express self-complementary single-stranded hairpin RNA (hpRNA), in which the duplexed region has the same sequence as part of the target gene's mRNA. Artificial miRNA (amiRNA)-mediated silencing uses an endogenous pri-miRNA in which the original miRNA/miRNA* sequence has been replaced with a sequence complementary to the new target gene. In this chapter, we describe the plasmid vector systems routinely used by our research group for the generation of either hpRNA-derived siRNAs or amiRNAs.
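By contrast with the hpRNA approach, amiRNA design keeps the pri-miRNA backbone and swaps only the miRNA/miRNA* region. Below is a minimal sketch assuming a toy backbone layout of 5'-arm | miRNA | loop | miRNA* | 3'-arm and simplified perfect pairing (real miRNA* strands carry mismatches); all sequences are placeholders, not those of any real pri-miRNA.

```python
# Minimal amiRNA-design sketch: replace the original miRNA/miRNA* pair in a
# toy pri-miRNA backbone with an artificial guide complementary to the new
# target site. Pairing is simplified to perfect complementarity.

def reverse_complement_rna(seq: str) -> str:
    """Reverse complement for RNA sequences."""
    return seq.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def build_amirna_precursor(arm5: str, loop: str, arm3: str, target_site: str) -> str:
    """Assemble 5'-arm | amiRNA | loop | amiRNA* | 3'-arm (simplified)."""
    amirna = reverse_complement_rna(target_site)   # guide, complementary to the target
    amirna_star = reverse_complement_rna(amirna)   # perfect-pair partner (simplified)
    return arm5 + amirna + loop + amirna_star + arm3

arm5, loop, arm3 = "GGAUC", "GUUUUG", "GAUCC"      # placeholder backbone pieces
target_site = "ACGGAUGCUUACGGAUCCUAG"              # placeholder 21-nt site in the target mRNA
print(build_amirna_precursor(arm5, loop, arm3, target_site))
```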
Abstract:
Porn studies researchers in the humanities have tended to use different research methods from those in the social sciences, and there has been surprisingly little conversation between the two groups about methodology. This article presents a basic introduction to textual analysis and statistical analysis, aiming to give all porn studies researchers a familiarity with these two quite distinct traditions of data analysis. Comparing the two approaches, the article suggests that social science approaches are often strongly reliable – but can sacrifice validity to this end. Textual analysis is much less reliable, but has the capacity to be strongly valid. Statistical methods tend to produce a picture of human beings as groups, in terms of what they have in common, whereas humanities approaches often seek out uniqueness. Social science approaches have also asked a more limited range of questions than have the humanities. The article ends with a call to mix up the kinds of research methods that are applied to various objects of study.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or data which can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this way, keyword creation is part strategy and part art. The paper describes strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. It also discusses the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
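The content-analysis idea from the first paper can be sketched concretely: score each incoming tweet for topical relevance and urgency, then keep only the top-scoring items up to the responders' manual-review capacity. The keyword lists, weights, and example tweets below are assumptions for illustration, not material from the panel papers.

```python
# Hedged sketch of content-analysis triage for a crisis tweet stream:
# keyword-weighted scoring, then top-k selection to match responder capacity.

import heapq

TOPIC_TERMS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "#qldfloods": 2.5}
URGENCY_TERMS = {"help": 3.0, "urgent": 3.0, "now": 1.0}

def score_tweet(text: str) -> float:
    """Sum topical and urgency weights over the tweet's words."""
    words = text.lower().split()
    topical = sum(TOPIC_TERMS.get(w, 0.0) for w in words)
    urgency = sum(URGENCY_TERMS.get(w, 0.0) for w in words)
    return topical + urgency

def filter_for_responders(tweets: list[str], capacity: int) -> list[str]:
    """Return the `capacity` highest-scoring tweets for manual review."""
    return heapq.nlargest(capacity, tweets, key=score_tweet)

stream = [
    "trapped on roof please help now #qldfloods",
    "lovely weather today",
    "evacuate riverside area urgent",
]
print(filter_for_responders(stream, capacity=2))
```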
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that do not challenge new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location, and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, culling and downsampling video frames, and selecting test cases. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
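The evaluation step admits a compact illustration: run the method under test on each two-view test case and measure the angular error between its estimated relative rotation and the ground-truth rotation. In the sketch below, `estimate_relative_pose` is a hypothetical stand-in for the method under test, not an interface from the paper.

```python
# Hedged sketch: comparing a test two-view geometry method against ground
# truth via the angle of the residual rotation R_est^T @ R_gt.

import numpy as np

def rotation_error_deg(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Angle (degrees) of the residual rotation between estimate and ground truth."""
    R_delta = R_est.T @ R_gt
    cos_theta = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def evaluate(test_cases, estimate_relative_pose):
    """Mean rotation error over test cases of (image_a, image_b, R_gt, t_gt)."""
    errors = []
    for img_a, img_b, R_gt, _t_gt in test_cases:
        R_est, _t_est = estimate_relative_pose(img_a, img_b)
        errors.append(rotation_error_deg(R_est, R_gt))
    return float(np.mean(errors))
```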
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Moreover, several optimization techniques are proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
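The greedy iterative schedule can be sketched as follows: at each step, impute the missing value whose best query currently has the highest confidence, since each newly filled value can strengthen the queries for the remaining gaps. The `score_query` and `run_query` callbacks below are hypothetical placeholders, not WebPut's actual interfaces.

```python
# Minimal sketch of a greedy iterative imputation schedule in the spirit of
# WebPut. score_query(cell) -> confidence of the cell's best imputation query;
# run_query(cell) -> (imputed_value, confidence). Both are assumed callbacks.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MissingCell:
    tuple_id: int
    attribute: str
    value: Optional[str] = None

def greedy_impute(missing, score_query, run_query):
    """Fill missing values in descending order of current query confidence."""
    pending = list(missing)
    while pending:
        best = max(pending, key=score_query)   # cell with the most confident query
        best.value, _conf = run_query(best)    # issue the web query, fill the gap
        pending.remove(best)                   # the rest are re-scored next round,
                                               # now against a more complete database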
Abstract:
Aims: To compare different methods for identifying alcohol involvement in injury-related emergency department presentations in Queensland youth, and to explore the alcohol terminology used in triage text. Methods: Emergency Department Information System data were provided for patients aged 12-24 years with an injury-related diagnosis code presenting to a Queensland emergency department over the 5-year period 2006-2010 (N=348,895). Three approaches were used to estimate alcohol involvement: 1) analysis of coded data, 2) mining of triage text, and 3) estimation using an adaptation of alcohol-attributable fractions (AAF). Cases were identified as ‘alcohol-involved’ by code and text, as well as AAF-weighted. Results: Around 6.4% of these injury presentations overall had some documentation of alcohol involvement, with higher proportions documented for 18-24 year olds, females, Indigenous youth, presentations occurring on a Saturday or Sunday, and presentations occurring between midnight and 5am. The most common alcohol terms identified for all subgroups were generic ones (e.g. ETOH or alcohol), with almost half of the cases where alcohol involvement was documented having a generic alcohol term recorded in the triage text. Conclusions: Emergency department data is a useful source of information for identifying high-risk subgroups to target intervention opportunities, though in its current unstandardised form it is not a reliable source for incidence or trend estimation. Improving the accuracy and consistency of identifying, documenting and coding alcohol involvement at the point of data capture in the emergency department is the most desirable long-term approach to producing a more solid evidence base to support policy and practice in this field.
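The triage-text mining approach amounts to matching a term list against free-text notes. In the sketch below, only "ETOH" and "alcohol" come from the abstract; the remaining terms, and the example notes, are illustrative guesses.

```python
# Hedged sketch of triage-text mining: flag a presentation as alcohol-involved
# when the free-text triage note contains a known alcohol term as a whole word.

import re

ALCOHOL_TERMS = ["etoh", "alcohol", "intoxicated", "drunk", "binge"]
PATTERN = re.compile(r"\b(" + "|".join(ALCOHOL_TERMS) + r")\b", re.IGNORECASE)

def alcohol_involved(triage_text: str) -> bool:
    """True if any alcohol term appears as a whole word in the triage note."""
    return PATTERN.search(triage_text) is not None

print(alcohol_involved("Pt fell from fence, ETOH on board"))   # True
print(alcohol_involved("Sports injury, no other history"))     # False
```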
Abstract:
There is currently a wide range of research into the recent introduction of student response systems in higher education and tertiary settings (Banks 2006; Kay and Le Sange 2009; Beatty and Gerace 2009; Lantz 2010; Sprague and Dahl 2009). However, most of this pedagogical literature has generated ‘how to’ approaches regarding the use of ‘clickers’, keypads, and similar response technologies. There are currently no systematic reviews of the effectiveness of ‘GoSoapBox’ – a more recent, and increasingly popular, student response system – or of its capacity to enhance critical thinking and achieve sustained learning outcomes. With rapid developments in teaching and learning technologies across all undergraduate disciplines, there is a need for comprehensive, evidence-based advice on these types of technologies, their uses, and their overall efficacy. This paper addresses this gap in knowledge. Our teaching team, in an undergraduate Sociology and Public Health unit at the Queensland University of Technology (QUT), introduced GoSoapBox as a mechanism for discussing controversial topics, such as sexuality, gender, economics, religion, and politics, during lectures, and for taking opinion polls on social and cultural issues affecting human health. We also used this new teaching technology to allow students to interact with each other during class – on both social and academic topics – and to generate discussions and debates during lectures. The paper reports on a data-driven study into how this interactive online tool worked to improve engagement and the quality of academic work produced by students. Firstly, the paper covers the recent literature reviewing student response systems in tertiary settings. Secondly, it outlines the theoretical framework used to generate this pedagogical research. In keeping with the social and collaborative features of Web 2.0 technologies, Bandura’s Social Learning Theory (SLT) is applied to investigate the effectiveness of GoSoapBox as an online tool for improving learning experiences and the quality of academic output by students. Bandura has emphasised the Internet as a tool for ‘self-controlled learning’ (Bandura 2001), as it provides the education sector with an opportunity to reconceptualise the relationship between learning and thinking (Glassman & Kang 2011). Thirdly, we describe the methods used to implement GoSoapBox in our lectures and tutorials, the aspects of the technology we drew on for learning purposes, and the methods for obtaining feedback from students about the effectiveness or otherwise of this tool. Fourthly, we report findings from an examination of all student/staff activity on GoSoapBox, as well as reports from students about its benefits and limitations as a learning aid. We then present a theoretical model, produced via an iterative analytical process between SLT and our data analysis, for use by academics and teachers across the undergraduate curriculum. The model has implications for all teachers considering the use of student response systems to improve the learning experiences of their students. Finally, we consider some of the negative aspects of GoSoapBox as a learning aid.
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable, as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, over a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than the mean-field approximation, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, so we must be careful to use a suitable approximation for a given system; otherwise inaccurate predictions could be made.
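A minimal sketch (not the authors' implementation) of the underlying discrete process may help fix ideas: agents on a one-dimensional lattice attempt nearest-neighbour migration with probability pm and proliferation with probability pp per sweep, and either event is aborted if the chosen site is occupied (volume exclusion). Averaging over many realisations gives the "averaged discrete results" against which the approximations are judged; all parameter values below are illustrative.

```python
# Illustrative volume-excluding birth-migration process on a 1D lattice.

import random

def sweep(lattice, pm, pp):
    """One random-sequential update: attempted migration, then proliferation."""
    L = len(lattice)
    agents = [i for i in range(L) if lattice[i]]
    random.shuffle(agents)
    for i in agents:
        if random.random() < pm:                 # attempted migration
            j = i + random.choice((-1, 1))
            if 0 <= j < L and not lattice[j]:    # exclusion: abort if site occupied
                lattice[i], lattice[j] = 0, 1
                i = j
        if random.random() < pp:                 # attempted proliferation
            j = i + random.choice((-1, 1))
            if 0 <= j < L and not lattice[j]:
                lattice[j] = 1                   # daughter occupies an empty neighbour

def front_profile(realisations=100, L=200, seeded=20, pm=0.5, pp=0.05, steps=200):
    """Average occupancy over many realisations (the averaged discrete results)."""
    totals = [0] * L
    for _ in range(realisations):
        lattice = [1] * seeded + [0] * (L - seeded)   # front seeded at the left edge
        for _ in range(steps):
            sweep(lattice, pm, pp)
        totals = [t + s for t, s in zip(totals, lattice)]
    return [t / realisations for t in totals]

print(front_profile()[:40])   # occupancy decays through the moving front region
```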
Abstract:
This study reports on action research undertaken at the Queensland University of Technology. It evaluates the effectiveness of the integration of GIS within the substantive domains of an existing land use planning course in 2011. Using student performance, learning experience survey, and questionnaire survey data, it also evaluates the impacts of incorporating hybrid instructional methods (e.g., in-class and online instructional videos) in 2012 and 2013. Results show that: students (re)iterated the importance of GIS in the course, justifying the integration; the hybrid methods significantly increased student performance; and the videos are more suitable as a complement to, rather than a replacement for, in-class activity.
Abstract:
Background & Aims: Nutrition screening and assessment enable early identification of malnourished people and those at risk of malnutrition. Appropriate assessment tools assist with informing and monitoring nutrition interventions, and tool choice needs to be appropriate to the population and setting. Methods: Community-dwelling people with Parkinson’s disease (>18 years) were recruited. Body mass index (BMI) was calculated from weight and height, and participants were classified as underweight according to World Health Organisation (WHO) (≤18.5 kg/m²) and age-specific (<65 years, ≤18.5 kg/m²; ≥65 years, ≤23.5 kg/m²) cut-offs. The Mini-Nutritional Assessment (MNA) screening (MNA-SF) and total assessment scores were calculated. The Patient-Generated Subjective Global Assessment (PG-SGA), including the Subjective Global Assessment (SGA), was performed. The sensitivity (Sn), specificity (Sp), positive predictive value, negative predictive value and weighted kappa statistic of each of the above compared to the SGA were determined. Results: Median age of the 125 participants was 70.0 (range 35-92) years. Age-specific BMI cut-offs (Sn 68.4%, Sp 84.0%) performed better than WHO categories (Sn 15.8%, Sp 99.1%). The MNA-SF performed better (Sn 94.7%, Sp 78.3%) than both BMI categorisations for screening purposes. The MNA had higher specificity but lower sensitivity than the PG-SGA (MNA Sn 84.2%, Sp 87.7%; PG-SGA Sn 100.0%, Sp 69.8%). Conclusions: BMI lacks sensitivity to identify malnourished people with Parkinson’s disease and should be used with caution. The MNA-SF may be a better screening tool in people with Parkinson’s disease. The PG-SGA performed well and may assist with informing and monitoring nutrition interventions. Further research should be conducted to validate screening and assessment tools in Parkinson’s disease.
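The agreement statistics used here come from a 2x2 table of each tool's classification against the SGA reference. The sketch below computes them; the counts are back-calculated for illustration from the MNA-SF figures reported above (Sn 94.7% ≈ 18/19; Sp 78.3% ≈ 83/106; 19 + 106 = 125 participants) and are not taken from the paper itself.

```python
# Worked sketch: sensitivity, specificity, PPV and NPV from a 2x2 table of
# screening result vs SGA reference classification.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # malnourished correctly flagged
        "specificity": tn / (tn + fp),   # well-nourished correctly cleared
        "ppv":         tp / (tp + fp),   # flagged who are truly malnourished
        "npv":         tn / (tn + fn),   # cleared who are truly well-nourished
    }

# Illustrative counts consistent with the MNA-SF row above (Sn 94.7%, Sp 78.3%).
print(diagnostic_metrics(tp=18, fp=23, fn=1, tn=83))
```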
Abstract:
Regenerative medicine encompasses two efficient techniques, namely tissue engineering and cell-based therapy, for repairing tissue damage. Importantly, huge numbers of autologous cells are required for these practices. However, primary cells from autologous tissue grow very slowly in in vitro culture; moreover, they lose their natural characteristics over prolonged culturing periods. Transforming growth factor-beta (TGF-β) is a ubiquitous protein found biologically in its latent form, which prevents it from eliciting a response until conversion to its active form. In its active form, TGF-β acts as a proliferative agent in many cell lines of mesenchymal origin in vitro. This article reviews some of the important activation methods of TGF-β (physiochemical, enzyme-mediated, non-specific protein-interaction-mediated, and drug-induced) which may be established as exogenous factors in culture medium to obtain extensive proliferation of primary cells.
Abstract:
Background: This paper describes research conducted with Big hART, Australia's most awarded participatory arts company. It considers three projects, LUCKY, GOLD and NGAPARTJI NGAPARTJI, across separate sites in Tasmania, Western NSW and the Northern Territory, respectively, in order to understand project impact from the perspective of project participants, Arts workers, community members and funders. Methods: Semi-structured interviews were conducted with 29 respondents. The data were coded thematically and analysed using the constant comparative method of qualitative data analysis. Results: Seven broad domains of change were identified: psychosocial health; community; agency and behavioural change; the Art; economic effect; learning and identity. Conclusions: Experiences of participatory arts are interrelated in an ecology of practice that is iterative, relational, developmental, temporal and contextually bound. This means that questions of impact are contingent: there is no one path that participants travel, and no single measure can adequately capture the richness and diversity of experience. Consequently, it is the productive tensions between the domains of change, and the way they are animated through Arts practice, that provide signposts towards the impact of Big hART projects.
Abstract:
The safe working lifetime of a structure in a corrosive or other harsh environment is frequently limited not by the material itself but by the integrity of the coating material. Advanced surface coatings are usually crosslinked organic polymers, such as epoxies and polyurethanes, which must not shrink, crack or degrade when exposed to environmental extremes. While standard test methods for the environmental durability of coatings have been devised, the tests are structured more towards determining the end of life than towards anticipating degradation. We have been developing prognostic tools to anticipate coating failure using a fundamental understanding of coating degradation behaviour, which, depending on the polymer structure, is mediated through hydrolytic or oxidative processes. Fourier transform infrared spectroscopy (FTIR) is a widely used laboratory technique for the analysis of polymer degradation, and with the development of portable FTIR spectrometers, new opportunities have arisen to measure polymer degradation non-destructively in the field. For IR reflectance sampling, both diffuse (scattered) and specular (direct) reflections can occur. The complexity in these spectra has provided interesting opportunities to study surface chemical and physical changes during paint curing, service abrasion and weathering, but has often required the use of advanced statistical analysis methods, such as chemometrics, to discern these changes. Results from our studies using this and related techniques, and the technical challenges that have arisen, will be presented.
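A typical chemometric step of the kind mentioned above (not the authors' actual pipeline) is principal component analysis of a matrix of spectra, so that subtle, correlated band changes from degradation show up as movement along the leading components. The sketch below uses synthetic spectra with a slowly growing band near 1715 cm⁻¹, a common carbonyl oxidation marker; the band choice and all parameters are illustrative assumptions.

```python
# Hedged sketch: PCA of FTIR spectra (rows = samples at increasing exposure
# time, columns = wavenumbers) to track a growing degradation band.

import numpy as np

def pca_scores(spectra: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project mean-centred spectra onto their leading principal components."""
    centred = spectra - spectra.mean(axis=0)
    # SVD of the mean-centred data matrix yields the PCA loadings in vt.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

# Toy data: 10 'spectra' of 500 points with a slowly growing carbonyl-like band.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(4000, 400, 500)
band = np.exp(-((wavenumbers - 1715) / 20.0) ** 2)   # illustrative oxidation marker
spectra = np.array(
    [0.05 * t * band + 0.01 * rng.standard_normal(500) for t in range(10)]
)
print(pca_scores(spectra)[:, 0])   # PC1 score grows with exposure time t
```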