944 results for Simple methods
Abstract:
Objective: To evaluate responses to self-administered brief questions regarding consumption of vegetables and fruit by comparison with blood levels of serum carotenoids and red-cell folate. Design: A cross-sectional study in which participants reported their usual intake of fruit and vegetables in servings per day, and serum levels of five carotenoids (α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and lycopene) and red-cell folate were measured. Serum carotenoid levels were determined by high-performance liquid chromatography, and red-cell folate by an automated immunoassay system. Settings and subjects: Between October and December 2000, a sample of 1598 adults aged 25 years and over, from six randomly selected urban centres in Queensland, Australia, were examined as part of a national study conducted to determine the prevalence of diabetes and associated cardiovascular risk factors. Results: Statistically significant (P<0.01) associations with vegetable and fruit intake (categorised into groups: ≤1 serving, 2–3 servings and ≥4 servings per day) were observed for α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and red-cell folate. The mean level of these carotenoids and of red-cell folate increased with increasing frequency of reported servings of vegetables and fruit, both before and after adjusting for potential confounding factors. A significant association with lycopene was observed only for vegetable intake before adjusting for confounders. Conclusions: These data indicate that brief questions may be a simple and valuable tool for monitoring vegetable and fruit intake in this population.
Abstract:
Double-stranded RNA (dsRNA) induces an endogenous sequence-specific RNA degradation mechanism in most eukaryotic cells. The mechanism can be harnessed to silence genes in plants by expressing self-complementary single-stranded (hairpin) RNA in which the duplexed region has the same sequence as part of the target gene's mRNA. We describe a number of plasmid vectors for generating hairpin RNAs, including those designed for high-throughput cloning, and provide protocols for their use.
Abstract:
In plant cells, DICER-LIKE4 processes perfectly double-stranded RNA (dsRNA) into short interfering (si) RNAs, and DICER-LIKE1 generates micro (mi) RNAs from primary miRNA transcripts (pri-miRNA) that form fold-back structures of imperfectly paired dsRNA. Both siRNAs and miRNAs direct the endogenous endonuclease ARGONAUTE1 to cleave complementary target single-stranded RNAs, and either small RNA (sRNA)-directed pathway can be harnessed to silence genes in plants. A routine way of inducing and directing RNA silencing by siRNAs is to express self-complementary single-stranded hairpin RNA (hpRNA), in which the duplexed region has the same sequence as part of the target gene's mRNA. Artificial miRNA (amiRNA)-mediated silencing uses an endogenous pri-miRNA in which the original miRNA/miRNA* sequence has been replaced with a sequence complementary to the new target gene. In this chapter, we describe the plasmid vector systems routinely used by our research group for the generation of either hpRNA-derived siRNAs or amiRNAs.
Abstract:
Plants transformed with Agrobacterium frequently contain T-DNA concatemers with direct-repeat (d/r) or inverted-repeat (i/r) transgene integrations, and these repetitive T-DNA insertions are often associated with transgene silencing. To facilitate the selection of transgenic lines with simple T-DNA insertions, we constructed a binary vector (pSIV) based on the principle of hairpin RNA (hpRNA)-induced gene silencing. The vector is designed so that any transformed cells that contain more than one insertion per locus should generate hpRNA against the selective marker gene, leading to its silencing. These cells should, therefore, be sensitive to the selective agent and less likely to regenerate. Results from Arabidopsis and tobacco transformation showed that pSIV gave considerably fewer transgenic lines with repetitive insertions than did a conventional T-DNA vector (pCON). Furthermore, the transgene was more stably expressed in the pSIV plants than in the pCON plants. Rescue of plant DNA flanking sequences from pSIV plants was significantly more frequent than from pCON plants, suggesting that pSIV is potentially useful for T-DNA tagging. Our results revealed a perfect correlation between the presence of tail-to-tail inverted repeats and transgene silencing, supporting the view that read-through hpRNA transcripts derived from i/r T-DNA insertions are a primary inducer of transgene silencing in plants. © CSIRO 2005.
Abstract:
A very simple leaf assay is described that rapidly and reliably identifies transgenic plants expressing the hygromycin resistance gene, hph, or the phosphinothricin resistance gene, bar. Leaf tips were cut from plants propagated either in the glasshouse or in tissue culture, and the cut surface was embedded in solid medium containing the appropriate selective agent. Non-transgenic barley or rice leaf tips showed noticeable symptoms of either bleaching or necrosis after three days on the medium and were completely bleached or necrotic after one week. Transgenic leaf tips remained green and healthy over this period, giving unambiguous discrimination between transgenic and non-transgenic plants. The leaf assay was also effective for the dicot plants tested (tobacco and peas).
Abstract:
Porn studies researchers in the humanities have tended to use different research methods from those in social sciences. There has been surprisingly little conversation between the groups about methodology. This article presents a basic introduction to textual analysis and statistical analysis, aiming to provide for all porn studies researchers a familiarity with these two quite distinct traditions of data analysis. Comparing these two approaches, the article suggests that social science approaches are often strongly reliable – but can sacrifice validity to this end. Textual analysis is much less reliable, but has the capacity to be strongly valid. Statistical methods tend to produce a picture of human beings as groups, in terms of what they have in common, whereas humanities approaches often seek out uniqueness. Social science approaches have asked a more limited range of questions than have the humanities. The article ends with a call to mix up the kinds of research methods that are applied to various objects of study.
Abstract:
Suspension bridges meet the steadily growing demand for lighter and longer bridges in today's infrastructure systems. These bridges are designed to have long life spans, but with age, their main cables and hangers can suffer from corrosion and fatigue. There is a need for a simple and reliable procedure to detect and locate such damage, so that appropriate retrofitting can be carried out to prevent bridge failure. Damage in a structure causes changes in its properties (mass, damping and stiffness), which in turn cause changes in its vibration characteristics (natural frequencies, modal damping and mode shapes). Methods based on modal flexibility, which depends on both the natural frequencies and mode shapes, have the potential for damage detection. They have been applied successfully to beam and plate elements, trusses and simple structures in reinforced concrete and steel. However, very limited applications for damage detection in suspension bridges have been identified to date. This paper examines the potential of modal flexibility methods for damage detection and localization in a suspension bridge under different damage scenarios in the main cables and hangers, using numerical simulation techniques. A validated finite element model (FEM) of a suspension bridge is used to acquire mass-normalized mode shape vectors and natural frequencies at intact and damaged states. Damage scenarios are simulated in the validated FE model by varying the stiffness of the damaged structural members. The capability of a damage index based on modal flexibility to detect and locate damage is evaluated. Results confirm that modal flexibility based methods have the ability to successfully identify damage in suspension bridge main cables and hangers.
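The modal flexibility approach described above can be sketched in a few lines: the flexibility matrix is assembled from mass-normalized mode shapes and natural frequencies, and damage is flagged where the flexibility of the damaged state deviates from the intact baseline. This is a minimal illustration only, not the paper's implementation; the function names and the use of the diagonal of the flexibility change as a damage index are assumptions.

```python
import numpy as np

def modal_flexibility(freqs_hz, mode_shapes):
    """Modal flexibility matrix F = sum_i (phi_i phi_i^T) / omega_i^2,
    where phi_i are mass-normalized mode shapes (columns of mode_shapes)
    and omega_i are natural frequencies in rad/s."""
    omegas = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    n_dof = mode_shapes.shape[0]
    F = np.zeros((n_dof, n_dof))
    for w, phi in zip(omegas, mode_shapes.T):
        F += np.outer(phi, phi) / w**2
    return F

def damage_index(freqs_intact, shapes_intact, freqs_damaged, shapes_damaged):
    """Absolute change in the diagonal of the flexibility matrix between
    the intact and damaged states; peaks suggest candidate damage locations."""
    dF = (modal_flexibility(freqs_damaged, shapes_damaged)
          - modal_flexibility(freqs_intact, shapes_intact))
    return np.abs(np.diag(dF))
```

Because flexibility terms scale with 1/omega^2, the matrix converges quickly with only the lowest few measured modes, which is what makes the method attractive for field monitoring.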
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale scenes or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
On the advanced analysis of steel frames allowing for flexural, local and lateral-torsional buckling
Abstract:
A detailed procedure for second-order analysis has been codified in the newest Eurocode 3 and the Hong Kong steel code (2005). The effective length method has been noted to be inapplicable to the analysis of shallow domes of imperfect members exhibiting snap-through buckling, to portals with leaning columns, and to other such structures. The advanced analysis, on the other hand, is not limited to buckling design of these structures. This paper demonstrates its application to the design of a simple plane sway portal and a three-dimensional non-sway steel building. The results of the advanced analysis and the first-order linear analysis are compared, and the technique for practical second-order analysis of steel structures is described. It is observed that the use of a straight element by itself cannot model the buckling resistance of columns governed by different buckling curves for hot-rolled and cold-formed sections of various shapes such as I, H and hollow sections. Also, the curvature of the conventional cubic Hermite element is not varied by the external axial force, so it cannot simulate the response of a buckling column; its use for second-order analysis is therefore basically unacceptable. A technique for additional checking of beams undergoing lateral-torsional buckling is also suggested, making the advanced analysis a complete design tool for conventional steel frames.
Abstract:
Aims: To compare different methods for identifying alcohol involvement in injury-related emergency department presentations in Queensland youth, and to explore the alcohol terminology used in triage text. Methods: Emergency Department Information System data were provided for patients aged 12-24 years with an injury-related diagnosis code for a 5-year period, 2006-2010, presenting to a Queensland emergency department (N=348895). Three approaches were used to estimate alcohol involvement: 1) analysis of coded data, 2) mining of triage text, and 3) estimation using an adaptation of alcohol attributable fractions (AAF). Cases were identified as 'alcohol-involved' by code and text, as well as AAF weighted. Results: Around 6.4% of these injury presentations overall had some documentation of alcohol involvement, with higher proportions of alcohol involvement documented for 18-24 year olds, females, indigenous youth, where presentations occurred on a Saturday or Sunday, and where presentations occurred between midnight and 5am. The most common alcohol terms identified for all subgroups were generic alcohol terms (e.g. ETOH or alcohol), with almost half of the cases where alcohol involvement was documented having a generic alcohol term recorded in the triage text. Conclusions: Emergency department data is a useful source of information for identification of high-risk sub-groups to target intervention opportunities, though it is not a reliable source of data for incidence or trend estimation in its current unstandardised form. Improving the accuracy and consistency of identification, documenting and coding of alcohol involvement at the point of data capture in the emergency department is the most desirable long-term approach to produce a more solid evidence base to support policy and practice in this field.
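The triage-text mining approach described above amounts to matching free-text notes against a lexicon of alcohol terms. The sketch below is illustrative only: the term list shown is an assumption and does not reproduce the study's actual search lexicon.

```python
import re

# Illustrative alcohol-term lexicon (hypothetical; the study's actual
# terms are not reproduced here). Generic terms such as "ETOH" and
# "alcohol" were the most commonly documented in triage text.
ALCOHOL_TERMS = [r"\betoh\b", r"\balcohol\b", r"\bintox\w*", r"\bdrunk\b"]
PATTERN = re.compile("|".join(ALCOHOL_TERMS), re.IGNORECASE)

def flag_alcohol(triage_text):
    """Return True if the free-text triage note mentions an alcohol term."""
    return bool(PATTERN.search(triage_text))
```

In practice a lexicon like this needs careful handling of negation ("denies ETOH") and misspellings, which is one reason the abstract cautions against using such data for incidence estimation in its current unstandardised form.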
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, enabling the use of a higher dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Although it possesses such advantages, this method is rather strict in its input requirements, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to benchmark structural data from the field.
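The core computation behind this kind of novelty detection can be sketched as follows: an observation's Mahalanobis squared distance from the baseline (training) feature distribution is compared against a threshold. This is a minimal sketch, not the article's proposed scheme; the function name and the percentile-based threshold are assumptions.

```python
import numpy as np

def mahalanobis_sq(X_train, x):
    """Mahalanobis squared distance of observation x from the baseline
    feature set X_train (rows = observations, columns = features)."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)  # ill-conditioned if data are poor
    diff = np.asarray(x, dtype=float) - mu
    # Solve cov @ y = diff rather than inverting cov explicitly.
    return float(diff @ np.linalg.solve(cov, diff))

def novelty_threshold(X_train, percentile=99.0):
    """Threshold set from the training distances themselves (an assumed,
    common convention); distances above it are flagged as novel."""
    d = np.array([mahalanobis_sq(X_train, row) for row in X_train])
    return float(np.percentile(d, percentile))
```

Note that the covariance estimate is exactly where the multivariate-normality and data-sufficiency assumptions discussed in the abstract bite: with too few or non-normal training samples, `cov` becomes ill-conditioned and the distances, and hence the novelty decisions, become unreliable.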
Abstract:
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large-time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation, via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made.
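For a volume-excluding birth process, the mean-field closure reduces the average lattice occupancy to a logistic ordinary differential equation, dC/dt = lambda·C·(1 − C), because the probability that a neighbouring site is empty is approximated by (1 − C). The sketch below illustrates only this mean-field closure with a simple Euler integrator; the pair-wise and one-hole approximations discussed in the abstract require additionally tracking correlations between lattice sites and are not shown. The function name and proliferation-rate symbol are assumptions.

```python
def mean_field_occupancy(lam, c0, t_end, dt=1e-3):
    """Mean-field (logistic) approximation for the average lattice
    occupancy C(t) of a volume-excluding birth process:
        dC/dt = lam * C * (1 - C),
    integrated with forward Euler from C(0) = c0 up to time t_end."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * lam * c * (1.0 - c)
    return c
```

The exact solution of this logistic equation, C(t) = c0·exp(lam·t) / (1 − c0 + c0·exp(lam·t)), provides a convenient check on the integrator, and its saturation to C = 1 reflects the volume-exclusion constraint of at most one cell per site.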
Abstract:
Background Physical symptoms are common in pregnancy and are predominantly associated with normal physiological changes. These symptoms have a social and economic cost, leading to absenteeism from work and additional medical interventions. There is currently no simple method for identifying common pregnancy-related problems in the antenatal period. A validated tool, for use by pregnancy care providers, would be useful. The aim of this study was to develop and validate a Pregnancy Symptoms Inventory (PSI) for use by health professionals. Methods A list of symptoms was generated via expert consultation with health professionals. Focus groups were conducted with pregnant women. The inventory was tested for face validity and piloted for readability and comprehension. For test-retest reliability, the tool was administered to the same women 2 to 3 days apart. Finally, midwives trialled the inventory for 1 month and rated its usefulness on a 10 cm visual analogue scale (VAS). Results A 41-item Likert inventory assessing how often symptoms occurred and what effect they had was developed. Individual item test-retest reliability was between 0.51 and 1, with the majority (34 items) scoring ≥0.70. The top four "often" reported symptoms were urinary frequency (52.2%), tiredness (45.5%), poor sleep (27.5%) and back pain (19.5%). Among the women surveyed, 16.2% claimed to sometimes or often be incontinent. Referrals to the incontinence nurse increased more than 8-fold during the study period. Conclusions The PSI provides a comprehensive inventory of pregnancy-related symptoms, with a mechanism for assessing their effect on function. It was robustly developed, with good test-retest reliability, face validity, comprehension and readability. This provides a validated tool for assessing the impact of interventions in pregnancy.
Abstract:
Purpose To examine choroidal thickness (ChT) and its topographical variation across the posterior pole in myopic and non-myopic children. Methods One hundred and four children aged 10-15 years (mean age 13.1 ± 1.4 years) had ChT measured using enhanced depth imaging optical coherence tomography (OCT). Forty-one children were myopic (mean spherical equivalent -2.4 ± 1.5 D) and 63 non-myopic (mean +0.3 ± 0.3 D). Two series of 6 radial OCT line scans centred on the fovea were assessed for each child. Subfoveal ChT and ChT across a series of parafoveal zones over the central 6 mm of the posterior pole were determined through manual image segmentation. Results Subfoveal ChT was significantly thinner in myopes (mean 303 ± 79 µm) compared to non-myopes (mean 359 ± 77 µm) (p<0.0001). Multiple regression analysis revealed that both refractive error (r = 0.39, p<0.001) and age (r = 0.21, p = 0.02) were positively associated with subfoveal ChT. ChT also exhibited significant topographical variations, with the choroid being thicker in more central regions. The thinnest choroid was typically observed in nasal (mean 286 ± 77 µm) and inferior-nasal (306 ± 79 µm) locations, and the thickest in superior (346 ± 79 µm) and superior-temporal (341 ± 74 µm) locations. The difference in ChT between myopic and non-myopic children was significantly greater in central foveal regions compared to more peripheral regions (>3 mm diameter) (p<0.001). Conclusions Myopic children have significantly thinner choroids compared to non-myopic children of similar age, particularly in central foveal regions. The magnitude of the difference in choroidal thickness associated with myopia appears greater than would be predicted by simple passive choroidal thinning with axial elongation.