923 results for Measuring methods
Abstract:
Varying the spatial distribution of applied nitrogen (N) fertilizer to match demand in crops has been shown to increase profits in Australia. Better matching the timing of N inputs to plant requirements has been shown to improve nitrogen use efficiency and crop yields, and could reduce nitrous oxide emissions from broadacre grains. Farmers in the wheat production area of south-eastern Australia are increasingly splitting N application, with the second timing applied at stem elongation (Zadoks 30). Spectral indices have shown the ability to detect crop canopy N status, but a robust method using a consistent calibration that functions across seasons has been lacking. One spectral index, the canopy chlorophyll content index (CCCI), designed to detect canopy N using three wavebands along the "red edge" of the spectrum, was combined with the canopy nitrogen index (CNI), which was developed to normalize for crop biomass and correct for the N dilution effect of crop canopies. The CCCI-CNI approach was applied in a 3-year study to develop a single calibration derived from a wheat crop sown in research plots near Horsham, Victoria, Australia. The index predicted canopy N (g m⁻²) from Zadoks 14-37 with an r² of 0.97 and RMSE of 0.65 g N m⁻² when dry-weight biomass per unit area was also considered. We suggest that remote estimates of N use N per unit area as the metric; direct reference to canopy %N is not an appropriate way to estimate plant N concentration without first accounting for the N dilution effect. This approach provides a link to crop development rather than creating a purely numerical relationship. The sole biophysical input, biomass, is challenging to quantify robustly via spectral methods. Combining remote sensing with crop modelling could provide a robust method for estimating biomass and therefore a way to estimate canopy N remotely. Future research will explore this, along with active and passive sensor technologies, for targeted N management in precision farming.
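For readers unfamiliar with the index formulation, a minimal sketch of how a CCCI-style value is computed from red, red-edge and NIR reflectances follows. The linear NDRE bounds (intercept, slope pairs) are hypothetical placeholders, not the calibration developed in this study; in practice they are fitted to the sensor- and crop-specific data cloud.

```python
# Minimal sketch of the canopy chlorophyll content index (CCCI) from
# red, red-edge and NIR reflectances. The NDRE bounds below are
# illustrative placeholders, not a published calibration.

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def ndre(red_edge, nir):
    return (nir - red_edge) / (nir + red_edge)

def ccci(red, red_edge, nir, lower=(0.05, 0.1), upper=(0.2, 0.8)):
    """CCCI = (NDRE - NDRE_min) / (NDRE_max - NDRE_min), with the
    min/max bounds taken as linear functions of NDVI given as
    (intercept, slope) pairs -- hypothetical coefficients here."""
    v = ndvi(red, nir)
    ndre_min = lower[0] + lower[1] * v
    ndre_max = upper[0] + upper[1] * v
    return (ndre(red_edge, nir) - ndre_min) / (ndre_max - ndre_min)

print(round(ccci(red=0.05, red_edge=0.30, nir=0.45), 3))
```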
Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. To characterize ignition, related parameters such as ignition time, ease of ignition, flash-ignition temperature, and self-ignition temperature are measured. For studying flame propagation, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining parameters such as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system that can be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen concentration (oxygen index, OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no single small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation; the conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
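As one concrete example of the smoke parameters listed above, specific optical density is commonly computed from light transmittance in a smoke-density chamber. The sketch below assumes the standard Ds = (V / (A·L)) · log10(100/T) form used in NBS-chamber-style tests (e.g. ASTM E662); the default geometry values are illustrative, not normative.

```python
import math

def specific_optical_density(transmittance_pct,
                             chamber_volume_m3=0.51,
                             specimen_area_m2=0.00424,
                             path_length_m=0.914):
    """Specific optical density Ds = (V / (A*L)) * log10(100/T),
    where T is the percent light transmittance across the chamber.
    The default geometry here is illustrative, not normative."""
    g = chamber_volume_m3 / (specimen_area_m2 * path_length_m)
    return g * math.log10(100.0 / transmittance_pct)

# e.g. 10% transmittance = one decade of attenuation
print(round(specific_optical_density(10.0), 1))
```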
Abstract:
OBJECTIVES Based on self-reported measures, sedentary time has been associated with chronic disease and mortality. This study examined the validity of the wrist-worn GENEActiv accelerometer for measuring sedentary time (i.e. sitting and lying) by posture classification during waking hours in free-living adults. DESIGN Fifty-seven participants (age 18-55 years; 52% male) were recruited using convenience sampling from a large metropolitan Australian university. METHODS Participants wore a GENEActiv accelerometer on their non-dominant wrist and an activPAL device attached to their right thigh for 24 h (00:00 to 23:59:59). Pearson's correlation coefficient was used to examine the convergent validity of the GENEActiv and the activPAL for estimating total sedentary time during waking hours. Agreement was illustrated using Bland and Altman plots, and intra-individual agreement for posture was assessed with the Kappa statistic. RESULTS Estimates of average total sedentary time over 24 h were 623 (SD 103) min/day from the GENEActiv and 626 (SD 123) min/day from the activPAL, with an intraclass correlation coefficient of 0.80 (95% confidence interval 0.68-0.88). Bland and Altman plots showed slight underestimation of mean total sedentary time for the GENEActiv relative to the activPAL (mean difference: -3.44 min/day), with moderate limits of agreement (-144 to 137 min/day). Mean Kappa for posture was 0.53 (SD 0.12), indicating moderate agreement for this sample at the individual level. CONCLUSIONS The estimation of sedentary time by posture classification with the wrist-worn GENEActiv accelerometer was comparable to the activPAL. The GENEActiv may provide an alternative, easy-to-wear, device-based measure for descriptive estimates of sedentary time in population samples.
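A minimal sketch of the Bland and Altman agreement computation used above (bias and 95% limits of agreement for paired daily sedentary-time estimates); the data here are synthetic, generated only to mimic the scale of the reported values.

```python
import numpy as np

def bland_altman(device_a, device_b):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measures, as in a Bland-Altman analysis."""
    a, b = np.asarray(device_a, float), np.asarray(device_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
truth = rng.normal(620, 100, 57)            # synthetic activPAL min/day
geneactiv = truth + rng.normal(-3, 70, 57)  # synthetic wrist estimates
bias, (lo, hi) = bland_altman(geneactiv, truth)
print(f"bias {bias:.1f} min/day, LoA {lo:.0f} to {hi:.0f} min/day")
```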
Abstract:
Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals. In a number of cases, they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variance between individuals. They are also typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNVs. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidates for previously unknown inversions and deletions, and correctly detecting known rearrangements of these types.
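A toy version of the kind of haplotype-frequency EM described above, restricted to two biallelic SNPs, is sketched below. It is illustrative rather than the thesis implementation (all names are mine): only the double heterozygote has ambiguous phase, and the E-step splits its expected haplotype counts between the two possible phasings.

```python
from collections import Counter

def em_haplotype_freqs(genotypes, iters=100):
    """Toy EM estimate of two-SNP haplotype frequencies from unphased
    genotypes, given as pairs of minor-allele counts (0/1/2). Only the
    double heterozygote (1, 1) has ambiguous phase."""
    def resolved(g):
        # allele pair per site; a heterozygous site has one 0 and one 1
        return [(0, 0), (0, 1), (1, 1)][g]

    freqs = {h: 0.25 for h in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    for _ in range(iters):
        counts = Counter()
        for g1, g2 in genotypes:
            if (g1, g2) == (1, 1):
                w_cis = freqs[(0, 0)] * freqs[(1, 1)]    # phasing {00, 11}
                w_trans = freqs[(0, 1)] * freqs[(1, 0)]  # phasing {01, 10}
                p = w_cis / (w_cis + w_trans)
                for h, w in [((0, 0), p), ((1, 1), p),
                             ((0, 1), 1 - p), ((1, 0), 1 - p)]:
                    counts[h] += w
            else:
                a, b = resolved(g1), resolved(g2)
                counts[(a[0], b[0])] += 1
                counts[(a[1], b[1])] += 1
        total = 2 * len(genotypes)
        freqs = {h: counts[h] / total for h in freqs}
    return freqs

print(em_haplotype_freqs([(0, 0), (1, 1), (2, 2), (1, 0), (1, 1)]))
```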
Abstract:
Free and Open Source Software (FOSS) has gained increased interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. Information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, projects vary in their growth rate, complexity, modularity and team structure.
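As an illustration of automatic process-metric collection from one public FOSS information space (the version control history), a minimal sketch follows; it assumes only that git is on the PATH and is not tied to the quality model constructed in the thesis.

```python
import subprocess
from collections import Counter

def monthly_commit_stats(repo_path="."):
    """Toy process-metric collector: commits and distinct authors per
    month, extracted from a project's git history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ad|%an",
         "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True).stdout
    commits, authors = Counter(), {}
    for line in log.splitlines():
        month, author = line.split("|", 1)
        commits[month] += 1
        authors.setdefault(month, set()).add(author)
    return {m: (commits[m], len(authors[m])) for m in sorted(commits)}

for month, (n_commits, n_authors) in monthly_commit_stats().items():
    print(month, n_commits, "commits,", n_authors, "authors")
```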
Abstract:
This ‘how to’ guide provides readers with methods to measure the performance and energy efficiency of fans installed in meat chicken sheds. These methods are also useful for identifying fans that are under-performing or require maintenance. For more information about fan energy efficiency, a complementary report is available on the RIRDC website, ‘Review of fan efficiency in meat chicken sheds’ (RIRDC Publication No. 15/018). A spreadsheet was also developed under this project for comparing and ranking fans against others in terms of energy efficiency, air flow and costs (the ‘Tunnel Ventilation Fan Comparison Spreadsheet’); it is available on the RIRDC website.
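As a sketch of the underlying efficiency calculation, a fan's ventilation efficiency can be expressed as airflow delivered per unit of electrical input; the figures below are hypothetical, not taken from the guide or spreadsheet.

```python
def fan_efficiency(airflow_m3_per_h, power_kw):
    """Ventilation efficiency ratio: cubic metres of air moved per
    hour per kilowatt of electrical input, a simple basis for
    comparing and ranking tunnel fans."""
    return airflow_m3_per_h / power_kw

# hypothetical tunnel fan: 38,000 m3/h measured at 1.9 kW input
print(f"{fan_efficiency(38_000, 1.9):,.0f} m3/h per kW")
```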
Abstract:
The INFORMAS food prices module proposes a step-wise framework to measure the cost and affordability of population diets. The price differential and the tax component of healthy and less healthy foods, food groups, meals and diets will be benchmarked and monitored over time. Results can be used to model or assess the impact of fiscal policies, such as ‘fat taxes’ or subsidies. Key methodological challenges include: defining healthy and less healthy foods, meals, diets and commonly consumed items; including costs of alcohol, takeaways, convenience foods and time; selecting the price metric; sampling frameworks; and standardizing collection and analysis protocols. The minimal approach uses three complementary methods to measure the price differential between pairs of healthy and less healthy foods. Specific challenges include choosing policy-relevant pairs and defining an anchor for the lists. The expanded approach measures the cost of a healthy diet compared to the current (less healthy) diet for a reference household. It requires dietary principles to guide the development of the healthy diet pricing instrument and sufficient information about the population’s current intake to inform the current (less healthy) diet tool. The optimal approach includes measures of affordability and requires a standardised measure of household income that can be used for different countries. The feasibility of implementing the protocol in different countries is being tested in New Zealand, Australia and Fiji. The impact of the different decision points used to address these challenges will be investigated systematically. We will present early insights and results from this work.
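A minimal sketch of the two headline quantities follows: the price differential between a matched healthy/less-healthy pair (minimal approach) and diet affordability as a share of household income (optimal approach). All prices and incomes below are invented for illustration.

```python
def price_differential(healthy_price, less_healthy_price):
    """Relative price gap between a matched healthy/less-healthy pair
    (per comparable unit); positive means the healthy option costs more."""
    return (healthy_price - less_healthy_price) / less_healthy_price

def affordability(diet_cost_per_week, household_income_per_week):
    """Diet cost as a share of household income, an affordability
    measure in the spirit of the optimal approach."""
    return diet_cost_per_week / household_income_per_week

print(f"{price_differential(4.50, 3.20):+.0%}")    # hypothetical pair
print(f"{affordability(380, 1200):.0%} of income")  # hypothetical household
```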
Abstract:
Background: Expressed emotion (EE) captures the affective quality of the relationship between family caregivers and their care recipients and is known to increase the risk of poor health outcomes for caregiving dyads. Little is known about expressed emotion in the context of caregiving for persons with dementia, especially in non-Western cultures. The Family Attitude Scale (FAS) is a psychometrically sound self-report measure of EE. Its use in the examination of caregiving for patients with dementia has not yet been explored.
Objectives: This study was performed to examine the psychometric properties of the Chinese version of the FAS (FAS-C) in Chinese caregivers of relatives with dementia, and its validity in predicting severe depressive symptoms among the caregivers.
Methods: The FAS was translated into Chinese using Brislin's model. Two expert panels evaluated the semantic equivalence and content validity of this Chinese version (FAS-C), respectively. A total of 123 Chinese primary caregivers of relatives with dementia were recruited from three elderly community care centers in Hong Kong. The FAS-C was administered with the Chinese versions of the 5-item Mental Health Inventory (MHI-5), the Zarit Burden Interview (ZBI) and the Revised Memory and Behavioral Problem Checklist (RMBPC).
Results: The FAS-C had excellent semantic equivalence with the original version and a content validity index of 0.92. Exploratory factor analysis identified a three-factor structure for the FAS-C (hostile acts, criticism and distancing). Cronbach's alpha of the FAS-C was 0.92. Pearson's correlation indicated significant associations between a higher FAS-C score and greater caregiver burden (r = 0.66, p < 0.001), poorer mental health of the caregivers (r = −0.65, p < 0.001) and a higher level of dementia-related symptoms (frequency of symptoms: r = 0.45, p < 0.001; symptom disturbance: r = 0.51, p < 0.001), supporting its construct validity. For detecting severe depressive symptoms in the family caregivers, the receiver operating characteristic (ROC) curve had an area under the curve of 0.78 (95% confidence interval (CI) = 0.69–0.87, p < 0.0001). The optimal cut-off score was >47, with a sensitivity of 0.720 (95% CI = 0.506–0.879) and specificity of 0.742 (95% CI = 0.643–0.826).
Conclusions: The FAS-C is a reliable and valid measure for assessing the affective quality of the relationship between Chinese caregivers and their relatives with dementia. It also has acceptable predictive ability in identifying family caregivers with severe depressive symptoms.
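For reference, the internal-consistency statistic reported above can be computed as follows; this is the standard Cronbach's alpha formula applied to a synthetic score matrix, not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    x = np.asarray(items, float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(123, 1))                       # synthetic trait
scores = latent + rng.normal(scale=0.7, size=(123, 30))  # synthetic items
print(round(cronbach_alpha(scores), 2))
```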
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, encoded in delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
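A toy illustration of the destriping idea mentioned above: model the time-ordered data (TOD) as sky signal plus a constant offset per baseline chunk plus white noise, then alternate between binning a map and re-estimating the offsets. This is a sketch of the general approach, not any experiment's pipeline.

```python
import numpy as np

def destripe(tod, pixels, n_pix, chunk, iters=50):
    """Toy destriper: alternately bin a sky map from offset-cleaned TOD
    and re-estimate one constant baseline offset per chunk."""
    baselines = np.zeros(len(tod) // chunk)
    for _ in range(iters):
        cleaned = tod - np.repeat(baselines, chunk)
        hits = np.bincount(pixels, minlength=n_pix)
        sky = np.bincount(pixels, weights=cleaned, minlength=n_pix) / hits
        resid = (tod - sky[pixels]).reshape(-1, chunk)
        baselines = resid.mean(axis=1)
        baselines -= baselines.mean()  # absolute offset is degenerate
    return sky, baselines

rng = np.random.default_rng(2)
n_pix, chunk, n = 32, 64, 64 * 200
pix = rng.integers(0, n_pix, n)
true_sky = rng.normal(size=n_pix)
drift = np.repeat(rng.normal(scale=2.0, size=n // chunk), chunk)
tod = true_sky[pix] + drift + rng.normal(scale=0.1, size=n)
sky, _ = destripe(tod, pix, n_pix, chunk)
err = (sky - sky.mean()) - (true_sky - true_sky.mean())
print("rms map error:", round(np.std(err), 3))
```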
Abstract:
Myotonic dystrophies type 1 (DM1) and type 2 (DM2) are the most common forms of muscular dystrophy affecting adults. They are autosomal dominant diseases caused by microsatellite tri- or tetranucleotide repeat expansion mutations in transcribed but not translated gene regions. The mutant RNA accumulates in nuclei, disturbing the expression of several genes. The more recently identified DM2 disease is less well known, yet more than 300 patients have been confirmed in Finland thus far, and the true number is believed to be much higher. DM1 and DM2 share some features in general clinical presentation and molecular pathology, yet they show distinctive differences, including disease severity and differential muscle and fiber type involvement. However, the molecular differences underlying DM1 and DM2 muscle pathology are not well understood. Although the primary tissue affected is muscle, both DMs show a multisystemic phenotype due to wide expression of the mutation-carrying genes. DM2 is particularly intriguing, as it shows an extraordinarily wide spectrum of clinical manifestations. For this reason, it constitutes a real diagnostic challenge. The core symptoms in DM2 include proximal muscle weakness, muscle pain, myotonia, cataracts, cardiac conduction defects and endocrinological disturbances; however, none of these is mandatory for the disease. Myalgic pains may be the most disabling symptom for decades, sometimes leading to incapacity for work. In addition, DM2 may cause major socio-economic consequences for the patient, if not diagnosed, due to misunderstanding and false stigmatization. In this thesis work, we have (I) improved DM2 differential diagnostics based on muscle biopsy, and (II) described abnormalities in mRNA and protein expression in DM1 and DM2 patient skeletal muscles, showing partial differences between the two diseases, which may contribute to muscle pathology in these diseases. This is the first description of histopathological differences between DM1 and DM2, which can be used in differential diagnostics. Two novel high-resolution applications of in situ hybridization are described, which can be used for direct visualization of the DM2 mutation in muscle biopsy sections, or for mutation size determination on extended DNA fibers. By measuring protein and mRNA expression in the samples, differential changes in expression patterns affecting contractile proteins, other structural proteins and calcium-handling proteins were found in DM2 compared to DM1. The dysregulation at the mRNA level was caused by altered transcription and abnormal splicing. The findings reported here indicate that the extent of aberrant splicing is higher in DM2 than in DM1. In addition, the described abnormalities correlate to some extent with the differences in fiber type involvement in the two disorders.
Abstract:
The analysis of lipid compositions from biological samples has become increasingly important. Lipids have a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. Moreover, the mechanisms regulating cell membrane lipid compositions are poorly understood, partially because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, differences in the instrument response of different lipids and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of this thesis is the development of methods for mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentration affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids. The fourth article applies the methods developed by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
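To make the second article's problem concrete, the sketch below computes a theoretical isotope peak distribution by naive per-atom convolution of natural-abundance vectors on a nominal-mass grid. This baseline method is illustrative only; the thesis describes its own algorithm, and a full treatment would track exact isotope masses.

```python
import numpy as np

# Natural isotopic abundances (approximate), indexed by nominal-mass
# offset from the lightest isotope of each element.
ABUNDANCE = {
    "C": [0.9893, 0.0107],
    "H": [0.999885, 0.000115],
    "N": [0.99636, 0.00364],
    "O": [0.99757, 0.00038, 0.00205],
    "P": [1.0],
}

def isotope_pattern(formula, keep=6):
    """Theoretical isotope peak distribution of a molecule by repeated
    convolution of per-element abundance vectors; `formula` is a dict
    like {"C": 42, "H": 82, ...}. Normalized over the kept peaks."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ABUNDANCE[element])
    dist = dist[:keep]
    return dist / dist.sum()

# e.g. a phosphatidylcholine-like composition (illustrative formula)
print(np.round(isotope_pattern({"C": 42, "H": 82, "N": 1, "O": 8, "P": 1}), 4))
```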
Abstract:
A study of environmental chloride and groundwater balance has been carried out to estimate their relative value for measuring average groundwater recharge in a humid climatic environment with a relatively shallow water table. The hybrid water fluctuation method allowed the hydrologic year to be split into two seasons, recharge (wet season) and no recharge (dry season), first to appraise specific yield during the dry season and, second, to estimate recharge from the water table rise during the wet season. This well-established and suitable method was then used as a standard to assess the effectiveness of the chloride method in a humid forested environment. An effective specific yield of 0.08 was obtained for the study area. It reflects an effective basin-wide process and is insensitive to local heterogeneities in the aquifer system. The hybrid water fluctuation method gives an average recharge of 87.14 mm/year at the basin scale, which represents 5.7% of the annual rainfall. Recharge estimated with the chloride method varies between 16.24 and 236.95 mm/year, with an average value of 108.45 mm/year, representing 7% of the mean annual precipitation. The discrepancy between the recharge values estimated by the hybrid water fluctuation and chloride mass balance methods is substantial, which may indicate that the chloride mass balance method is ineffective in this humid environment.
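For reference, the two estimators being compared have simple standard forms (a sketch in conventional notation, not the paper's own symbols):

```latex
% Water-table fluctuation estimate: specific yield S_y times the
% wet-season water-table rise \Delta h.
R_{\mathrm{WTF}} = S_y \, \Delta h
% Chloride mass balance: precipitation P with chloride concentration
% C_P, divided by the groundwater chloride concentration C_{GW}.
R_{\mathrm{CMB}} = \frac{P \, C_{P}}{C_{GW}}
```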
Abstract:
This work analyses the influence of several design methods on the degree of creativity of the design outcome. A design experiment was carried out in which the participants were divided into four teams of three members, and each team was asked to work applying a different design method. The selected methods were Brainstorming, Functional Analysis and the SCAMPER method. The 'degree of creativity' of each design outcome is assessed by means of a questionnaire offered to a number of experts and by means of three different metrics: the metric of Moss, the metric of Sarkar and Chakrabarti, and the evaluation of innovative potential. The three metrics share the property of measuring creativity as a combination of the degree of novelty and the degree of usefulness. The results show that Brainstorming provides more creative outcomes than when no method is applied, while this is not demonstrated for SCAMPER and Functional Analysis.
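As a purely illustrative sketch of the property shared by the three metrics (creativity as a combination of novelty and usefulness), one might combine normalized ratings as below; each published metric defines its own, different combination, and the weighting here is hypothetical.

```python
def creativity_score(novelty, usefulness, w_novelty=0.5):
    """Generic creativity measure combining novelty and usefulness
    ratings on [0, 1] scales -- an illustrative weighted form, not
    any of the three published metrics."""
    return w_novelty * novelty + (1 - w_novelty) * usefulness

# hypothetical expert ratings for one design outcome
print(creativity_score(novelty=0.8, usefulness=0.6))
```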
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa.
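For reference, the RILEM work-of-fracture measurement evaluates the specific fracture energy from the work done in fracturing the specimen divided by the ligament area; a sketch in conventional notation (the symbols are generic, not the paper's):

```latex
% W_0: area under the measured load-deflection curve,
% m g \delta_0: self-weight correction at final deflection \delta_0,
% B (D - a): ligament area (width times unnotched depth).
G_f = \frac{W_0 + m g \, \delta_0}{B \, (D - a)}
```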
Assessment of Microscale Test Methods of Peeling and Splitting along Surface of Thin-Film/Substrates
Abstract:
Peel test methods are assessed by applying them to a peeling analysis of a ductile film/ceramic substrate system. When the fracture work of the system is computed using either the beam bend model (BB model) or the general plane analysis model (GPA model), surprisingly, a large difference between the two models' results is found. Although the BB model can capture the plastic dissipation phenomenon for the ductile film case as the GPA model can, it is much more sensitive to the choice of the peeling criterion parameters, and it overestimates the plastic bending effect because it cannot capture crack-tip constraint plasticity. In view of the difficulty of measuring interfacial toughness with the peel test method when the film is ductile, a new test method, the split test, is recommended and analyzed using the GPA model. The prediction is applied to a wedge-loaded experiment on an Al-alloy double-cantilever beam reported in the literature.
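For context, the steady-state energy release rate extracted from an elastic peel test is commonly written in the Kendall form below (a sketch in generic notation; for ductile films, the additional plastic dissipation is precisely what makes the BB and GPA models diverge):

```latex
% P: peel force, b: film width, \theta: peel angle,
% E: film modulus, h: film thickness (elastic film only).
G = \frac{P}{b}\left(1 - \cos\theta\right) + \frac{P^{2}}{2\,b^{2} E\, h}
```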