309 results for field methods
Abstract:
This position paper provides an overview of work conducted, and an outlook on future directions, within the field of Information Retrieval (IR) that aims to develop novel models, methods and frameworks inspired by Quantum Theory (QT).
Abstract:
Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, and treated depth, the basis for the regression model, as measured with error, while fitting CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
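The neighbourhood idea behind a lattice CAR model can be sketched in a few lines of Python: under an intrinsic CAR prior, the conditional mean of a site is the average of its neighbours' values. The rook neighbourhood rule and the toy moisture grid below are illustrative assumptions, not the paper's data or its WinBUGS code.

```python
# Minimal sketch of a first-order (rook) CAR neighbourhood on a rows x cols
# lattice: the conditional mean of a site is the average of its neighbours.
# The lattice size and moisture values are invented for illustration.

def neighbours(r, c, rows, cols):
    """Return rook (north/south/east/west) neighbours of lattice site (r, c)."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]

def car_conditional_mean(values, r, c):
    """Conditional mean of site (r, c) given its rook neighbours."""
    rows, cols = len(values), len(values[0])
    nb = neighbours(r, c, rows, cols)
    return sum(values[i][j] for i, j in nb) / len(nb)

# Toy 3 x 3 grid of moisture values for one depth layer.
moisture = [[1.0, 2.0, 3.0],
            [4.0, 5.0, 6.0],
            [7.0, 8.0, 9.0]]
print(car_conditional_mean(moisture, 1, 1))  # mean of 2, 4, 6, 8 -> 5.0
```

The layered CAR model described above would fit one such neighbourhood structure per depth layer, with its own spatial variance component.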
Abstract:
Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students studying in a Bachelor of Business degree, majoring in marketing, taking the Digital Promotions subject at a metropolitan university in Queensland, Australia participated in the study. The samples included students who had and had not experienced the field trip. The numbers of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and lecturer suggested that an innovative learning opportunity had been developed. 
However, pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.
Abstract:
Purpose. The Useful Field of View (UFOV®) test has been shown to be highly effective in predicting crash risk among older adults. An important question which we examined in this study is whether this association is due to the ability of the UFOV to predict difficulties in attention-demanding driving situations that involve either visual or auditory distracters. Methods. Participants included 92 community-living adults (mean age 73.6 ± 5.4 years; range 65–88 years) who completed all three subtests of the UFOV involving assessment of visual processing speed (subtest 1), divided attention (subtest 2), and selective attention (subtest 3); driving safety risk was also classified using the UFOV scoring system. Driving performance was assessed separately on a closed-road circuit while driving under three conditions: no distracters, visual distracters, and auditory distracters. Driving outcome measures included road sign recognition, hazard detection, gap perception, time to complete the course, and performance on the distracter tasks. Results. Those rated as safe on the UFOV (safety rating categories 1 and 2), as well as those responding faster than the recommended cut-off on the selective attention subtest (350 msec), performed significantly better in terms of overall driving performance and also experienced less interference from distracters. Of the three UFOV subtests, the selective attention subtest best predicted overall driving performance in the presence of distracters. Conclusions. Older adults who were rated as higher risk on the UFOV, particularly on the selective attention subtest, demonstrated the poorest driving performance in the presence of distracters. This finding suggests that the selective attention subtest of the UFOV may be differentially more effective in predicting driving difficulties in situations of divided attention which are commonly associated with crashes.
Abstract:
Conducting research into crime and criminal justice carries unique challenges. This Handbook focuses on the application of 'methods' to address the core substantive questions that currently motivate contemporary criminological research. It maps a canon of methods that are more elaborated than in most other fields of social science, and the intellectual terrain of research problems with which criminologists are routinely confronted. Drawing on exemplary studies, chapters in each section illustrate the techniques (qualitative and quantitative) that are commonly applied in empirical studies, as well as the logic of criminological enquiry. Organized into five sections, each prefaced by an editorial introduction, the Handbook covers:
• Crime and Criminals
• Contextualizing Crimes in Space and Time: Networks, Communities and Culture
• Perceptual Dimensions of Crime
• Criminal Justice Systems: Organizations and Institutions
• Preventing Crime and Improving Justice
Edited by leaders in the field of criminological research, and with contributions from internationally renowned experts, The SAGE Handbook of Criminological Research Methods is set to become the definitive resource for postgraduates, researchers and academics in criminology, criminal justice, policing, law, and sociology.
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines of the biological sciences. Such variability is uncovered by implementation of multivariable data mining techniques, which fall under two primary categories: machine learning strategies and statistics-based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n ≪ p constraint, and as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach where not only is the importance of each individual protein explicit, but the proteins are combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
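The n ≪ p workflow described above, i.e. reduce dimensionality first, then classify in the reduced space, can be sketched briefly. The synthetic data, the SVD-based PCA step and the nearest-centroid classifier below are illustrative stand-ins, not the paper's actual pipeline or datasets.

```python
import numpy as np

# Sketch of an n << p pipeline: PCA via SVD for dimension reduction,
# then a simple nearest-centroid classifier in the reduced space.
# Data are synthetic; class signal is injected into the first 50 features.

rng = np.random.default_rng(0)
n, p, k = 20, 500, 3                     # few samples, many "protein" features
X = rng.normal(size=(n, p))
y = np.array([0, 1] * (n // 2))
X[y == 1, :50] += 2.0                    # class separation in first 50 features

# PCA: centre the data, then project onto the top-k right singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                   # n x k reduced representation

# Nearest-centroid classification in the reduced space.
centroids = np.stack([scores[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```

A model-based alternative, as the abstract argues, would keep the loadings interpretable so that each protein's contribution to the classification rule stays explicit.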
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models. Accuracy of critical gap estimation affects accuracy of capacity and delay estimation. Several methods have been published to estimate drivers’ sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulated mean critical gap is varied between 3 s and 8 s, while the offered gap rate is varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close to perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to the MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor stream and major stream flow conditions for a variety of minor stream movement types, to compare critical gap estimates using MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption to estimate critical gap parameters in guidelines.
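The event-based draw of a maximum rejected gap and an accepted gap per driver can be sketched as follows. The exponential headways, the normally distributed critical gaps and all parameter values are illustrative assumptions, not the study's simulation model.

```python
import random

# Monte Carlo sketch of gap-acceptance data: each simulated driver rejects
# offered major-stream gaps shorter than their critical gap and accepts the
# first longer one; we record the maximum rejected gap and the accepted gap,
# the two quantities estimators such as MLE operate on.

def simulate_driver(critical_gap, gap_rate, rng):
    """Offer exponential gaps until one >= critical_gap is accepted."""
    max_rejected = 0.0
    while True:
        gap = rng.expovariate(gap_rate)          # offered major-stream gap (s)
        if gap >= critical_gap:
            return max_rejected, gap             # (max rejected, accepted)
        max_rejected = max(max_rejected, gap)

rng = random.Random(42)
# 300 drivers, critical gaps ~ N(4.0 s, 0.5 s), offered gap rate 0.25 veh/s.
drivers = [simulate_driver(rng.gauss(4.0, 0.5), 0.25, rng) for _ in range(300)]
accepted = [a for _, a in drivers]
print(f"mean accepted gap: {sum(accepted) / len(accepted):.2f} s")
```

By construction every maximum rejected gap is shorter than the driver's critical gap, which in turn is no longer than the accepted gap; that ordering is exactly what central-gap and likelihood-based estimators exploit.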
Abstract:
Purpose: To investigate the correlations of the global flash multifocal electroretinogram (MOFO mfERG) with common clinical visual assessments – Humphrey perimetry and Stratus circumpapillary retinal nerve fiber layer (RNFL) thickness measurement in type II diabetic patients. Methods: Forty-two diabetic patients participated in the study: ten were free from diabetic retinopathy (DR) while the remainder suffered from mild to moderate non-proliferative diabetic retinopathy (NPDR). Fourteen age-matched controls were recruited for comparison. MOFO mfERG measurements were made under high and low contrast conditions. Humphrey central 30-2 perimetry and Stratus OCT circumpapillary RNFL thickness measurements were also performed. Correlations between local values of implicit time and amplitude of the mfERG components (direct component (DC) and induced component (IC)), and perimetric sensitivity and RNFL thickness were evaluated by mapping the localized responses for the three subject groups. Results: MOFO mfERG was superior to perimetry and RNFL assessments in showing differences between the diabetic groups (with and without DR) and the controls. All the MOFO mfERG amplitudes (except IC amplitude at high contrast) correlated better with perimetry findings (Pearson’s r ranged from 0.23 to 0.36, p<0.01) than did the mfERG implicit time at both high and low contrasts across all subject groups. No consistent correlation was found between the mfERG and RNFL assessments for any group or contrast conditions. The responses of the local MOFO mfERG correlated with local perimetric sensitivity but not with RNFL thickness. Conclusion: Early functional changes in the diabetic retina seem to occur before morphological changes in the RNFL.
Abstract:
The Texas Department of Transportation (TxDOT) is concerned about the widening gap between pavement preservation needs and available funding. Thus, the TxDOT Austin District Pavement Engineer (DPE) has investigated methods to strategically allocate available pavement funding to potential projects that improve the overall performance of the District and Texas highway systems. The primary objective of the study presented in this paper is to develop a network-level project screening and ranking method that supports the Austin District 4-year pavement management plan development. The study developed candidate project selection and ranking algorithms that evaluate the pavement conditions of each candidate project using data contained in the Pavement Management Information System (PMIS) database and incorporate insights from Austin District pavement experts, and then implemented the developed method and supporting algorithms. This process previously required weeks to complete but now takes about 10 minutes, including data preparation and running the analysis algorithm, which enables the Austin DPE to devote more time and resources to conducting field visits, performing project-level evaluation and testing candidate projects. The case study results showed that the proposed method assisted the DPE in evaluating and prioritizing projects and allocating funds to the right projects at the right time.
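A network-level screening-and-ranking step of the kind described above can be sketched in a few lines: filter candidate sections by a condition threshold, then rank the survivors by a weighted score. The field names, weights, threshold and toy records below are invented for illustration; they are not TxDOT's actual PMIS attributes or decision rules.

```python
# Hypothetical screening-and-ranking sketch: screen candidates by a
# condition-score threshold, then rank worst-first by a weighted score.
# All field names, weights and values are illustrative assumptions.

candidates = [
    {"section": "A", "condition": 62, "distress": 70, "ride": 3.1},
    {"section": "B", "condition": 85, "distress": 90, "ride": 4.0},
    {"section": "C", "condition": 55, "distress": 60, "ride": 2.8},
]

def priority(p):
    """Lower combined score = worse pavement = higher funding priority."""
    return 0.5 * p["condition"] + 0.3 * p["distress"] + 0.2 * p["ride"] * 20

screened = [p for p in candidates if p["condition"] < 70]   # screening step
ranked = sorted(screened, key=priority)                     # worst first
print([p["section"] for p in ranked])
```

In practice the weights and thresholds would be calibrated against expert judgement, which is where the Austin District pavement experts' insights enter the method.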
Abstract:
Aims/hypothesis: Impaired central vision has been shown to predict diabetic peripheral neuropathy (DPN). Several studies have demonstrated diffuse retinal neurodegenerative changes in diabetic patients prior to retinopathy development, raising the prospect that non-central vision may also be compromised by primary neural damage. We hypothesise that type 2 diabetic patients with DPN exhibit visual sensitivity loss in a distinctive pattern across the visual field, compared with a control group of type 2 diabetic patients without DPN. Methods: Increment light sensitivity was measured by standard perimetry in the central 30° of visual field for two age-matched groups of type 2 diabetic patients, with and without neuropathy (n=40/30). Neuropathy status was assigned using the neuropathy disability score. Mean visual sensitivity values were calculated globally, for each quadrant and for three eccentricities (0–10°, 11–20° and 21–30°). Data were analysed using a generalised additive mixed model (GAMM). Results: Global and quadrant between-group visual sensitivity mean differences were marginally but consistently lower (by about 1 dB) in the neuropathy cohort compared with controls. Between-group mean differences increased from 0.36 to 1.81 dB with increasing eccentricity. GAMM analysis, after adjustment for age, showed these differences to be significant beyond 15° eccentricity and monotonically increasing. Retinopathy levels and disease duration were not significant factors within the model (p=0.90). Conclusions/interpretation: Visual sensitivity reduces disproportionately with increasing eccentricity in type 2 diabetic patients with peripheral neuropathy. This sensitivity reduction within the central 30° of visual field may be indicative of more consequential loss in the far periphery.
Abstract:
When wheels pass over insulated rail joints (IRJs) a vertical impact force is generated. The ability to measure the impact force is valuable as the force signature helps understand the behaviour of the IRJs, in particular their potential for failure. The impact forces are thought to be one of the main factors that cause damage to the IRJ and track components. Study of the deterioration mechanism helps find new methods to improve the service life of IRJs in track. In this research, the strain-gage-based wheel load detector is, for the first time, employed to measure the wheel–rail contact-impact force at an IRJ in a heavy haul rail line. In this technique, the strain gages are installed within the IRJ assembly without disturbing the structural integrity of the IRJ and are arranged in a full Wheatstone bridge to form a wheel load detector. The instrumented IRJ is first tested and calibrated in the lab and then installed in the field. For comparison purposes, a reference rail section is also instrumented with the same strain gage pattern as the IRJ. In this paper the measurement technique, the process of instrumentation and testing, as well as some typical data obtained from the field and the inferences drawn, are presented.
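The conversion from bridge output to load can be illustrated with a back-of-envelope sketch. For a full Wheatstone bridge whose four active arms add constructively, Vout/Vex ≈ GF × strain; the gauge factor, excitation voltage and the calibration constant below are invented for illustration, since a real wheel load detector is calibrated in the lab against known loads, as the abstract describes.

```python
# Back-of-envelope sketch: strain and load from a full Wheatstone bridge.
# Assumes four active arms whose outputs add, so Vout/Vex = GF * strain.
# All numeric values, including the calibration constant, are illustrative.

GAUGE_FACTOR = 2.0        # typical metallic foil strain gage (assumed)
V_EXCITATION = 5.0        # bridge excitation voltage in volts (assumed)

def strain_from_bridge(v_out):
    """Strain implied by the bridge output voltage (full bridge, additive arms)."""
    return v_out / (V_EXCITATION * GAUGE_FACTOR)

def force_from_strain(strain, calibration_kn_per_microstrain=0.5):
    """Convert strain to wheel load via a lab calibration constant (invented)."""
    return strain * 1e6 * calibration_kn_per_microstrain

v_out = 0.002                             # 2 mV measured bridge output
eps = strain_from_bridge(v_out)           # -> 200 microstrain
print(f"{eps * 1e6:.0f} microstrain -> {force_from_strain(eps):.0f} kN")
```

The reference rail section mentioned above serves exactly this purpose: it provides a calibrated strain-to-load mapping away from the joint, against which the IRJ impact signature can be compared.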
Abstract:
This paper documents the use of bibliometrics as a methodology to bring forth a structured, systematic and rigorous way to analyse and evaluate a range of literature. When starting out and reading broadly for my doctoral studies, one article by Trigwell and Prosser (1996b) led me to reflect about my level of comprehension as the content, concepts and methodology did not resonate with my epistemology. A disconnection between our paradigms emerged. Further reading unveiled the work by Doyle (1987) who categorised research in teaching and teacher education by three main areas: teacher characteristics, methods research and teacher behaviour. My growing concerns that there were gaps in the knowledge also exposed the difficulties in documenting said gaps. As an early researcher who required support to locate myself in the field and to find my research voice, I identified bibliometrics (Budd, 1988; Yeoh & Kaur, 2007) as an appropriate methodology to add value and rigour in three ways. Firstly, the application of bibliometrics to analyse articles is systematic, builds a picture from the characteristics of the literature, and offers a way to elicit themes within the categories. Secondly, by systematic analysis there is occasion to identify gaps within the body of work, limitations in methodology or areas in need of further research. Finally, extension and adaptation of the bibliometrics methodology, beyond citation or content analysis, to investigate the merit of methodology, participants and instruments as a determinant for research worth allowed the researcher to build confidence and contribute new knowledge to the field. Therefore, this paper frames research in the pedagogic field of Higher Education through teacher characteristics, methods research and teacher behaviour, visually represents the literature analysis and locates my research self within methods research. 
Through my research voice I will present the bibliometrics methodology, the outcomes and document the landscape of pedagogy in the field of Higher Education.
Abstract:
The purpose of this paper is to report on a methods research project investigating the evaluation of diverse teaching practice in Higher Education. The research method is a single-site case study of an Australian university, with data collected through published documents, surveys, interviews and focus groups. This project provides evidence of the wide variety of evaluation practice and diverse teaching practice across the university. This breadth identifies the need for greater flexibility of evaluation processes, tools and support to assist teaching staff to evaluate their diverse teaching practice. The employment opportunities for academics benchmark the university nationally and position the case study in the field. Finally, this reaffirms the institutional responsibility for services to support teaching staff in an ongoing manner.
Abstract:
In this paper, we present the outcomes of a project on the exploration of the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication and division. The variable precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
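For reference, the standard sequential Thomas Algorithm that the FPGA pipeline implements can be sketched in software as below. This is the textbook algorithm, not the authors' VHDL design, and the small test system is illustrative.

```python
# Software sketch of the Thomas Algorithm (TDMA) for a tridiagonal system
# with sub-diagonal a, diagonal b, super-diagonal c and right-hand side d.
# Forward elimination followed by back substitution; O(n) work overall.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a[0] and c[-1] are unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the matrix [[2,1,0],[1,2,1],[0,1,2]] with solution [1, 2, 3].
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
print(x)  # approximately [1.0, 2.0, 3.0]
```

The loop-carried dependence in both passes is what makes a single solve hard to parallelise, and why the FPGA design instead pipelines many independent systems, one per pipeline stage.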