106 results for Experience Sampling Methods
at Queensland University of Technology - ePrints Archive
Abstract:
This paper discusses the choice to use two less conventional or “interesting” research methods, Q Methodology and the Experience Sampling Method, rather than the “status quo” research methods so common in the marketing discipline. It is argued that such methods have value for marketing academics because they widen the potential for discovery. The paper outlines these two research methods, providing examples of how they have been used from an experiential consumption perspective. Additionally, the paper identifies some of the challenges faced when trying to publish research that uses such less conventional methods, and offers suggestions to address them.
Abstract:
This paper reports the feasibility and methodological considerations of using the Short Message System Experience Sampling (SMS-ES) Method, which is an experience sampling research method developed to assist researchers to collect repeat measures of consumers’ affective experiences. The method combines SMS with web-based technology in a simple yet effective way. It is described using a practical implementation study that collected consumers’ emotions in response to using mobile phones in everyday situations. The method is further evaluated in terms of the quality of data collected in the study, as well as against the methodological considerations for experience sampling studies. These two evaluations suggest that the SMS-ES Method is both a valid and reliable approach for collecting consumers’ affective experiences. Moreover, the method can be applied across a range of for-profit and not-for-profit contexts where researchers want to capture repeated measures of consumers’ affective experiences occurring over a period of time. The benefits of the method are discussed to assist researchers who wish to apply the SMS-ES Method in their own research designs.
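The core scheduling idea behind an experience sampling protocol such as the one the SMS-ES Method automates can be sketched in a few lines. This is a hypothetical illustration only: the number of days, prompts per day, prompting window and minimum gap are invented, not taken from the paper.

```python
import random

# Hypothetical signal-contingent experience-sampling schedule: a few
# SMS prompt times drawn at random within a daily waking-hours window,
# with a minimum gap between consecutive prompts.  All parameter values
# are illustrative assumptions.

def prompt_schedule(n_days=7, prompts_per_day=5, start_hour=9,
                    end_hour=21, min_gap_minutes=30, seed=7):
    """Return, per day, (day, minute-of-day) prompt times with enforced gaps."""
    rng = random.Random(seed)
    window = (end_hour - start_hour) * 60
    schedule = []
    for day in range(n_days):
        # Redraw until all prompts are at least min_gap_minutes apart.
        while True:
            times = sorted(rng.randrange(window) for _ in range(prompts_per_day))
            if all(b - a >= min_gap_minutes for a, b in zip(times, times[1:])):
                break
        schedule.append([(day, start_hour * 60 + t) for t in times])
    return schedule

schedule = prompt_schedule()
```

Each inner list could then drive the SMS gateway that delivers the survey prompts.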
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology and understanding and dealing with this is one of the biggest challenges in medicine. At the same time it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within a high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
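The population-of-models construction that the paper benchmarks SMC against can be sketched with Latin hypercube sampling: draw stratified parameter sets, run the model, and keep the sets whose output falls within a calibration range. The toy model, parameter bounds and calibration window below are invented for illustration; the paper's actual experiments use the Beeler-Reuter cardiac model.

```python
import numpy as np

# Minimal sketch of building a population of models (POM) via Latin
# hypercube sampling (LHS), the baseline the abstract compares SMC
# against.  The model and calibration range are toy assumptions.

def latin_hypercube(n_samples, n_params, rng):
    """Draw an n_samples x n_params LHS design on [0, 1)^n_params."""
    # One stratum per sample in each dimension, shuffled independently.
    u = (rng.random((n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u

def build_pom(model_output, lower, upper, bounds, n_samples=1000, seed=0):
    """Keep parameter sets whose model output lies in [lower, upper]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    theta = lo + latin_hypercube(n_samples, len(lo), rng) * (hi - lo)
    outputs = np.array([model_output(t) for t in theta])
    accepted = (outputs >= lower) & (outputs <= upper)
    return theta[accepted]

# Toy "biomarker": a quadratic summary of two parameters.
pom = build_pom(lambda t: t[0] + t[1] ** 2,
                lower=0.5, upper=1.5,
                bounds=(np.array([0.0, 0.0]), np.array([1.0, 1.0])))
```

The SMC alternative studied in the paper replaces this single rejection pass with a sequence of intermediate distributions, which is what yields the reported efficiency gain.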
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
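The central computational kernel here, approximating f(A)b with f(t) = t^(-α/2), can be illustrated with a plain Lanczos approximation, the first of the methods the thesis compares. The sketch below takes α = 1, i.e. x = A^(-1/2)z as in GMRF sampling, on a toy tridiagonal precision matrix; it is a minimal illustration under these assumptions, not the thesis's implementation.

```python
import numpy as np

# Lanczos approximation of f(A)b with f(t) = t^(-1/2): run m Lanczos
# steps to get an orthonormal basis V and tridiagonal T, then apply f
# to the small matrix T instead of the large sparse A.

def lanczos_inv_sqrt(A, b, m):
    """Approximate A^(-1/2) b via an m-step Lanczos decomposition."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # f(T) via the small (m x m) eigendecomposition of T.
    evals, evecs = np.linalg.eigh(T)
    fT = evecs @ np.diag(evals ** -0.5) @ evecs.T
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (fT @ e1))

# Toy SPD precision matrix: tridiag(-1, 3, -1), eigenvalues in (1, 5).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
z = np.random.default_rng(1).standard_normal(n)
x = lanczos_inv_sqrt(A, z, m=30)

# Reference: dense A^(-1/2) z via the full eigendecomposition.
w, Q = np.linalg.eigh(A)
x_exact = Q @ np.diag(w ** -0.5) @ Q.T @ z
```

For this well-conditioned toy matrix the Krylov approximation converges rapidly; the thesis's contribution lies in the harder cases where restarting, shift-and-invert or rational variants are needed.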
Abstract:
Objectives: This methodological paper reports on the development and validation of a work sampling instrument and data collection processes to conduct a national study of nurse practitioners’ work patterns. ---------- Design: Published work sampling instruments provided the basis for development and validation of a tool for use in a national study of nurse practitioner work activities across diverse contextual and clinical service models. Steps taken in the approach included design of a nurse practitioner-specific data collection tool and development of an innovative web-based program to train and establish inter-rater reliability of a team of data collectors who were geographically dispersed across metropolitan, rural and remote health care settings. ---------- Setting: The study is part of a large funded study into nurse practitioner service. The Australian Nurse Practitioner Study is a national study phased over three years and was designed to provide essential information for Australian health service planners, regulators and consumer groups on the profile, process and outcome of nurse practitioner service. ---------- Results: The outcome of this phase of the study is empirically tested instruments, processes and training materials for use in an international context by investigators interested in conducting a national study of nurse practitioner work practices. ---------- Conclusion: Development and preparation of a new approach to describing nurse practitioner practices using work sampling methods provides the groundwork for international collaboration in evaluation of nurse practitioner service.
Abstract:
Aim: This paper is a report of a study of variations in the pattern of nurse practitioner work in a range of service fields and geographical locations, across direct patient care, indirect patient care and service-related activities. --------- Background: The nurse practitioner role has been implemented internationally as a service reform model to improve the access and timeliness of health care. There is a substantial body of research into the nurse practitioner role and service outcomes, but scant information on the pattern of nurse practitioner work and how this is influenced by different service models. --------- Methods: We used work sampling methods. Data were collected between July 2008 and January 2009. Observations were recorded from a random sample of 30 nurse practitioners at 10-minute intervals in 2-hour blocks randomly generated to cover two weeks of work time from a sampling frame of six weeks. --------- Results: A total of 12,189 individual observations were conducted with nurse practitioners across Australia. Thirty individual activities were identified as describing nurse practitioner work, and these were distributed across three categories: direct care accounted for 36.1% of nurse practitioner time, indirect care for 32.2% and service-related activities for 31.9%. --------- Conclusion: These findings provide useful baseline data for evaluation of nurse practitioner positions and the service effect of these positions. However, the study also raises questions about the best use of nurse practitioner time and the influences of barriers to and facilitators of this model of service innovation.
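The observation schedule described in the Methods can be sketched as follows: 2-hour blocks drawn at random within a working day, with one activity observation every 10 minutes inside each block. The 2-hour block and 10-minute interval follow the abstract; the 8-hour working day and the block count are assumptions for illustration.

```python
import random

# Sketch of a work-sampling schedule: random 2-hour blocks within an
# assumed 8-hour working day, one observation every 10 minutes.

def observation_blocks(n_blocks, day_minutes=480, block=120, interval=10, seed=0):
    """Return a list of blocks, each a list of observation times (minutes)."""
    rng = random.Random(seed)
    blocks = []
    for _ in range(n_blocks):
        # Block start aligned to the observation interval, fully inside the day.
        start = rng.randrange(0, day_minutes - block + 1, interval)
        blocks.append([start + m for m in range(0, block, interval)])
    return blocks

blocks = observation_blocks(n_blocks=6)
# Each 2-hour block yields 120 / 10 = 12 observation points.
```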
Abstract:
Acoustic sensors provide an effective means of monitoring biodiversity at large spatial and temporal scales. They can continuously and passively record large volumes of data over extended periods; however, these data must be analysed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced users can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. Our research examined the use of sampling methods to reduce the cost of analysing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilising five days of manually analysed acoustic sensor data from four sites, we examined a range of sampling rates and methods including random, stratified and biologically informed. Our findings indicate that randomly selecting 120 one-minute samples from the three hours immediately following dawn provided the most effective sampling method. This method detected, on average, 62% of total species after 120 one-minute samples were analysed, compared to 34% of total species from traditional point counts. Our results demonstrate that targeted sampling methods can provide an effective means for analysing large volumes of acoustic sensor data efficiently and accurately.
Abstract:
Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected from traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means for analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data. Read More: http://www.esajournals.org/doi/abs/10.1890/12-2088.1
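The winning sampling scheme in both studies above, 120 one-minute samples drawn at random from the three hours after dawn across five days, amounts to simple sampling without replacement from a (day, minute) pool. A minimal sketch, with only the day count, window length and sample size taken from the abstracts:

```python
import random

# Dawn-window sampling scheme: pick 120 one-minute samples at random,
# without replacement, from the three hours (180 minutes) immediately
# following dawn across five days of recordings.

def dawn_sample_minutes(n_days=5, window_minutes=180, n_samples=120, seed=42):
    """Return (day, minute-after-dawn) pairs sampled without replacement."""
    pool = [(day, minute)
            for day in range(n_days)
            for minute in range(window_minutes)]
    rng = random.Random(seed)
    return rng.sample(pool, n_samples)

samples = dawn_sample_minutes()
```

Each selected pair identifies one minute of audio to pull from the recordings for manual species identification.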
Abstract:
Phosphorus has a number of indispensable biochemical roles, but its natural deposition and the low solubility of phosphates, as well as their rapid transformation to insoluble forms, make the element commonly the growth-limiting nutrient, particularly in aquatic ecosystems. Famously, phosphorus that reaches water bodies is commonly the main cause of eutrophication. This undesirable process can severely affect many aquatic biota in the world. Many management practices have been proposed, but long-term monitoring of phosphorus levels is necessary to ensure that eutrophication does not occur. Passive sampling techniques, which have been developed over the last decades, could provide several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, flow-proportional loads and representative average concentrations of phosphorus in the environment. Although some types of passive samplers are commercially available, their uses are still scarcely reported in the literature. In Japan, there is limited application of passive sampling techniques to monitor phosphorus, even in the field of the agricultural environment. This paper aims to introduce the relatively new P-sampling techniques and their potential for use in environmental monitoring studies.
Abstract:
As there is a myriad of micro-organic pollutants that can affect the well-being of humans and other organisms in the environment, the need for an effective monitoring tool is evident. Passive sampling techniques, which have been developed over the last decades, could provide several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, time-integrated loads and representative average concentrations of pollutants in the environment. These techniques have been applied to monitor many pollutants arising from agricultural activities, e.g. residues of pesticides, veterinary drugs and so on. Several types of passive samplers are commercially available and their uses are widely accepted. However, not many applications of these techniques have been found in Japan, especially in the field of the agricultural environment. This paper aims to introduce the field of passive sampling and then to describe some applications of passive sampling techniques in environmental monitoring studies related to the agriculture industry.
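For context on how passive samplers yield a time-integrated average, the standard kinetic-regime relation C_TWA = M / (R_s × t) is commonly used, where M is the analyte mass accumulated on the sampler, R_s its laboratory-calibrated sampling rate and t the deployment time. This is the generic textbook formula for integrative passive sampling, not a result from either paper above, and the numbers below are invented.

```python
# Time-weighted average (TWA) concentration from a passive sampler
# operating in the kinetic (linear-uptake) regime:
#     C_TWA = M / (R_s * t)
# M in ng, R_s in L/day, t in days -> concentration in ng/L.

def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
    """Time-weighted average concentration in ng/L."""
    return mass_ng / (sampling_rate_l_per_day * days)

# Invented example: 84 ng accumulated over 14 days at 0.3 L/day.
c = twa_concentration(84.0, 0.3, 14)  # -> 20.0 ng/L
```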
Abstract:
Although marketers have a strong interest in finding ways to engage with consumers through mobile phones, the everyday experiential, or affective, consumption practices surrounding this technology have received limited attention in the literature. To address this limitation, we used appraisal theory, which specifies that it is the way individuals appraise situations or events that elicits emotions. We conducted an experience sampling method study to explore the emotions that individuals experience during their interactions with and through their mobile phones, and what situations or events elicit these emotions. The preliminary findings show a number of significant relationships between emotions and specific clusters of situations and events. Additionally, age and gender were also important indicators. The research contributes to a deeper understanding of the experiential nature of mobile information technologies through consumers’ everyday-consumption-related emotions and the situations and events that elicit them.
Abstract:
The value of soil evidence in the forensic discipline is well known. However, it would be advantageous if an in-situ method were available that could record responses from tyre or shoe impressions in ground soil at the crime scene. The development of optical fibres and emerging portable NIR instruments has unveiled a potential methodology which could permit such a proposal. The NIR spectral region contains rich chemical information in the form of overtone and combination bands of the fundamental infrared absorptions and low-energy electronic transitions. This region has, in the past, been perceived as being too complex for interpretation and consequently was scarcely utilized. The application of NIR in the forensic discipline is virtually non-existent, creating an opening for research in this area. NIR spectroscopy has great potential in the forensic discipline as it is simple, nondestructive and capable of rapidly providing information relating to chemical composition. The objective of this study is to investigate the ability of NIR spectroscopy combined with chemometrics to discriminate between individual soils. A further objective is to apply the NIR process to a simulated forensic scenario where soil transfer occurs. NIR spectra were recorded from twenty-seven soils sampled from the Logan region in South-East Queensland, Australia. A series of three high-quartz soils were mixed with three different kaolinites in varying ratios and NIR spectra collected. Spectra were also collected from six soils as the temperature of the soils was ramped from room temperature up to 600 °C. Finally, a forensic scenario was simulated where the transferral of ground soil to shoe soles was investigated. Chemometrics methods such as the commonly known Principal Component Analysis (PCA), the less well known fuzzy clustering (FC) and ranking by means of multicriteria decision making (MCDM) methodology were employed to interpret the spectral results.
All soils were characterised using Inductively Coupled Plasma Optical Emission Spectroscopy and X-Ray Diffractometry. Results were promising, revealing that NIR combined with chemometrics is capable of discriminating between the various soils. Peak assignments were established by comparing the spectra of known minerals with the spectra collected from the soil samples. The temperature-dependent NIR analysis confirmed the assignments of the absorptions due to adsorbed and molecular bound water. The relative intensities of the identified NIR absorptions reflected the quantitative XRD and ICP characterisation results. PCA and FC analysis of the raw soils in the initial NIR investigation revealed that the soils were primarily distinguished on the basis of their relative quartz and kaolinite contents, and to a lesser extent on the horizon from which they originated. Furthermore, PCA could distinguish between the three kaolinites used in the study, suggesting that the NIR spectral region was sensitive enough to contain information describing variation within kaolinite itself. In the forensic scenario simulation, PCA successfully discriminated between the ‘Backyard Soil’ and ‘Melcann® Sand’, as well as the two sampling methods employed. Further PCA exploration revealed that it was possible to distinguish between the various shoes used in the simulation. In addition, it was possible to establish association between specific sampling sites on the shoe and the corresponding sites remaining in the impression. The forensic application revealed some limitations of the process relating to moisture content and homogeneity of the soil. These limitations can both be overcome by simple sampling practices and maintaining the original integrity of the soil. The results from the forensic scenario simulation proved that the concept shows great promise in the forensic discipline.
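The PCA step used throughout the discrimination work can be illustrated on synthetic spectra. The two groups below are artificial stand-ins (Gaussian absorption bands at different positions plus noise), since the study's soil spectra are not reproduced here; the point is only that mean-centring plus projection onto the leading singular vectors separates spectrally distinct groups.

```python
import numpy as np

# PCA via SVD on mean-centred "spectra".  Synthetic data: two groups
# with absorption bands at different positions stand in for two soil
# types; real NIR spectra would be used in practice.

def pca_scores(X, n_components=2):
    """Project mean-centred rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
wavelengths = np.linspace(0, 1, 200)
# Two groups of 10 spectra each, Gaussian bands centred at 0.3 and 0.6.
group_a = (np.exp(-((wavelengths - 0.3) ** 2) / 0.005)
           + 0.05 * rng.standard_normal((10, 200)))
group_b = (np.exp(-((wavelengths - 0.6) ** 2) / 0.005)
           + 0.05 * rng.standard_normal((10, 200)))
scores = pca_scores(np.vstack([group_a, group_b]))
```

In a scatter plot of the first two score columns, the two groups fall into well-separated clusters along the first component, which is the kind of structure the study reads off its soil spectra.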
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
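The path-sampling identity underlying the IMCS estimator can be shown on a toy exponential family. For p(x | θ) = exp(θ s(x)) h(x) / Z(θ), we have d log Z/dθ = E_θ[s(X)], so the log NC ratio is the integral of the mean canonical statistic along a path in θ. The example below uses a Gaussian tilt, chosen only because log Z(θ) = θ²/2 + const is known exactly; in the binary-MRF setting of the thesis, E_θ[s(X)] would instead come from MCMC at each grid point.

```python
import numpy as np

# Toy path-sampling estimate of a log normalization-constant ratio.
# Family: p(x | theta) = exp(theta * x) * exp(-x^2 / 2) / Z(theta),
# i.e. X | theta ~ N(theta, 1) and log Z(theta) = theta^2 / 2 + const,
# so log Z(2) - log Z(0) = 2 exactly.

def log_nc_ratio(theta0, theta1, n_grid=50, n_mc=20000, seed=0):
    """Trapezoidal estimate of log Z(theta1) - log Z(theta0)."""
    rng = np.random.default_rng(seed)
    thetas = np.linspace(theta0, theta1, n_grid)
    # Mean canonical statistic E_theta[X], estimated by Monte Carlo;
    # here we can sample directly, an MRF would need MCMC per point.
    means = np.array([rng.normal(t, 1.0, n_mc).mean() for t in thetas])
    h = thetas[1] - thetas[0]
    return float(np.sum((means[1:] + means[:-1]) / 2.0) * h)

estimate = log_nc_ratio(0.0, 2.0)  # exact value: (2**2 - 0**2) / 2 = 2
```

Because the integrand E_θ[X] = θ is linear here, the trapezoidal rule is exact and the only error is Monte Carlo noise in the per-grid-point means.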
Abstract:
Tobacco yellow dwarf virus (TbYDV, family Geminiviridae, genus Mastrevirus) is an economically important pathogen causing summer death and yellow dwarf disease in bean (Phaseolus vulgaris L.) and tobacco (Nicotiana tabacum L.), respectively. Prior to the commencement of this project, little was known about the epidemiology of TbYDV, its vector and host-plant range. As a result, disease control strategies have been restricted to regular poorly timed insecticide applications which are largely ineffective, environmentally hazardous and expensive. In an effort to address this problem, this PhD project was carried out in order to better understand the epidemiology of TbYDV, to identify its host-plant and vectors as well as to characterise the population dynamics and feeding physiology of the main insect vector and other possible vectors. The host-plants and possible leafhopper vectors of TbYDV were assessed over three consecutive growing seasons at seven field sites in the Ovens Valley, Northeastern Victoria, in commercial tobacco and bean growing properties. Leafhoppers and plants were collected and tested for the presence of TbYDV by PCR. Using sweep nets, twenty-three leafhopper species were identified at the seven sites with Orosius orientalis the predominant leafhopper. Of the 23 leafhopper species screened for TbYDV, only Orosius orientalis and Anzygina zealandica tested positive. Forty-two different plant species were also identified at the seven sites and tested. Of these, TbYDV was only detected in four dicotyledonous species, Amaranthus retroflexus, Phaseolus vulgaris, Nicotiana tabacum and Raphanus raphanistrum. Using a quadrat survey, the temporal distribution and diversity of vegetation at four of the field sites was monitored in order to assess the presence of, and changes in, potential host-plants for the leafhopper vector(s) and the virus. 
These surveys showed that plant composition and the climatic conditions at each site were the major influences on vector numbers, virus presence and the subsequent occurrence of tobacco yellow dwarf and bean summer death diseases. Forty-two plant species were identified from all sites and it was found that sites with the lowest incidence of disease had the highest proportion of monocotyledonous plants, which are non-hosts for both the vector and the virus. In contrast, the sites with the highest disease incidence had more host-plant species for both vector and virus, and experienced higher temperatures and less rainfall. It is likely that these climatic conditions forced the leafhopper to move into the irrigated commercial tobacco and bean crops, resulting in disease. In an attempt to understand leafhopper species diversity and abundance in and around the field borders of commercially grown tobacco crops, leafhoppers were collected from four field sites using three different sampling techniques, namely pan trap, sticky trap and sweep net. Over 51,000 leafhopper samples were collected, which comprised 57 species from 11 subfamilies and 19 tribes. Twenty-three leafhopper species were recorded for the first time in Victoria, in addition to several economically important pest species of crops other than tobacco and bean. The highest number and greatest diversity of leafhoppers were collected in yellow pan traps, followed by sticky traps and sweep nets. Orosius orientalis was found to be the most abundant leafhopper collected from all sites, with the greatest numbers of this leafhopper also caught using the yellow pan trap. Using the three sampling methods mentioned above, the seasonal distribution and population dynamics of O. orientalis were studied at four field sites over three successive growing seasons. The population dynamics of the leafhopper was characterised by trimodal peaks of activity, occurring in the spring and summer months. Although O.
orientalis was present in large numbers early in the growing season (September-October), TbYDV was only detected in these leafhoppers between late November and the end of January. The peak in the detection of TbYDV in O. orientalis correlated with the observation of disease symptoms in tobacco and bean, and was also associated with warmer temperatures and lower rainfall. To understand the feeding requirements of Orosius orientalis and to enable screening of potential control agents, a chemically-defined artificial diet (designated PT-07) and feeding system was developed. This novel diet formulation allowed O. orientalis to survive for up to 46 days, including complete development from first instar through to adulthood. The effect of three selected plant-derived proteins, cowpea trypsin inhibitor (CpTi), Galanthus nivalis agglutinin (GNA) and wheat germ agglutinin (WGA), on leafhopper survival and development was assessed. Both GNA and WGA were shown to reduce leafhopper survival and development significantly when incorporated at a 0.1% (w/v) concentration. In contrast, CpTi at the same concentration did not exhibit significant antimetabolic properties. Based on these results, GNA and WGA are potentially useful antimetabolic agents for expression in genetically modified crops to improve the management of O. orientalis, TbYDV and the other pathogens it vectors. Finally, an electrical penetration graph (EPG) was used to study the feeding behaviour of O. orientalis to provide insights into TbYDV acquisition and transmission. Waveforms representing different feeding activities were acquired by EPG from adult O. orientalis feeding on two plant species, Phaseolus vulgaris and Nicotiana tabacum, and on a simple sucrose-based artificial diet. Five waveforms (designated O1-O5) were observed when O. orientalis fed on P. vulgaris, while only four (O1-O4) and three (O1-O3) waveforms were observed during feeding on N. tabacum and the artificial diet, respectively.
The mean duration of each waveform and the waveform type differed markedly depending on the food source. This is the first detailed study on the tritrophic interactions between TbYDV, its leafhopper vector, O. orientalis, and host-plants. The results of this research have provided important fundamental information which can be used to develop more effective control strategies not only for O. orientalis, but also for TbYDV and other pathogens vectored by the leafhopper.