102 results for valeur informative
in Queensland University of Technology - ePrints Archive
Abstract:
This study investigated the Kinaesthetic Fusion Effect (KFE) first described by Craske and Kenny in 1981. The current study did not replicate these findings. Participants did not perceive any reduction in the sagittal separation of a button pressed by the index finger of one arm and a probe touching the other, following repeated exposure to the tactile stimuli present on both unseen arms. This study's failure to replicate the widely cited KFE as described by Craske et al. (1984) suggests that it may be contingent on several aspects of visual information, especially the availability of a specific visual reference, the role of instructions regarding gaze direction, and the potential use of a line-of-sight strategy when referring felt positions to an interposed surface. In addition, a foreshortening effect was found; this may result from a line-of-sight judgment and represent a feature of the reporting method used. The transformed line-of-sight data were regressed against the participant-reported values, yielding slopes of 1.14 (right arm) and 1.11 (left arm), with r > 0.997 for each. The study also provides additional evidence that mis-perception of the mediolateral position of the limbs, specifically their separation, consistent with notions of Gestalt grouping, is somewhat labile and can be influenced by active motions causing touch of one limb by the other. Finally, this research will benefit future studies that require participants to report the perceived locations of unseen limbs.
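A minimal sketch of the kind of regression reported above; the data values here are hypothetical and chosen only to give a similar slope, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical positions in centimetres: transformed line-of-sight values
# and the corresponding participant-reported values.
line_of_sight = np.array([2.0, 4.5, 7.0, 9.5, 12.0])
reported = np.array([2.4, 5.3, 8.1, 11.0, 13.8])

# Ordinary least-squares fit; the abstract reports slopes of ~1.14 (right arm)
# and ~1.11 (left arm) with r > 0.997 for the actual data.
fit = stats.linregress(line_of_sight, reported)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.3f}")
```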
Abstract:
This paper presents a method for automatic terrain classification, using a cheap monocular camera in conjunction with a robot’s stall sensor. A first step is to have the robot generate a training set of labelled images. Several techniques are then evaluated for preprocessing the images, reducing their dimensionality, and building a classifier. Finally, the classifier is implemented and used online by an indoor robot. Results are presented, demonstrating an increased level of autonomy.
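As a rough illustration of the pipeline described above (auto-labelled training images, dimensionality reduction, classifier, then online use), here is a minimal sketch; the choice of PCA and a k-nearest-neighbour classifier, and the placeholder feature vectors, are assumptions for illustration, not the paper's evaluated configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical training set: feature vectors from camera image patches that
# the robot labelled itself using its stall sensor (1 = drivable, 0 = not).
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))     # 200 preprocessed 32x32 patches
y_train = rng.integers(0, 2, size=200)   # stall-sensor-derived labels

# Dimensionality reduction followed by a simple classifier; the paper
# compares several options, this is just one plausible combination.
model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

# Online use: classify a new patch seen through the monocular camera.
new_patch = rng.random((1, 32 * 32))
print(model.predict(new_patch))
```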
Abstract:
This paper establishes sufficient conditions to bound the error in perturbed conditional mean estimates derived from a perturbed model (only the scalar case is shown in this paper but a similar result is expected to hold for the vector case). The results established here extend recent stability results on approximating information state filter recursions to stability results on the approximate conditional mean estimates. The presented filter stability results provide bounds for a wide variety of model error situations.
Abstract:
Based on the theory of international stock market co-movements, this study shows that a profitable trading strategy can be developed. The U.S. market return is treated as overnight information by ordinary investors in the Asian and European stock markets, and opening prices in local markets reflect the U.S. overnight return. However, smart traders either judge the impact of overnight information more correctly or predict unreleased information. Thus, the difference between the expected opening price based on the U.S. return and the actual opening price is taken as a measure of smart traders' predictive power, providing either a buy or a sell signal. Using index futures price data from 12 countries from 2000 to 2011, cumulative returns on the trading strategy are calculated taking transaction costs into account. The empirical results show that the proposed trading strategy generates higher risk-adjusted returns than the benchmarks in all 12 sample countries. The trading performance for the Asian markets surpasses that for the European markets because the U.S. return is the only overnight information for the Asian markets, whereas the Asian markets' returns provide additional information to European investors.
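A minimal sketch of how such a signal could be computed; the function name, the unit beta and the sign convention (actual open above the U.S.-implied open = buy) are assumptions for illustration, not the paper's calibrated rule:

```python
import numpy as np

def overnight_signal(prev_close, actual_open, us_overnight_ret, beta=1.0):
    """Illustrative signal from the gap between the opening price implied
    by the U.S. overnight return and the actual opening price.

    beta and the sign convention (actual above expected -> buy) are
    assumptions for this sketch, not the paper's calibrated values.
    """
    expected_open = prev_close * (1.0 + beta * us_overnight_ret)
    gap = (actual_open - expected_open) / expected_open
    return np.sign(gap)   # +1 = buy index futures, -1 = sell

# Hypothetical example: the local market closed at 100, the U.S. market rose
# 1% overnight, and the local futures opened at 101.5.
print(overnight_signal(100.0, 101.5, 0.01))   # -> 1.0 (buy)
```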
Abstract:
Purpose: Flat-detector, cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. Methods: The rich sources of prior information in IGRT are incorporated into a hidden Markov random field (MRF) model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk (OAR). The voxel labels are estimated using the iterated conditional modes (ICM) algorithm. Results: The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom (CIRS, Inc. model 062). The mean voxel-wise misclassification rate was 6.2%, with Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. Conclusions: By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
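A toy sketch of the labelling step, using a synchronous, vectorised approximation of ICM with a Gaussian likelihood and a Potts-style neighbourhood term; the class parameters, smoothing weight and phantom data are placeholders, not the paper's model:

```python
import numpy as np

def icm_segment(image, means, sds, beta=1.0, n_iters=5):
    """Toy iterated conditional modes (ICM) labelling on a 3D voxel lattice.

    image : 3D array of CBCT-like intensities.
    means, sds : per-class Gaussian parameters (stand-ins for the tissue
        densities estimated from the reference CT by inverse regression).
    beta : strength of the Potts-style smoothing prior.
    """
    # Initialise each voxel with its maximum-likelihood class.
    ll = -0.5 * ((image[..., None] - means) / sds) ** 2 - np.log(sds)
    labels = np.argmax(ll, axis=-1)

    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n_iters):
        # For each class, count how many of the 6 neighbours carry that label.
        agree = np.zeros_like(ll)
        for k in range(means.size):
            same = (labels == k).astype(float)
            for dz, dy, dx in offsets:
                agree[..., k] += np.roll(same, (dz, dy, dx), axis=(0, 1, 2))
        # Update: maximise likelihood plus weighted neighbourhood agreement.
        labels = np.argmax(ll + beta * agree, axis=-1)
    return labels

# Hypothetical two-class phantom: background (~0) and tissue (~100).
rng = np.random.default_rng(1)
img = rng.normal(0, 10, size=(20, 20, 20))
img[5:15, 5:15, 5:15] += 100
seg = icm_segment(img, means=np.array([0.0, 100.0]), sds=np.array([10.0, 10.0]))
print(seg.sum())  # number of voxels labelled as tissue
```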
Abstract:
Cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. The rich sources of prior information in IGRT are incorporated into a hidden Markov random field model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk. The voxel labels are estimated using iterated conditional modes. The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom. The mean voxel-wise misclassification rate was 6.2%, with Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
Abstract:
Gas-phase transformation of synthetic phosphatidylcholine (PC) monocations to structurally informative anions is demonstrated via ion/ion reactions with doubly deprotonated 1,4-phenylenedipropionic acid (PDPA). Two synthetic PC isomers, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (PC16:0/18:1) and 1-oleoyl-2-palmitoyl-sn-glycero-3-phosphocholine (PC18:1/16:0), were subjected to this ion/ion chemistry. The product of the ion/ion reaction is a negatively charged complex, [PC + PDPA - H](-). Collisional activation of the long-lived complex causes transfer of a proton and a methyl cation to PDPA, generating [PC - CH3](-). Subsequent collisional activation of the demethylated PC anions produces abundant fatty acid carboxylate anions and low-abundance acyl neutral losses as free acids and ketenes. Product ion spectra of [PC - CH3](-) suggest favorable cleavage at the sn-2 position over the sn-1 position, based on distinct differences in the relative abundances. In contrast, collisional activation of PC cations yields no abundant fatty acid chain-related product ions and typically indicates only the lipid class via formation of the phosphocholine cation. A solution-phase method to produce the gas-phase adducted PC anion is also demonstrated. Product ion spectra derived from the solution-phase method are similar to the results generated via ion/ion chemistry. This work demonstrates a gas-phase means to increase the structural characterization of phosphatidylcholines via ion/ion chemistry. Grant Numbers: ARC/CE0561607, ARC/DP120102922
Abstract:
The issue of using informative priors for estimation of mixtures at multiple time points is examined. Several different informative priors and an independent prior are compared using samples of actual and simulated aerosol particle size distribution (PSD) data. Measurements of aerosol PSDs refer to the concentration of aerosol particles in terms of their size, which is typically multimodal in nature and collected at frequent time intervals. The use of informative priors is found to better identify component parameters at each time point and more clearly establish patterns in the parameters over time. Some caveats to this finding are discussed.
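A simplified sketch of carrying information from one time point's fit into the prior for the next, using scikit-learn's BayesianGaussianMixture; the shared prior location and precision, and the simulated two-mode data, are crude stand-ins for the component-specific informative priors compared in the paper:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical PSD-like data: log-diameters at two measurement times,
# with two modes that drift slightly between times.
rng = np.random.default_rng(2)
t1 = np.concatenate([rng.normal(-1.0, 0.2, 400), rng.normal(1.0, 0.3, 600)])
t2 = np.concatenate([rng.normal(-0.9, 0.2, 400), rng.normal(1.1, 0.3, 600)])

# Fit the first time point with a weak (near-independent) prior.
gm1 = BayesianGaussianMixture(n_components=2, random_state=0).fit(t1[:, None])

# For the second time point, centre the prior on the previous estimates,
# so the earlier fit informs the later one.
gm2 = BayesianGaussianMixture(
    n_components=2,
    mean_prior=gm1.means_.mean(axis=0),   # prior location taken from time 1
    mean_precision_prior=10.0,            # stronger pull toward that prior
    random_state=0,
).fit(t2[:, None])

print(np.sort(gm1.means_.ravel()), np.sort(gm2.means_.ravel()))
```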
Abstract:
Background The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
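A small worked sketch of both calculations for a hypothetical cluster; the observed and expected counts, the assumed number of silent comparisons and the Gamma prior parameters are all illustrative, not taken from the re-analysed investigations:

```python
import numpy as np
from scipy import stats

# Hypothetical cluster: 8 cases observed where 3.0 were expected.
observed, expected = 8, 3.0

# Frequentist: one-sided Poisson p-value, then a Dunn-Sidak adjustment
# for an assumed number of silent comparisons (the "guesstimate").
p_raw = stats.poisson.sf(observed - 1, expected)        # P(X >= observed)
n_silent = 500                                          # assumed, purely illustrative
p_adjusted = 1.0 - (1.0 - p_raw) ** n_silent

# Bayesian: a Gamma(a, b) prior on the rate ratio combined with the Poisson
# likelihood gives a Gamma(a + observed, b + expected) posterior.
a, b = 1.0, 1.0                                         # near non-informative prior
posterior = stats.gamma(a + observed, scale=1.0 / (b + expected))
prob_excess = posterior.sf(1.0)                         # P(rate ratio > 1 | data)

print(f"raw p = {p_raw:.4f}, adjusted p = {p_adjusted:.4f}, "
      f"P(excess) = {prob_excess:.3f}")
```

The contrast in the abstract is visible directly here: the adjusted p-value depends entirely on the chosen `n_silent`, while the posterior probability of an excess depends on the chosen prior.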
Abstract:
In this article we examine how a consumer's susceptibility to informative influence (SII) affects the effectiveness of consumer testimonials in print advertising. More specifically, we show that consumers who are high in SII and who seek consumption-relevant information from other people are more influenced by the strength of the testimonial information than by the strength of the attribute information. Conversely, consumers low in SII place greater emphasis on the strength of the attribute information when forming their evaluations. Our results show that consumer psychological traits can have an important impact on the acceptance of testimonial advertising. Theoretical and managerial implications of our findings are discussed.
Abstract:
President's Report Hello fellow AITPM members, A few weeks ago we saw another example of all levels of Government pulling together in real time to try to deal with a major transport incident. This time it was container loads of ammonium nitrate falling off the Pacific Adventurer during Cyclone Hamish, and the associated major oil spill caused by the piercing of its hull, off Moreton Bay in southern Queensland. The oil spill was extensive, affecting beaches and estuaries from Moreton Island north to the Sunshine Coast, a coastal stretch of at least 60 km. We saw Queensland Government, Brisbane, Moreton Bay and Sunshine Coast Regional Council crews deployed quickly, once the gravity of the situation was realised, to clean up toxic oil on beaches and prevent extensive upstream contamination. Environmental agencies, public and private, were quick to respond to help affected wildlife. The Navy's HMAS Yarra and another minesweeper were deployed to search for the containers in the coastal area, in an effort to have them salvaged before the ammonium nitrate could leach into and harm the marine habitat, which would have had a substantial impact not only on that environment but also on the fishing industry. (All of this during the final fortnight before a State election.) While this could be branded as a maritime problem, the road transport and logistics system was crucial to the cleanup. The private vehicular ferries were enlisted to transport plant and equipment from Brisbane to Moreton Island. The plant itself, such as graders, was drawn from road building and maintenance inventory. Hundreds of council staff were released from other activities to undertake the cleanup. While it will take some time for us to know the long-term impacts of this incident, it seems difficult to fault the "grassroots" government crews and their private counterparts, such as Island tourism staff, in the initial cleanup effort. From a traffic planning and management perspective, we should also remember that this sort of incident has happened on road and rail corridors in the past, albeit on lesser scales. It underlines that we do need to continue to protect communities, commercial interests, and the environment through rigorous heavy vehicle management, planning and management of dangerous goods routes (including rail corridors through urban areas), and carefully considered incident and disaster recovery plans and protocols. I'd like to close by reminding everyone again that AITPM's flagship event, the 2009 AITPM National Conference, Traffic Beyond Tomorrow, is being held in Adelaide from 5 to 7 August. SA Branch President Paul Morris informs me that we have had over 50 paper submissions to date, from which a very balanced and informative programme of sessions has been prepared. www.aitpm.com has all of the details about how to register, sponsor a booth, session, etc. Best regards all, Jon Bunker
Abstract:
Understanding users' capabilities, needs and expectations is key to the domain of Inclusive Design. Much of the work in the field could be informed and further strengthened by clear, valid and representative data covering the full range of people's capabilities. This article reviews existing data sets and identifies the challenges inherent in measuring capability in a manner that is informative for work in Inclusive Design. The need for a design-relevant capability data set is identified and consideration is given to a variety of capability construct operationalisation issues including questions associated with self-report and performance measures, sampling and the appropriate granularity of measures. The need for further experimental work is identified and a programme of research designed to culminate in the design of a valid and reliable capability survey is described.
Abstract:
This paper describes an initiative in the Faculty of Health at the Queensland University of Technology, Australia, where a short writing task was introduced to first year undergraduates in four courses: Public Health, Nursing, Social Work and Human Services, and Human Movement Studies. Over 1,000 students were involved in the trial. The task was assessed using an adaptation of the MASUS Procedure (Measuring the Academic Skills of University Students) (Webb & Bonanno, 1994). Feedback to the students, including MASUS scores, then enabled them to be directed to developmental workshops targeting their academic literacy needs. Students who achieved below the benchmark score were required to attend academic writing workshops in order to obtain the same summative 10% that was obtained by those who had achieved above the benchmark score. The trial was very informative in terms of determining task appropriateness and timing, student feedback, student use of support, and student perceptions of the task and follow-up workshops. What we learned from the trial will be presented with a view to further refinement of this initiative.
Abstract:
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
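As a rough illustration of the first component, here is a toy rule-based fuzzy estimate of a region's visual importance from a feature-difference measure and a texture-concentration measure; the membership functions, rule weights and example values are invented stand-ins for the thesis's fuller rule base:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def region_importance(feature_diff, texture_conc):
    """Toy rule-based fuzzy estimate of a region's relative visual importance.

    feature_diff : the region's low-level feature difference relative to the
        rest of the image (normalised to 0..1).
    texture_conc : the region's texture concentration measure (0..1).
    """
    diff_high = triangular(feature_diff, 0.4, 1.0, 1.6)
    tex_low = triangular(texture_conc, -0.6, 0.0, 0.6)

    # Rule 1: high feature difference -> important.
    # Rule 2: high feature difference AND low texture concentration -> very important.
    r1 = diff_high
    r2 = np.minimum(diff_high, tex_low)
    return np.maximum(0.6 * r1, 1.0 * r2)   # weighted max-combination of rules

# Hypothetical regions: (feature difference, texture concentration).
for fd, tc in [(0.9, 0.1), (0.9, 0.8), (0.2, 0.5)]:
    print(fd, tc, round(float(region_importance(fd, tc)), 2))
```

In an importance-based renderer, scores like these would then steer where progressive refinement spends its samples.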
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly-informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less-informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mis-matched to the evaluation conditions.
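A toy sketch of the refinement idea: score each candidate impostor example individually, then keep only the highest-ranking examples as the background. The suitability score used here (distance from the candidate mean, favouring a more disperse background) and the supervector-like features are illustrative assumptions, not the paper's criterion:

```python
import numpy as np

def refine_background(candidate_vectors, score_fn, n_keep):
    """Toy data-driven background refinement: rank candidate impostor
    examples by a per-example suitability score and keep the top n_keep."""
    scores = np.array([score_fn(v) for v in candidate_vectors])
    keep = np.argsort(scores)[::-1][:n_keep]       # highest scores first
    return candidate_vectors[keep], scores[keep]

# Hypothetical candidates: supervector-like features for 1000 impostor
# utterances; the score (distance from the candidate mean) is purely
# illustrative of a criterion that favours a disperse impostor set.
rng = np.random.default_rng(3)
candidates = rng.normal(size=(1000, 64))
centre = candidates.mean(axis=0)
refined, scores = refine_background(
    candidates, score_fn=lambda v: np.linalg.norm(v - centre), n_keep=200
)
print(refined.shape, scores[:3])
```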