911 results for Visual data exploration
Abstract:
This article reports on the development of an iPhone-based brain-exercise tool for seniors involving a series of focus groups (FGs) and field trials (FTs). Four FGs with 34 participants were conducted, aimed at understanding the underlying motivational and de-motivational factors influencing seniors' engagement with mobile brain-exercise software. As part of the FGs, participants had approximately 40 minutes of hands-on experience with commercially available brain-exercise software. A content analysis was conducted on the data, resulting in a ranking of 19 motivational factors, of which the top three were challenge, usefulness and familiarity, and 15 de-motivational factors, of which the top three were usability issues, poor communication and games that were too fast. Findings were used to inform the design of three prototype brain-exercise games for the iPhone, contained within one overall application named Brain jog. Subsequently, two FTs were conducted using Brain jog to investigate the role that exposure time plays in shaping the factors influencing engagement. New factors arose with respect to the initial FGs, including the motivational factor feedback and the de-motivational factor boring. The results of this research provide valuable guidelines for the design and evaluation of mobile brain-exercise software for seniors.
Abstract:
Mineral exploration programmes around the world use data from remote sensing, geophysics and direct sampling. On a regional scale, the combination of airborne geophysics and ground-based geochemical sampling can aid geological mapping and economic minerals exploration. Because airborne geophysical and traditional soil-sampling data are generated at different spatial resolutions, they are not immediately comparable due to their different sampling densities. Several geostatistical techniques, including indicator cokriging and collocated cokriging, can be used to integrate different types of data into a geostatistical model. With increasing numbers of variables, the inference of the cross-covariance model required for cokriging can be demanding in terms of effort and computational time. In this paper a Gaussian-based Bayesian updating approach is applied to integrate airborne radiometric data and ground-sampled geochemical soil data to maximise the information generated from the soil survey and to enable more accurate geological interpretation for the exploration and development of natural resources. The Bayesian updating technique decomposes the collocated estimate into a product of two models: a prior model and a likelihood model. The prior model is built from primary information and the likelihood model is built from secondary information; the prior model is then updated with the likelihood model to build the final model. The approach allows multiple secondary variables to be simultaneously integrated into the mapping of the primary variable. The Bayesian updating approach is demonstrated using a case study from Northern Ireland, where the history of mineral prospecting for precious and base metals dates from the 18th century. Vein-hosted, strata-bound and volcanogenic occurrences of mineralisation are found. The geostatistical technique was used to improve the resolution of the soil geochemistry, collected at one sample per 2 km², by integrating more densely measured airborne geophysical data from the GSNI Tellus Survey, measured over a footprint of 65 × 200 m. The directly measured geochemistry data were treated as primary data in the Bayesian approach and the airborne radiometric data were used as secondary data. The approach produced more detailed updated maps and, in particular, maximised the information in the mapped estimates of zinc, copper and lead. Greater delineation of an elongated northwest/southeast-trending zone in the updated maps strengthened the potential to investigate strata-bound base metal deposits.
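The core of the Gaussian Bayesian updating step is a product of two Gaussian densities, which has a closed form. Below is a minimal sketch, with made-up numbers and variable names of our own, of how a prior estimate (from the primary soil data) could be combined with a likelihood estimate (from the secondary radiometric data); it illustrates the general mechanism, not the paper's implementation.

```python
import numpy as np

def bayesian_update(prior_mean, prior_var, lik_mean, lik_var):
    """Combine two Gaussian models (prior and likelihood) into an
    updated Gaussian by multiplying their densities: the updated
    precision is the sum of the precisions, and the updated mean is
    the precision-weighted mean. Names and data here are illustrative,
    not taken from the study."""
    updated_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
    updated_mean = updated_var * (prior_mean / prior_var + lik_mean / lik_var)
    return updated_mean, updated_var

# Toy example on two grid locations (all values made up):
prior_mean = np.array([1.2, 0.8])   # kriged estimate from sparse soil samples
prior_var  = np.array([0.5, 0.6])   # its kriging variance
lik_mean   = np.array([1.5, 0.7])   # estimate derived from dense radiometrics
lik_var    = np.array([0.2, 0.3])
print(bayesian_update(prior_mean, prior_var, lik_mean, lik_var))
```

Because the update is a per-location product of densities, additional secondary variables can be folded in by further multiplications, which is what allows several secondary data sets to be integrated simultaneously.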
Abstract:
PURPOSE. To investigate the methods used in contemporary ophthalmic literature to designate visual acuity (VA). METHODS. Papers in all 2005 editions of five ophthalmic journals were considered. Papers were included if (1) VA, vision, or visual function was mentioned in the abstract and (2) the study involved age-related macular degeneration, cataract, or refractive surgery. If a paper was selected on the basis of its abstract, the full text was examined for information on the method of refractive correction during VA testing, the type of chart used to measure VA, specifics concerning chart features, testing protocols, data analysis, and the means of expressing VA in the results. RESULTS. One hundred twenty-eight papers were included. The most common type of chart used was described as logMAR based. Although most (89.8%) of the studies reported the method of refractive correction during VA testing, only 58.6% gave the chart design, and fewer than 12% gave any information whatsoever on the chart features or measurement procedures used. CONCLUSIONS. The methods used and the approach to analysis were rarely described in sufficient detail to allow others to replicate the study being reported. Sufficient detail should be given on VA measurement to enable others to duplicate the research. The authors suggest that charts adhering to Bailey-Lovie design principles always be used to measure vision in prospective studies, and that their use be encouraged in clinical settings. The distinction between the term logMAR, an acuity notation, and Bailey-Lovie or ETDRS as chart types should be adhered to more strictly.
Abstract:
Background: There is growing evidence linking early social and emotional wellbeing to later academic performance and various health outcomes, including mental health. An economic evaluation was designed alongside the Roots of Empathy cluster-randomised trial evaluation, a school-based intervention for improving pupils' social and emotional wellbeing. Exploration of the relevance of the Strengths and Difficulties Questionnaire (SDQ) and the Child Health Utility 9D (CHU9D) in school-based health economic evaluations is warranted. The SDQ is a behavioural screening questionnaire for 4–17-year-old children, consisting of a total difficulties score and a prosocial behaviour score, which aims to identify positive aspects of behaviour. The CHU9D is a generic preference-based health-related quality of life instrument for 7–17-year-old children.
Abstract:
Context and background
Historically, nurses have perceived politics and nursing as being at odds with the caring image synonymous with nursing (Salvage, 1985). Furthermore, the concept of the 'politics of nursing' lacks conceptual clarity (Hewison, 1994), ranging across a continuum from political interest to participation or engagement (Rains et al, 2001). It is often argued that political interest tends to be equated with knowledge of, and involvement in, health policy development, and that nurse education can foster political consciousness through political socialization (Brown, 1996). Yet despite the World Health Organization (WHO, 2002) urging this involvement, nurses globally are largely absent from the political and policy-making arena. What influences nurses' political socialization and the development of a political consciousness is not clearly identified or known, although many commentators suggest that the undergraduate educational environment plays an important role (Hanley, 1987; Winter, 1991).
Aim
The aim of this study was to explore third-year nursing students' perceptions of politics in nursing in the context of Northern Ireland. A number of hypotheses were tested examining the relationships of age and prior educational attainment with political interest and attitudes.
Research methodology
A cross-sectional research design was used, and data were collected using a short, anonymous, self-completion web survey (Bryman, 2012). The sample was a convenience sample of one cohort of final-year adult nursing students (n = 154) in one Northern Irish university, with a 42% response rate. Data were analysed using SPSS.
Key findings and conclusions
The results revealed that 55% of students were very or fairly interested in politics, with 6% reporting no interest in politics. Of the students, 85% were registered to vote, but only 48% voted in the 2010 Northern Ireland Assembly election.
The study recommends the inclusion of a unit of study, incorporating innovative teaching methods related to politics and health-related policy, in the undergraduate nursing programme.
Abstract:
Sparse-representation-based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. Despite the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates to the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between the data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
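To make the two key ingredients concrete, the sketch below performs a ridge-regularised weighted least squares solve for the representation coefficients after a random projection. It is a minimal illustration under assumptions of our own (a dense random sign matrix standing in for the structured projection, identity weights, made-up sizes), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes are illustrative): each column of T is a vectorized
# template (target + background); y is a vectorized candidate patch.
d, n = 1024, 20
T = rng.standard_normal((d, n))
y = T[:, 0] + 0.1 * rng.standard_normal(d)   # candidate close to template 0

# Dimensionality reduction with a random projection: pairwise distances
# are approximately preserved (Johnson-Lindenstrauss), which cuts the
# cost of the per-candidate solve.
k = 128
P = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)
Tp, yp = P @ T, P @ y

# Weighted least squares in place of the L1 sparsity constraint:
# coefficients have the closed form c = (T'WT + lam*I)^-1 T'W y.
W = np.eye(k)        # per-feature weights; identity in this toy case
lam = 0.01           # small ridge term for numerical stability
A = Tp.T @ W @ Tp + lam * np.eye(n)
c = np.linalg.solve(A, Tp.T @ W @ yp)

# Reconstruction error in the projected space drives candidate scoring.
err = np.linalg.norm(yp - Tp @ c)
print(c[:3], err)
```

The closed-form solve is what makes the WLS relaxation attractive for real-time use: it replaces an iterative L1 minimization with a single small linear system per candidate.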
Abstract:
This paper explores my experiments with computer-animated notation. It examines how I turned to computer-animated notation to address issues with static musical notation, looking in particular at the work of Nancarrow, Cage and Tenney, and at how a number of these composers' approaches presented difficult challenges for traditional musical notation. I then discuss how computer-animated notation can provide some interesting solutions to the notational problems provoked by these works.
In the second part of the paper I investigate how addressing these notational challenges has led to new perspectives on the compositional process and has introduced new considerations into my compositional practice, including time as musical material; real-time and multi-nodal interaction with the score; networked score environments with the possibility of physically distributed performance; performer feedback and communication; and interaction between notation and other media, including visual media and movement.
Abstract:
In this paper, we propose a new learning approach to Web data annotation, in which a multiclass classifier based on support vector machines is trained to assign labels to data items. For data record extraction, we introduce a data-section re-segmentation algorithm based on visual and content features to improve the performance of Web data record extraction. We have implemented the proposed approach and tested it with a large set of Web query result pages from different domains. Our experimental results show that the proposed approach is highly effective and efficient.
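As an illustration of the classification step, the sketch below trains a linear multiclass SVM to label data items. Everything in it is a stand-in of our own devising: the paper works with visual and content features extracted from result pages, whereas this toy uses character n-grams over the item text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training set: text of extracted data items and their labels.
items  = ["Canon EOS R8", "$1,299.00", "4.5 out of 5", "In stock",
          "Nikon Z6 III", "$2,499.95", "3.9 out of 5", "Ships in 2 days"]
labels = ["product", "price", "rating", "availability"] * 2

# A linear SVM handles the multiclass case via one-vs-rest by default;
# character n-grams stand in for the visual/content features.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LinearSVC())
clf.fit(items, labels)

print(clf.predict(["$899.00", "Sony A7 IV"]))  # likely ['price', 'product']
```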
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
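The kind of parameter sweep being accelerated can be summarised in a few lines. The sketch below is a toy of our own construction, not from the paper: it quantizes a stand-in signal at several candidate bitwidths and accumulates an error statistic per design point. An LDPC decoder study repeats such a loop over thousands of noise realisations per codeword, which is what makes GPU/FPGA acceleration attractive.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits, frac_bits):
    """Fixed-point quantization to `bits` total bits with `frac_bits`
    fractional bits (round-to-nearest, saturating)."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (bits - 1)) / scale
    hi = (2 ** (bits - 1) - 1) / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Monte Carlo sweep over candidate bitwidths: each design point pushes
# many random inputs through the quantized datapath and accumulates an
# error statistic.
for bits in (4, 6, 8, 10):
    x = rng.standard_normal(100_000)   # stand-in for decoder messages
    err = np.mean(np.abs(x - quantize(x, bits, bits - 2)))
    print(f"{bits:2d}-bit datapath: mean quantization error {err:.5f}")
```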
Abstract:
We consider an application scenario where points of interest (PoIs) each have a web presence and where a web user wants to identify a region that contains PoIs relevant to a set of keywords, e.g., in preparation for deciding where to go to conveniently explore the PoIs. Motivated by this, we propose the length-constrained maximum-sum region (LCMSR) query that returns a spatial-network region that is located within a general region of interest, that does not exceed a given size constraint, and that best matches the query keywords. Such a query maximizes the total weight of the PoIs in it w.r.t. the query keywords. We show that it is NP-hard to answer this query. We develop an approximation algorithm with a (5 + ε) approximation ratio utilizing a technique that scales node weights into integers. We also propose a more efficient heuristic algorithm and a greedy algorithm. Empirical studies on real data offer detailed insight into the accuracy of the proposed algorithms and show that the proposed algorithms are capable of computing results efficiently and effectively.
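To convey the greedy flavour in miniature, here is a knapsack-style toy, entirely of our own construction: PoIs are taken in order of keyword-weight-to-size ratio until the size budget is exhausted. The paper's algorithms operate on spatial networks and carry an approximation guarantee; this sketch only conveys the greedy selection idea.

```python
# Hypothetical PoIs as (name, keyword weight, size cost); values made up.
pois = [("cafe", 5.0, 2.0), ("museum", 9.0, 4.0),
        ("park", 3.0, 3.0), ("gallery", 6.0, 2.5)]
budget = 6.0   # size constraint on the region

# Greedily take the best weight-to-cost PoI that still fits.
chosen, total = [], 0.0
for name, w, c in sorted(pois, key=lambda p: p[1] / p[2], reverse=True):
    if total + c <= budget:
        chosen.append(name)
        total += c
print(chosen)   # -> ['cafe', 'gallery'] within the budget
```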
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
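A simple way to picture score-level bimodal fusion is sketched below: per-modality cosine similarities are combined with a convex weight. This weighting is our own simplification for illustration; the paper's modified cosine similarity operates on a combined bimodal representation and handles the differing feature sizes directly.

```python
import numpy as np

def fused_cosine_score(enrol_speech, enrol_face, test_speech, test_face,
                       alpha=0.5):
    """Score a test identity against an enrolled identity by combining
    per-modality cosine similarities; `alpha` trades off the two
    modalities. This convex combination is an assumption of ours, not
    the paper's modified cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return alpha * cos(enrol_speech, test_speech) + \
           (1 - alpha) * cos(enrol_face, test_face)

rng = np.random.default_rng(2)
# Different feature sizes per modality, echoing the abstract.
enrol_speech, enrol_face = rng.standard_normal(64), rng.standard_normal(256)
test_speech = enrol_speech + 0.3 * rng.standard_normal(64)   # noisy speech
test_face   = enrol_face   + 0.3 * rng.standard_normal(256)  # occluded face
print(fused_cosine_score(enrol_speech, enrol_face, test_speech, test_face))
```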
Abstract:
A previous review of research on the practice of offender supervision identified the predominant use of interview-based methodologies and limited use of other research approaches (Robinson and Svensson, 2013). It also found that most research has tended to be locally focussed (i.e. limited to one jurisdiction) with very few comparative studies. This article reports on the application of a visual method in a small-scale comparative study. Practitioners in five European countries participated and took photographs of the places and spaces where offender supervision occurs. The aims of the study were two-fold: firstly to explore the utility of a visual approach in a comparative context; and secondly to provide an initial visual account of the environment in which offender supervision takes place. In this article we address the first of these aims. We describe the application of the method in some depth before addressing its strengths and weaknesses. We conclude that visual methods provide a useful tool for capturing data about the environments in which offender supervision takes place and potentially provide a basis for more normative explorations about the practices of offender supervision in comparative contexts.
Abstract:
Background: Spatially localized duration compression of a briefly presented moving stimulus following adaptation in the same location is taken as evidence for modality-specific neural timing mechanisms.
Aims: The present study used random dot motion stimuli to investigate where these mechanisms may be located.
Method: Experiment 1 measured duration compression of the test stimulus as a function of adaptor speed and revealed that duration compression is speed tuned. These data were then used to make predictions of duration compression responses for various models which were tested in experiment 2. Here a mixed-speed adaptor stimulus was used with duration compression being measured as a function of the adaptor’s ‘speed notch’ (the removal of a central band from the speed range).
Results: The results were consistent with a local-mean model.
Conclusions: Local-motion mechanisms are involved in duration perception of brief events.
Abstract:
An outlier-removal-based data cleaning technique is proposed to clean manually pre-segmented human skin data in colour images. The three-dimensional colour data are projected onto three two-dimensional planes, from which outliers are removed. The cleaned two-dimensional data projections are merged to yield a clean three-dimensional RGB data set. This data set is finally used to build a lookup table and a single Gaussian classifier for the purpose of human skin detection in colour images.
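A minimal sketch of this pipeline, under assumptions of our own (synthetic "skin" pixels, a Mahalanobis-distance cut as the outlier test, and pixels kept only if they survive in all three planes), might look as follows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for manually pre-segmented skin pixels (N x 3 RGB rows);
# real data would come from labelled images.
skin = rng.multivariate_normal([180, 120, 100],
                               np.diag([200, 150, 150]), 5000)

def inliers_2d(xy, k=3.0):
    """Flag points within k Mahalanobis units of the projected mean;
    this simple cut stands in for whatever outlier test one prefers."""
    mu, cov = xy.mean(axis=0), np.cov(xy.T)
    d2 = np.einsum('ij,jk,ik->i', xy - mu, np.linalg.inv(cov), xy - mu)
    return d2 < k ** 2

# Project onto the RG, RB and GB planes; a pixel survives the merge
# only if it is an inlier in all three projections.
keep = (inliers_2d(skin[:, [0, 1]]) & inliers_2d(skin[:, [0, 2]])
        & inliers_2d(skin[:, [1, 2]]))
clean = skin[keep]

# Single-Gaussian skin model fitted on the cleaned data; a binned RGB
# lookup table could be filled the same way for fast classification.
mu, cov = clean.mean(axis=0), np.cov(clean.T)
print(len(clean), mu.round(1))
```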
Abstract:
OBJECTIVES:
To compare methods to estimate the incidence of visual field progression used by 3 large randomized trials of glaucoma treatment by applying these methods to a common data set of annually obtained visual field measurements of patients with glaucoma followed up for an average of 6 years.
METHODS:
The methods used by the Advanced Glaucoma Intervention Study (AGIS), the Collaborative Initial Glaucoma Treatment Study (CIGTS), and the Early Manifest Glaucoma Trial (EMGT) were applied to 67 eyes of 56 patients with glaucoma enrolled in a 10-year natural history study of glaucoma using Program 30-2 of the Humphrey Field Analyzer (Humphrey Instruments, San Leandro, Calif). The incidence of apparent visual field progression was estimated for each method. The extent of agreement between the methods was calculated, and time to apparent progression was compared.
RESULTS:
The proportion of patients progressing was 11%, 22%, and 23% with AGIS, CIGTS, and EMGT methods, respectively. Clinical assessment identified 23% of patients who progressed, but only half of these were also identified by CIGTS or EMGT methods. The CIGTS and the EMGT had comparable incidence rates, but only half of those identified by 1 method were also identified by the other.
CONCLUSIONS:
The EMGT and CIGTS methods produced rates of apparent progression that were twice those of the AGIS method. Although EMGT, CIGTS, and clinical assessment rates were comparable, they did not identify the same patients as having had field progression.