237 results for Subpixel precision
Abstract:
Purpose: Older adults have increased visual impairment, including refractive blur from presbyopic multifocal spectacle corrections, and are less able to extract visual information from the environment to plan and execute appropriate stepping actions; these factors may collectively contribute to their higher risk of falls. The aim of this study was to examine the effect of refractive blur and target visibility on the stepping accuracy and visuomotor stepping strategies of older adults during a precision stepping task. Methods: Ten healthy, visually normal older adults (mean age 69.4 ± 5.2 years) walked up and down a 20 m indoor corridor, stepping onto selected high- and low-contrast targets while viewing under three visual conditions: best-corrected vision, +2.00 DS blur, and +3.00 DS blur; the order of blur conditions was randomised between participants. Stepping accuracy and gaze behaviours were recorded using an eye tracker and a secondary hand-held camera. Results: Older adults made significantly more stepping errors with increasing levels of blur, particularly exhibiting under-stepping (stepping more posteriorly) onto the targets (p<0.05), while visuomotor stepping strategies did not significantly alter. Stepping errors were also significantly greater for the low- compared to the high-contrast targets, and differences in visuomotor stepping strategies were found, including an increased duration of gaze and an increased interval between gaze onset and initiation of the leg swing when stepping onto the low-contrast targets. Conclusions: These findings show that stepping accuracy is reduced for low-visibility targets and for refractive blur at the levels typically present in multifocal spectacle corrections, despite significant changes in some of the visuomotor stepping strategies. They highlight the importance of maximising the contrast of objects in the environment, and may help explain why older adults wearing multifocal spectacle corrections exhibit an increased risk of falling.
Abstract:
In a web search engine-based pilot application called Web-based Relation Completion (WebRC), we propose to join two columns of entities linked by a predefined relation by mining knowledge from the web through a web search engine. To achieve this, a novel retrieval task, Relation Query Expansion (RelQE), is modelled: given an entity (query), the task is to retrieve documents containing entities in a predefined relation to the given one. Solving this problem entails expanding the query before submitting it to a web search engine, to ensure that mostly documents containing the linked entity are returned in the top-K search results. In this paper, we propose a novel Learning-based Relevance Feedback (LRF) approach to solve this retrieval task. Expansion terms are learned from training pairs of entities linked by the predefined relation and applied to new entity-queries to find entities linked by the same relation. After describing the approach, we present experimental results on real-world web data collections, which show that the LRF approach consistently improves the precision of top-ranked search results, to as much as 8.6 times the baseline. Using LRF, WebRC also performs well above the baseline.
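The abstract does not detail how the expansion terms are learned; the sketch below is only a rough illustration of the general idea, assuming a hypothetical search(query, top_k) wrapper around a web search engine that returns text snippets (none of these names or parameters come from the paper).

from collections import Counter

def learn_expansion_terms(training_pairs, search, top_k=10, n_terms=5):
    """Collect terms that frequently co-occur with the linked entity in the
    top-k snippets returned for each training entity-query (illustrative only)."""
    counts = Counter()
    for query_entity, linked_entity in training_pairs:
        for snippet in search(query_entity, top_k):        # assumed search-engine wrapper
            if linked_entity.lower() in snippet.lower():    # keep only snippets mentioning the linked entity
                counts.update(snippet.lower().split())
    return [term for term, _ in counts.most_common(n_terms)]

def expand_query(entity, expansion_terms):
    """Append learned expansion terms to a new entity-query before submission."""
    return entity + " " + " ".join(expansion_terms)

In this toy version the learned terms play the role of the relevance feedback: appended to a new entity-query, they bias the top-K results towards documents containing entities in the target relation.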
Abstract:
Descriptions of a patient's injuries are recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for the mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families, such as decision-tree, probabilistic, neural-network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding each misspelt word and replacing it with a sound-alike word. Meaningful phrases are identified and kept intact, rather than having parts of a phrase removed as stop words. Abbreviations that appear in many different forms are manually identified, and only one form of each abbreviation is used. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative-text injury dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse: few features are irrelevant, but many features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non-Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on these reduced feature spaces. In the experiments, a set of tests is conducted to determine which classification method is best for medical text classification. The Non-Negative Matrix Factorization with Support Vector Machine method achieves 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long-text classification, is inferior to binary weighting in short-document classification. Another finding is that the top-n terms should only be removed in consultation with medical experts, as their removal affects the classification performance.
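A minimal sketch of the best-performing combination reported above (binary term weighting, NNMF dimensionality reduction, linear SVM), written with scikit-learn; the component count and all other parameter choices are illustrative assumptions rather than values from the paper.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Binary weighting was reported to outperform TF/IDF on these short documents.
pipeline = make_pipeline(
    CountVectorizer(binary=True),   # binary bag-of-words features
    NMF(n_components=100),          # map the sparse term space to a lower-dimensional space
    LinearSVC(),                    # linear SVM classifier on the reduced features
)

# Hypothetical usage with injury narratives and their target codes:
# pipeline.fit(train_texts, train_codes)
# predicted_codes = pipeline.predict(test_texts)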
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the corresponding threshold determination method is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently in a theoretically sound way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modeling and approximation errors are analysed with simulation data to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modeling error. The threshold function method makes the fixed failure rate threshold determination method feasible for real-time applications.
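For reference, the "easy-to-calculate" integer bootstrapping success rate mentioned above is commonly evaluated with the standard closed-form expression from GNSS ambiguity resolution theory (this formula is background knowledge, not quoted from the paper):

P_{\mathrm{IB}} = \prod_{i=1}^{n}\left(2\,\Phi\!\left(\frac{1}{2\,\sigma_{\hat{z}_{i|I}}}\right)-1\right),
\qquad
\Phi(x)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\,e^{-t^{2}/2}\,\mathrm{d}t,

where \sigma_{\hat{z}_{i|I}} are the conditional standard deviations of the (decorrelated) ambiguities. Because this product requires only the conditional variances, it is far cheaper to evaluate than the ILS success rate, which is why it is attractive as the approximation step in the threshold function method.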
Abstract:
Vision-based place recognition involves recognising familiar places despite changes in environmental conditions or camera viewpoint (pose). Existing training-free methods exhibit excellent invariance to either of these challenges, but not to both simultaneously. In this paper, we present a technique for condition-invariant place recognition across large lateral platform pose variance for vehicles or robots travelling along routes. Our approach combines sideways-facing cameras with a new multi-scale image comparison technique that generates synthetic views for input into the condition-invariant Sequence Matching Across Route Traversals (SMART) algorithm. We evaluate the system’s performance on multi-lane roads in two different environments across day-night cycles. In the extreme case of day-night place recognition across the entire width of a four-lane-plus-median-strip highway, we demonstrate performance of up to 44% recall at 100% precision, where current state-of-the-art methods fail.
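The multi-scale comparison is not specified in the abstract; the sketch below only illustrates the general idea of synthesising views from a sideways-facing camera by cropping and rescaling, with a crude image-difference score standing in for the SMART matching stage. The crop scales, output resolution and scoring are assumptions, not values from the paper.

import numpy as np
import cv2  # OpenCV, used here only for resizing

def synthetic_views(image, scales=(1.0, 0.8, 0.6), out_size=(64, 32)):
    """Crop centred sub-windows at several scales and resize them to a common
    low resolution, approximating different lateral viewpoints."""
    h, w = image.shape[:2]
    views = []
    for s in scales:
        ch, cw = int(h * s), int(w * s)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = image[y0:y0 + ch, x0:x0 + cw]
        views.append(cv2.resize(crop, out_size))
    return views

def best_match_score(query_view, reference_views):
    """Smallest mean absolute pixel difference over all synthetic views;
    query_view is assumed to already be resized to the same out_size."""
    return min(np.mean(np.abs(query_view.astype(float) - v.astype(float)))
               for v in reference_views)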
Abstract:
Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive comparison of the utility of features from all 21 layers for place recognition, both for the benchmark dataset and for a second dataset with more significant viewpoint changes.
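As a rough sketch of the matching stage only (not the paper's implementation): given one CNN-layer descriptor per image, places can be compared by cosine similarity and then smoothed with a simple sequence filter. The window length and every other detail below are illustrative assumptions.

import numpy as np

def cosine_similarity_matrix(query_feats, ref_feats):
    """Pairwise cosine similarity between L2-normalised CNN descriptors
    (rows are images, columns are feature dimensions)."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    return q @ r.T

def sequence_filtered_matches(sim, window=5):
    """Score each (query, reference) pair by the mean similarity along a short
    aligned sequence, a crude stand-in for a sequential filter."""
    n_q, n_r = sim.shape
    scores = np.full(sim.shape, -np.inf)
    for i in range(n_q - window):
        for j in range(n_r - window):
            scores[i, j] = np.mean(np.diag(sim[i:i + window, j:j + window]))
    return scores.argmax(axis=1)  # best reference index for each query image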
Abstract:
Electric distribution networks are now in an era of transition from passive to active distribution networks with the integration of energy storage devices. Optimal usage of batteries and voltage control devices, along with other network upgrades, requires distribution expansion planning (DEP) that considers the inter-temporal dependencies between stages. This paper presents an efficient approach for solving multi-stage distribution expansion planning problems (MSDEPP) based on a forward-backward approach, considering energy storage devices such as batteries and voltage control devices such as voltage regulators and capacitors. The proposed algorithm is compared with three other techniques, namely full dynamic, forward fill-in and backward pull-out, in terms of precision and computational efficiency. The simulation results for the IEEE 13-bus network show that the proposed pseudo-dynamic forward-backward approach performs well in terms of both precision and optimization time.
Abstract:
The Archean Hollandaire volcanogenic massive sulfide deposit is a felsic–siliciclastic VMS deposit located in the Murchison Domain of the Youanmi Terrane, Yilgarn Craton, Western Australia. It is hosted in a succession of turbidites, mudstones and coherent rhyodacite sills, has been metamorphosed to upper greenschist/lower amphibolite facies, and includes a pervasive S1 deformational fabric. The coherent rhyodacitic sills are interpreted as syn-depositional based on geochemical similarities with well-known VMS-associated felsic rocks and on foliations similar to those of the metasediments. We offer several explanations for the absence of textural evidence (e.g. breccias) for syn-depositional origins: 1) the subaqueous sediments were dehydrated by long-lived magmatism such that no pore water remained to drive quench fragmentation; 2) pore space was occluded by burial; and/or 3) alteration overprinted and obscured primary breccias at contact margins. Mineralisation occurs by sub-seafloor replacement of original host rocks in two ore bodies, Hollandaire Main (~125 x >500 m and ~8 m thick) and Hollandaire West (~100 x 470 m and ~5 m thick), and takes three main textural styles: massive sulfides, which are exclusively hosted in turbidites and mudstones, and stringer and disseminated sulfides, which are also hosted in the coherent rhyodacite. Most sulfides have textures consistent with remobilisation and recrystallisation. Hydrothermal metamorphism has altered the hangingwall and footwall to similar degrees, with significant gains in Mg, Mn and K and losses in Na, Ca and Sr. Garnet and staurolite porphyroblasts also exhibit a footprint around mineralisation, extending up to 30 m both above and below the ore zone. High-precision thermal ionisation mass spectrometry of zircons extracted from the coherent rhyodacite yields an age of 2759.5 ± 0.9 Ma, which, along with geochemical comparisons, places the succession within the 2760–2735 Ma Greensleeves Formation of the Polelle Group of the Murchison Supergroup. Geochemical and geochronological evidence links the coherent rhyodacite sills to the Peter Well Granodiorite pluton ~2 km to the west, which acted as the heat engine driving hydrothermal circulation during VMS mineralisation. This study highlights the importance both of detailed physical volcanological studies, from which an accurate assessment of timing relationships (particularly the possibility of intrusions dismembering ore horizons) can be made, and of identifying synvolcanic plutons and other similar suites, for VMS exploration targeting in the Youanmi Terrane and worldwide.
Abstract:
Rendle-Short, Wilkinson, and Danby show how social interaction is directly relevant to maintaining friendships, mental health and well-being, and supportive peer relations. Using conversation analysis, the chapter focuses on conversational participants’ pursuit of affiliation and intimacy from a language as action perspective. It focuses on the use of derogatory naming practices by a 10-year-old girl diagnosed with Asperger’s Syndrome. The analysis shows how derogatory address terms, part of a wider pattern of behaviour evident in this child’s interaction, result in behaviour that might be thought of as impolite or lacking in restraint. It also illustrates how a single case study can draw attention to the context-specific nature of interaction when working with children with Asperger’s Syndrome. The chapter contributes to our understanding of the difficulty in pinpointing, with precision and with clear evidence, what counts as a ‘social interaction difficulty’ due to the context specific nature of interaction.
Abstract:
Experimental work can be conducted either in the laboratory or at a field site. Generally, laboratory experiments are carried out in an artificial setting and under a highly controlled environment. By contrast, field experiments take place in a natural setting and are subject to the influences of many uncontrolled factors. Therefore, it is necessary to carefully assess the possible limitations and the appropriateness of an experiment before embarking on it. In this paper, a case study of field monitoring of the energy performance of air conditioners is presented. Significant challenges facing the experimental work are described, and lessons learnt from this case study are discussed. In particular, it was found that ongoing analysis of the monitoring data and the correction of abnormal issues are two of the keys to a successful field test program. It was also shown that the installation of monitoring systems can have a significant impact on the accuracy of the data being collected. Before a monitoring system is set up to collect data, it is recommended that an initial analysis of sample monitored data be conducted to make sure that the monitoring data can achieve the expected precision. Where inherent errors are inevitably introduced by the installation of field monitoring systems, appropriate remediation may need to be developed and implemented to improve the accuracy of the estimated results. Ongoing analysis of monitoring data and the correction of any abnormal issues remain the key to a successful field test program.
Abstract:
The literature on “entrepreneurial opportunities” has grown rapidly since the publication of Shane and Venkataraman (2000). By directing attention to the earliest stages of development of new economic activities and organizations, it marks a sound redirection of entrepreneurship research. However, our review shows that theoretical and empirical progress has been limited on important aspects of the role of “opportunities” and their interaction with actors, i.e., the “nexus”. We argue that this is rooted in inherent and inescapable problems with the “opportunity” construct itself when it is applied in the context of a prospective, micro-level (i.e., individual[s], venture, or individual–venture dyad) view of entrepreneurial processes. We therefore suggest a fundamental re-conceptualization using the constructs External Enablers, New Venture Ideas, and Opportunity Confidence to capture the many important ideas commonly discussed under the “opportunity” label. This re-conceptualization makes important distinctions where prior conceptions have been blurred: between explananda and explanantia; between the actor and the entity acted upon; between external conditions and subjective perceptions; and between the contents and the favorability of the entity acted upon. These distinctions facilitate theoretical precision and can guide empirical investigation towards more fruitful designs.
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model, which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that enable the observation variance to be determined from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases, known as the time-differenced method and the polynomial prediction method respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and un-differenced data processing modes. However, the methods may be limited to normal ionospheric conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
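As a toy illustration of the time-differenced idea only (not the paper's algorithm): differencing successive epochs of a single-receiver observable removes the constant ambiguity and most of the slowly varying ionospheric bias, so the observation noise can be read off the differenced series. The helper below assumes white observation noise.

import numpy as np

def time_differenced_noise_std(series):
    """Estimate the observation noise standard deviation from a single-receiver
    time series (e.g. a geometry-free combination). For white noise,
    Var(x[k+1] - x[k]) = 2 * Var(noise), hence the sqrt(2) scaling."""
    diffs = np.diff(np.asarray(series, dtype=float))
    return np.std(diffs, ddof=1) / np.sqrt(2.0)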
Abstract:
Objective: To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design: Systematic review. Data sources: The electronic databases searched included PubMed, CINAHL, Medline, Google Scholar, and ProQuest. The bibliographies of all relevant articles were examined, and associated articles were identified using a snowballing technique. Selection criteria: For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods: The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results: Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods, used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, complexity of models and the integration of content and technical knowledge were discussed. Conclusions: The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field.
Abstract:
Aims: We assessed the diagnostic performance of z-scores to define a significant delta cardiac troponin (cTn) in a cohort of patients with well-defined clinical outcomes. Methods: We calculated z-scores, which depend on the analytical precision and biological variation, to report changes in cTn. We compared the diagnostic performances of a relative delta (%Δ), actual delta (Δ), and z-scores in 762 emergency department patients with symptoms of suspected acute coronary syndrome. cTn was measured with sensitive cTnI (Beckman Coulter), highly sensitive cTnI (Abbott), and highly sensitive cTnT (Roche) assays. Results: Receiver operating characteristic analysis showed no statistically significant differences in the areas under the curve (AUC) of z-scores and Δ, with both superior to %Δ for all three assays (p<0.001). The AUCs of z-scores measured with the Abbott hs-cTnI (0.955) and Roche hs-cTnT (0.922) assays were comparable to Beckman Coulter cTnI (0.933) (p=0.272 and 0.640, respectively). The individualized Δ cut-off values required to emulate a z-score of 1.96 were: Beckman Coulter cTnI 30 ng/l, Abbott hs-cTnI 20 ng/l, and Roche hs-cTnT 7 ng/l. Conclusions: z-scores allow the use of a single cut-off value at all cTn levels, for both cTnI and cTnT and for sensitive and highly sensitive assays, with comparable diagnostic performances. This strategy of reporting significant changes as z-scores may obviate the need for the empirical development of assay-specific cut-off rules to define significant troponin changes.
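The abstract does not reproduce the z-score formula; as general background (and not necessarily the exact formulation used in this study), a serial troponin change can be expressed as a z-score by scaling the observed delta by the variation expected from analytical imprecision and within-subject biological variation across two measurements:

z = \frac{x_2 - x_1}{\sqrt{2\,(s_A^2 + s_I^2)}}

where s_A is the analytical standard deviation and s_I the within-subject biological standard deviation at the relevant concentration. On this reading, |z| > 1.96 marks a change unlikely (at the 5% level) to arise from measurement and biological noise alone, which is consistent with the 1.96 threshold used above.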
Abstract:
A photochemical strategy enabling λ-orthogonal reactions is introduced to construct macromolecular architectures and to encode variable functional groups with site-selective precision into a single molecule by the choice of wavelength. λ-Orthogonal pericyclic reactions proceed independently of one another by the selection of functional groups that absorb light of specific wavelengths. The power of the new concept is shown by a one-pot reaction of equimolar quantities of maleimide with two polymers carrying different maleimide-reactive endgroups, that is, a photoactive diene (photoenol) and a nitrile imine (tetrazole). Under selective irradiation at λ=310–350 nm, any maleimide (or activated ene) end-capped compound reacts exclusively with the photoenol functional polymer. After complete conversion of the photoenol, subsequent irradiation at λ=270–310 nm activates the reaction of the tetrazole group with functional enes. The versatility of the approach is shown by λ-orthogonal click reactions of complex maleimides, functional enes, and polymers to the central polymer scaffold.