949 results for mapping method
Abstract:
A method called "SymbolDesign" is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.
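As a rough illustration of the idea, the sketch below classifies resampled single-stroke traces into user-chosen commands with a small neural network whose hidden layer is grown until it fits the training examples. The feature extraction and the growth rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def stroke_features(points, n=16):
    """Resample a stroke (sequence of (x, y), distinct consecutive points)
    to n arc-length-equidistant points, normalised for translation and scale."""
    pts = np.asarray(points, dtype=float)
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    res = np.c_[np.interp(t, d, pts[:, 0]), np.interp(t, d, pts[:, 1])]
    res -= res.mean(axis=0)                            # translation invariance
    return (res / (np.abs(res).max() or 1.0)).ravel()  # scale invariance

def fit_classifier(strokes, commands, hidden_sizes=(4, 8, 16, 32, 64)):
    """Grow the hidden layer until the training strokes are fully separated,
    mimicking a network capacity that adjusts to task complexity."""
    X = np.array([stroke_features(s) for s in strokes])
    for h in hidden_sizes:
        net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000)
        net.fit(X, commands)
        if net.score(X, commands) == 1.0:              # complexity satisfied
            return net
    return net
```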
Abstract:
BACKGROUND: Arrhythmia recurrence after cardiac radiofrequency ablation (RFA) for atrial fibrillation has been linked to conduction through discontinuous lesion lines. Intraprocedural visualization and corrective ablation of lesion line discontinuities could decrease postprocedure atrial fibrillation recurrence. Intracardiac acoustic radiation force impulse (ARFI) imaging is a new imaging technique that visualizes RFA lesions by mapping the relative elasticity contrast between compliant, unablated myocardium and stiff, RFA-treated myocardium. OBJECTIVE: To determine whether intraprocedural ARFI images can identify RFA-treated myocardium in vivo. METHODS: In 8 canines, an electroanatomical mapping-guided intracardiac echo catheter was used to acquire 2-dimensional ARFI images along right atrial ablation lines before and after RFA. ARFI images were acquired during diastole with the myocardium positioned at the ARFI focus (1.5 cm) and parallel to the intracardiac echo transducer for maximal and uniform energy delivery to the tissue. Three reviewers categorized each ARFI image as depicting no lesion, noncontiguous lesion, or contiguous lesion. For comparison, 3 separate reviewers confirmed RFA lesion presence and contiguity on the basis of functional conduction block at the imaging plane location on electroanatomical activation maps. RESULTS: Ten percent of ARFI images were discarded because of motion artifacts. Reviewers of the ARFI images detected RFA-treated sites with high sensitivity (95.7%) and specificity (91.5%). Reviewer identification of contiguous lesions had 75.3% specificity and 47.1% sensitivity. CONCLUSIONS: Intracardiac ARFI imaging was successful in identifying endocardial RFA treatment when specific imaging conditions were maintained. Further advances in ARFI imaging technology would facilitate a wider range of imaging opportunities for clinical lesion evaluation.
Abstract:
Surrogate-based-optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models: that is, models for which the design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method for two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than did the benchmark.
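The following minimal sketch shows the general shape of such a trust-region model-management loop, here with a simple additive (zeroth-order) correction standing in for corrected space mapping; `hi` and `lo` are the high- and low-fidelity objectives, and all details are assumptions rather than the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def trust_region_mm(hi, lo, x0, radius=1.0, tol=1e-6, max_iter=50):
    """Minimize hi(x) using a corrected low-fidelity model within a trust region."""
    x = np.asarray(x0, dtype=float)
    f_hi = hi(x)
    for _ in range(max_iter):
        # Zeroth-order additive correction: model matches hi at the center.
        shift = f_hi - lo(x)
        model = lambda s: lo(x + s) + shift
        # Trust-region subproblem as a bound-constrained step about x.
        bounds = [(-radius, radius)] * x.size
        s = minimize(model, np.zeros_like(x), bounds=bounds).x
        f_new = hi(x + s)
        pred = f_hi - model(s)           # reduction predicted by the model
        actual = f_hi - f_new            # reduction actually achieved
        rho = actual / pred if pred > 0 else 0.0
        if rho > 0.1:                    # accept the step
            x, f_hi = x + s, f_new
        radius = radius * 2 if rho > 0.75 else radius * 0.5 if rho < 0.25 else radius
        if radius < tol:
            break
    return x, f_hi
```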
Abstract:
Connectivity mapping is a recently developed technique for discovering the underlying connections between different biological states based on gene-expression similarities. The sscMap method has been shown to provide enhanced sensitivity in mapping meaningful connections leading to testable biological hypotheses and in identifying drug candidates with particular pharmacological and/or toxicological properties. Challenges remain, however, as to how to prioritise the large number of discovered connections in an unbiased manner such that the success rate of any follow-up investigation can be maximised. We introduce a new concept, gene-signature perturbation, which aims to test whether an identified connection is stable under systematic minor changes (perturbations) to the gene signature. We applied the perturbation method to three independent datasets obtained from the GEO database: acute myeloid leukemia (AML), cervical cancer, and breast cancer treated with letrozole. We demonstrate that the perturbation approach helps to identify meaningful biological connections which suggest the most relevant candidate drugs. In the case of AML, we found that the prevalent compounds were retinoic acids and PPAR activators. For cervical cancer, our results suggested that potential drugs are likely to involve the EGFR pathway; and with the breast cancer dataset, we identified candidates that are involved in prostaglandin inhibition. Thus the gene-signature perturbation approach added real value to the whole connectivity mapping process, allowing for increased specificity in the identification of possible therapeutic candidates.
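A schematic of the perturbation test might look like the following, where `connection_score` is a hypothetical stand-in for the sscMap significance call on a signature-compound pair; only connections that survive most perturbed signatures are retained.

```python
import random

def stable_connections(signature, compounds, connection_score,
                       n_perturb=100, drop=1, keep_frac=0.95):
    """Return compounds whose connection survives most minor perturbations.

    signature: list of genes; connection_score(signature, compound)
    returns True when the connection is significant (hypothetical API).
    """
    stable = []
    for compound in compounds:
        hits = 0
        for _ in range(n_perturb):
            # Minor systematic change: remove `drop` random genes.
            perturbed = random.sample(signature, len(signature) - drop)
            if connection_score(perturbed, compound):
                hits += 1
        if hits >= keep_frac * n_perturb:
            stable.append(compound)
    return stable
```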
Abstract:
Background: Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and subsequent digital scan (TMA digital slide) is often disturbed, making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered and the grid can contain numerous irregularities such as missing cores, rotation and stretching. These factors demand the development of a robust method for de-arraying TMAs which identifies each TMA core and assigns it to its appropriate coordinates on the constructed TMA slide.
Methodology: This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding and mapping. The segmentation phase uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay-triangulation-based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high-resolution TMA whole-slide image with its name within a TMAMap.
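As a rough sketch of the gridding phase (the paper's Delaunay-triangulation-based method is replaced here by a cruder gap-based row clustering that assumes only mild rotation and stretching), row and column indices can be assigned as follows:

```python
import numpy as np

def grid_cores(centroids):
    """centroids: (N, 2) array of core (x, y); returns {index: (row, col)}."""
    pts = np.asarray(centroids, dtype=float)
    order = np.argsort(pts[:, 1])                  # sort cores by y
    gaps = np.diff(pts[order, 1])
    # Most consecutive gaps are within-row jitter; a much larger gap
    # marks the start of the next row.
    thresh = 3 * np.median(gaps) if gaps.size else 0.0
    row_of = np.concatenate(([0], np.cumsum(gaps > thresh)))
    indices = {}
    for r in range(row_of[-1] + 1):
        members = order[row_of == r]
        members = members[np.argsort(pts[members, 0])]  # sort row by x
        for c, idx in enumerate(members):
            indices[int(idx)] = (r, c)
    return indices
```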
Conclusion: This study describes a robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partitioning of each TMA core for further processing. Based on a test group of 19 TMA slides (3129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly and 99.96% of cores were mapped with their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts including missing cores, rotation and stretching. 100% of the cores were gridded correctly.
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach that maps algorithm tasks to processors, together with a new low-complexity list scheduling technique that generates the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
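The scheduling component can be illustrated with a generic low-complexity list scheduler: tasks are taken in a precedence-respecting priority order and placed on the earliest-available processor, so independent tasks naturally interleave to fill pipeline bubbles. This is a textbook list-scheduling sketch, not the paper's specific FSD mapping heuristic.

```python
import heapq

def list_schedule(tasks, deps, duration, n_procs):
    """tasks: ids in priority order (prerequisites first);
    deps: id -> set of prerequisite ids; duration: id -> run time.
    Returns {task id: (processor, start time)}."""
    finish = {}                                   # task id -> completion time
    procs = [(0.0, p) for p in range(n_procs)]    # (free-at time, proc id)
    heapq.heapify(procs)
    schedule = {}
    for t in tasks:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        free_at, p = heapq.heappop(procs)         # earliest-available processor
        start = max(ready, free_at)
        finish[t] = start + duration[t]
        schedule[t] = (p, start)
        heapq.heappush(procs, (finish[t], p))
    return schedule
```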
Abstract:
This paper presents an Invariant Information Local Sub-map Filter (IILSF) as a technique for consistent Simultaneous Localisation and Mapping (SLAM) in a large environment. It harnesses the benefits of the sub-map technique to improve the consistency and efficiency of Extended Kalman Filter (EKF) based SLAM. The IILSF makes use of invariant information obtained from estimated locations of features in independent sub-maps, instead of incorporating every observation directly into the global map; the global map is then updated at regular intervals. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational complexity of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. Simulation results show that the method was able to accurately fuse local map observations to generate an efficient and consistent global map, in addition to significantly reducing computational cost and data association ambiguities.
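The fusion step at the heart of the sub-map idea can be sketched as follows: each completed sub-map contributes locally estimated feature locations, assumed here to be already transformed into the global frame as independent Gaussian estimates, which are combined into the global map in information form rather than re-processing raw observations. The details of the invariant information formulation are simplified away.

```python
import numpy as np

def fuse_gaussian(a, b):
    """Information-form fusion of two independent Gaussian estimates (mean, cov)."""
    if a is None:
        return b
    (xa, Pa), (xb, Pb) = a, b
    Ia, Ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(Ia + Ib)
    return P @ (Ia @ xa + Ib @ xb), P

def fuse_submap(global_map, submap):
    """Merge a sub-map's feature estimates {id: (mean, cov)} into the global map."""
    for fid, est in submap.items():
        global_map[fid] = fuse_gaussian(global_map.get(fid), est)
    return global_map
```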
Abstract:
A technique for optimizing the efficiency of the sub-map method for large-scale simultaneous localization and mapping (SLAM) is proposed. It builds on the benefits of the sub-map technique to improve the accuracy and consistency of extended Kalman filter (EKF)-based SLAM. Error models were developed and used to investigate some of the outstanding issues in employing the sub-map technique in SLAM. Such issues include the size (distance) of an optimal sub-map; the acceptable error effect caused by the process noise covariance on the predictions and estimations made within a sub-map; when to terminate an existing sub-map and start a new one; and the magnitude of the process noise covariance that could produce such an effect. Numerical results obtained from the study and an error-correcting process were used to optimize the accuracy and convergence of the previously proposed Invariant Information Local Sub-map Filter. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational burden of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. A Monte Carlo analysis of the system is presented as a means of demonstrating the consistency and efficacy of the proposed technique.
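A minimal illustration of one of the questions studied, when to close a sub-map, is given below; the trace criterion and thresholds are assumptions for illustration, not the paper's derived values.

```python
import numpy as np

def predict_cov(P, F, Q):
    """One EKF prediction step: process noise Q inflates the pose covariance."""
    return F @ P @ F.T + Q

def submap_should_close(P, travelled, max_trace=0.5, max_dist=50.0):
    """Close the sub-map once accumulated uncertainty or size (distance) is too large."""
    return np.trace(P) > max_trace or travelled > max_dist
```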
Abstract:
In recent years, sonification of movement has emerged as a viable method for the provision of feedback in motor learning. Despite some experimental validation of its utility, controlled trials testing the usefulness of sonification in a motor learning context are still rare. As such, there are no accepted conventions for its implementation. This article addresses the question of how continuous movement information should best be presented as sound to be fed back to the learner. It is proposed that to establish effective approaches to using sonification in this context, consideration must be given to the processes that underlie motor learning, in particular the nature of the perceptual information available to the learner for performing the task at hand. Although sonification has much potential in movement performance enhancement, this potential remains largely unrealised, in part due to the lack of a clear framework for sonification mapping: the relationship between movement and sound. By grounding mapping decisions in a firmer understanding of how perceptual information guides learning, and in an embodied cognition stance in general, it is hoped that greater advances in the use of sonification to enhance motor learning can be achieved.
Abstract:
BACKGROUND: While the discovery of new drugs is a complex, lengthy and costly process, identifying new uses for existing drugs is a cost-effective approach to therapeutic discovery. Connectivity mapping integrates gene expression profiling with advanced algorithms to connect genes, diseases and small-molecule compounds, and has been applied in a large number of studies to identify potential drugs, particularly to facilitate drug repurposing. Colorectal cancer (CRC) is a commonly diagnosed cancer with high mortality rates, presenting a worldwide health problem. With the advancement of high-throughput omics technologies, a number of large-scale gene expression profiling studies have been conducted on CRCs, providing multiple datasets in gene expression data repositories. In this work, we systematically apply gene expression connectivity mapping to multiple CRC datasets to identify candidate therapeutics for this disease.
RESULTS: We developed a robust method to compile a combined gene signature for colorectal cancer across multiple datasets. Connectivity mapping analysis with this signature of 148 genes identified 10 candidate compounds, including irinotecan and etoposide, which are chemotherapy drugs currently used to treat CRCs. These results indicate that we have discovered high-quality connections between the CRC disease state and the candidate compounds, and that the gene signature we created may be used as a potential therapeutic target in treating the disease. The proposed method is highly effective in generating a quality gene signature from multiple datasets; the publication of the combined CRC gene signature and the list of candidate compounds from this work will benefit both the cancer and systems biology research communities for further development and investigation.
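A possible shape for the signature-compilation step is sketched below: genes are kept only if they are significantly and consistently regulated in the same direction in every dataset. The input format (per-dataset {gene: (log fold change, p-value)}) and the thresholds are assumptions, not the paper's exact procedure.

```python
def combined_signature(datasets, p_cutoff=0.05):
    """datasets: list of {gene: (log_fold_change, p_value)} dicts.
    Returns {gene: +1 (up) or -1 (down)} for consistently regulated genes."""
    common = set.intersection(*(set(d) for d in datasets))
    signature = {}
    for gene in common:
        stats = [d[gene] for d in datasets]
        directions = {1 if lfc > 0 else -1 for lfc, p in stats}
        # Keep only genes significant in all datasets with one direction.
        if len(directions) == 1 and all(p < p_cutoff for _, p in stats):
            signature[gene] = directions.pop()
    return signature
```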
Abstract:
Chromatin immunoprecipitation (ChIP) allows enrichment of genomic regions which are associated with specific transcription factors, histone modifications, and indeed any other epitopes which are present on chromatin. The original ChIP methods used site-specific PCR and Southern blotting to confirm which regions of the genome were enriched, on a candidate basis. The combination of ChIP with genomic tiling arrays (ChIP-chip) allowed a more unbiased approach to map ChIP-enriched sites. However, limitations of microarray probe design and probe number have a detrimental impact on the coverage, resolution, sensitivity, and cost of whole-genome tiling microarray sets for higher eukaryotes with large genomes. The combination of ChIP with high-throughput sequencing technology has allowed more comprehensive surveys of genome occupancy, greater resolution, and lower cost for whole-genome coverage. Herein, we provide a comparison of high-throughput sequencing platforms and a survey of ChIP-seq analysis tools, discuss experimental design, and describe a detailed ChIP-seq method.
Abstract:
Purpose
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions.
Methods
Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms.
Results
Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1), respectively. Utility values obtained from the original tariff, the alternative tariff, and the mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except for the original tariff and the five-subscale algorithm.
Conclusion
Initial evidence suggests that the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would contribute to this line of research.
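To make the mapping step concrete, the sketch below applies a linear mapping from SDQ subscale scores to CHU9D utility. The coefficients are placeholders only, not the published algorithm's values; the coefficients from the mapping study being tested would take their place.

```python
# Hypothetical linear mapping: utility = intercept + sum(coef * subscale score).
SUBSCALES = ["emotional", "conduct", "hyperactivity", "peer", "prosocial"]
COEF = {s: -0.01 for s in SUBSCALES}   # placeholder slopes, NOT published values
COEF["prosocial"] = 0.005              # placeholder
INTERCEPT = 0.95                       # placeholder

def sdq_to_utility(scores):
    """scores: {subscale: raw SDQ subscale score}; returns predicted CHU9D utility."""
    return INTERCEPT + sum(COEF[s] * scores[s] for s in SUBSCALES)
```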
Abstract:
This paper proposes a design method for the realisation of circularly polarised frequency selective surfaces (CP FSS). An equivalent circuit model for a capacitive asymmetric loop FSS is proposed, and from this model a set of nonlinear design equations for CP operation is obtained. Based on space mapping between the circuit model and full-wave simulation, a fast-converging design method for CP FSS synthesis is demonstrated for the first time.
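For background, the classic aggressive space mapping iteration (a standard formulation, not necessarily the exact scheme used in this paper) extracts the coarse-model parameters that reproduce the full-wave response and drives them toward the coarse optimum with a quasi-Newton step:

```latex
% Parameter extraction: coarse (equivalent-circuit) parameters matching
% the fine (full-wave) response R_f at the current design x_f.
p(x_f) \;=\; \arg\min_{x_c}\,\bigl\lVert R_c(x_c) - R_f(x_f) \bigr\rVert ,
% Quasi-Newton update driving p(x_f) toward the coarse optimum x_c^*,
% with B_k a Broyden-updated Jacobian estimate of p.
\qquad
x_f^{(k+1)} \;=\; x_f^{(k)} - B_k^{-1}\Bigl(p\bigl(x_f^{(k)}\bigr) - x_c^{*}\Bigr).
```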
Abstract:
Most simultaneous localisation and mapping (SLAM) solutions were developed for the navigation of non-cognitive robots. Using a variety of sensors, the distances to walls and other objects are determined and then used to generate a map of the environment and to update the robot's position. When developing a cognitive robot, such a solution is not appropriate: it requires accurate sensors and precise odometry, and it lacks fundamental features of cognition such as time and memory. In this paper we present a SLAM solution in which such features are taken into account and integrated. Moreover, this method requires neither precise odometry nor accurate ranging sensors.
Abstract:
Energy saving, reduction of greenhouse gases and increased use of renewables are key policies for achieving the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and the power system; nevertheless, a significant spatial footprint is still present and good spatial planning is a necessity. To optimise the location of SMWTs, detailed knowledge of the spatial distribution of the average wind speed is essential. Hence, in this article, wind measurements and roughness maps were used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth's surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, it was found that the transformation into mesoscale wind, in combination with Simple Kriging, was the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders.
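For reference, Simple Kriging with a known mean reduces to solving one linear system per set of prediction targets; the exponential covariance below is an assumption standing in for whatever variogram model is fitted to the station data.

```python
import numpy as np

def simple_kriging(obs_xy, obs_z, grid_xy, mean, sill=1.0, rng=10_000.0):
    """Predict values at grid_xy from observations (obs_xy, obs_z) with known mean."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / rng)        # exponential covariance model
    C = cov(obs_xy, obs_xy)                   # covariance among stations
    c0 = cov(obs_xy, grid_xy)                 # station-to-target covariance
    w = np.linalg.solve(C, c0)                # kriging weights, one column per target
    return mean + w.T @ (obs_z - mean)        # mean plus weighted residuals
```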