943 results for Localization accuracy metrics


Relevance: 30.00%

Abstract:

We propose an optimization-based framework to minimize the energy consumption of a sensor network that uses an indoor localization system combining received signal strength (RSS) and pedestrian dead reckoning (PDR). The objective is to find the RSS localization frequency and the number of RSS measurements used at each localization round that jointly minimize the total consumed energy, while simultaneously ensuring a desired accuracy in the localization result. The optimization approach leverages practical models to predict the localization error and the overall energy consumption of combined RSS-PDR localization systems. The performance of the proposed strategy is assessed through simulation, showing energy savings with respect to other approaches while guaranteeing a target accuracy.
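
The abstract does not reproduce the paper's energy and error models, but the shape of the optimization can be sketched with stand-in models: a grid search over the RSS period T and the number of RSS measurements m per round, where energy grows with the measurement rate and localization error grows with PDR drift between RSS fixes. All model forms and constants below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical models (the paper's actual models are not reproduced here):
# energy per round grows with the number of RSS measurements m, while the
# localization error shrinks with m and grows with the RSS period T,
# because PDR drift accumulates between RSS fixes.
def energy(T, m, e_rss=1.0, e_pdr=0.05, horizon=3600.0):
    rounds = horizon / T
    return rounds * m * e_rss + horizon * e_pdr

def error(T, m, sigma_rss=4.0, drift=0.02):
    return sigma_rss / np.sqrt(m) + drift * T   # illustrative error model

target = 2.0  # required accuracy in metres (assumed)

# Feasible-set grid search: cheapest (T, m) that still meets the target.
best = min(
    ((T, m) for T in np.arange(1, 121) for m in range(1, 21)
     if error(T, m) <= target),
    key=lambda p: energy(*p),
)
print("RSS period [s] and measurements per round:", best)
```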

Relevance: 30.00%

Abstract:

Received signal strength (RSS)-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change after some time; thus, recalibration is necessary to maintain positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares (LMS) approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost, so it can be used in real-time operation on resource-constrained wireless nodes.
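
As a rough illustration (not the paper's exact algorithm, which minimizes the positioning error), the sketch below calibrates the two parameters of a log-distance path-loss model, rss = P0 - 10·n·log10(d), from simulated sniffer samples using a normalized LMS update on the RSS prediction error; all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Log-distance path-loss model for one anchor: rss = P0 - 10*n*log10(d).
theta = np.array([-35.0, 2.0])        # initial guess for [P0, n] (assumed)
true_theta = np.array([-40.0, 2.7])   # ground truth used to simulate sniffers
mu = 0.05                             # NLMS step size (0 < mu < 2 is stable)

for _ in range(5000):
    d = rng.uniform(1.0, 30.0)                  # known sniffer-anchor distance
    x = np.array([1.0, -10.0 * np.log10(d)])    # regressor
    rss = true_theta @ x + rng.normal(0, 2.0)   # measured RSS sample [dBm]
    err = rss - theta @ x                       # prediction error
    theta += mu * err * x / (x @ x)             # normalized LMS update

print("calibrated [P0, n]:", theta.round(2))
```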

Relevance: 30.00%

Abstract:

In this paper we present a novel Radio Frequency Identification (RFID) system for accurate indoor localization. The system is composed of a standard Ultra High Frequency (UHF), ISO 18000-6C-compliant RFID reader, a large set of standard passive RFID tags whose locations are known, and a newly developed tag-like RFID component that is attached to the items to be localized. The new semi-passive component, referred to as a sensatag (sense-a-tag), has a dual functionality: it can sense the communication between the reader and standard tags in its proximity, and it can communicate with the reader like a standard tag using backscatter modulation. Based on the information conveyed by the sensatags to the reader, localization algorithms based on binary sensor principles can be developed. We present results from real measurements that show the accuracy of the proposed system.
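
A minimal sketch of the binary-sensor principle the abstract mentions: estimate the item's position as the centroid of the reference tags whose reader communication the sensatag overheard. The tag grid, detection radius, and positions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid of standard passive tags at known positions (0.5 m pitch, assumed).
gx, gy = np.meshgrid(np.arange(0, 5, 0.5), np.arange(0, 5, 0.5))
ref_tags = np.column_stack([gx.ravel(), gy.ravel()])

item = np.array([2.3, 1.7])   # unknown item position carrying the sensatag
r_detect = 0.8                # sensatag detection radius (assumed)

# Binary observations: which reader-to-tag exchanges the sensatag overheard.
detected = np.linalg.norm(ref_tags - item, axis=1) <= r_detect

# Binary-sensor estimate: centroid of the detected reference tags.
estimate = ref_tags[detected].mean(axis=0)
print("estimate:", estimate, "truth:", item)
```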

Relevance: 30.00%

Abstract:

Non-parametric belief propagation (NBP) is a well-known message-passing method for cooperative localization in wireless networks. However, due to the over-counting problem in networks with loops, NBP's convergence is not guaranteed, and its estimates are typically less accurate. One solution to this problem is non-parametric generalized belief propagation based on a junction tree. However, this method is intractable in large-scale networks due to the high complexity of junction-tree formation and the high dimensionality of the particles. Therefore, in this article, we propose non-parametric generalized belief propagation based on a pseudo-junction tree (NGBP-PJT). The main difference from the standard method is the formation of the pseudo-junction tree, an approximation of the junction tree based on a thin graph. In addition, in order to decrease the number of high-dimensional particles, we use a more informative importance density function and reduce the dimensionality of the messages. As a by-product, we also propose NBP based on a thin graph (NBP-TG), a cheaper variant of NBP, which runs on the same graph as NGBP-PJT. According to our simulation and experimental results, NGBP-PJT outperforms NBP and NBP-TG in terms of accuracy and of computational and communication cost in reasonably sized networks.
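
The full NGBP-PJT machinery is beyond a short sketch, but the non-parametric core idea, representing a node's position belief by particles and reweighting them with measurement likelihoods during message passing, can be illustrated in a toy 2D setting (geometry and noise values assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D network: two anchors at known positions, one node at an unknown one.
anchors = np.array([[0.0, 0.0], [10.0, 0.0]])
true_pos = np.array([4.0, 3.0])
sigma = 0.3  # ranging noise standard deviation (assumed)

# Noisy range measurements from the unknown node to each anchor.
z = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 2)

# Particle representation of the node's belief (the "non-parametric" part).
particles = rng.uniform([0, 0], [10, 10], size=(2000, 2))

# One belief-update step: weight particles by the likelihood of the observed
# ranges, then resample to concentrate the belief.
d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
w = np.exp(-0.5 * ((d - z) ** 2 / sigma**2).sum(axis=1))
w /= w.sum()
particles = particles[rng.choice(len(particles), size=len(particles), p=w)]

print("estimate:", particles.mean(axis=0), "truth:", true_pos)
```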

Relevance: 30.00%

Abstract:

Three-dimensional kinematic analysis provides a quantitative assessment of upper limb motion and is used as an outcome measure to evaluate movement disorders. The aim of the present study is to present a set of kinematic metrics for quantifying characteristics of movement performance and the functional status of the subject during the execution of the activity of daily living (ADL) of drinking from a glass, then to apply these metrics to healthy people and to a population with cervical spinal cord injury (SCI), and to analyze the metrics' ability to discriminate between healthy and pathological subjects. Nineteen people participated in the study: 7 subjects with metameric level C6 tetraplegia, 4 subjects with metameric level C7 tetraplegia, and 8 healthy subjects. The movement was recorded with a photogrammetry system. The ADL of drinking was divided into a series of clearly identifiable phases to facilitate analysis. Metrics describing the time of the reaching phase, the range of motion of the analyzed joints, and characteristics of movement performance such as the efficiency, accuracy and smoothness of the distal segment and inter-joint coordination were obtained. The performance of the drinking task was more variable in people with SCI than in the control group across the measured metrics, and reaching time was longer in the SCI groups. The proposed metrics showed the capability to discriminate between healthy and pathological subjects; relative deficits in efficiency were larger in people with SCI than in controls. These metrics can provide useful information in a clinical setting about the quality of the movement performed by healthy and SCI people during functional activities.
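
The paper's exact metric definitions are not given in the abstract; as a hedged sketch, two commonly used movement-performance metrics, path efficiency (straight-line distance over path length) and a dimensionless normalized-jerk smoothness score, can be computed from a recorded distal-segment trajectory as follows:

```python
import numpy as np

def reaching_metrics(pos, dt):
    """pos: (N, 3) distal-segment trajectory sampled every dt seconds."""
    path = np.linalg.norm(np.diff(pos, axis=0), axis=1).sum()
    straight = np.linalg.norm(pos[-1] - pos[0])
    efficiency = straight / path           # 1.0 = perfectly straight reach

    vel = np.gradient(pos, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    T = dt * (len(pos) - 1)
    # Dimensionless normalized jerk (lower = smoother movement).
    nj = np.sqrt(0.5 * (np.linalg.norm(jerk, axis=1) ** 2).sum() * dt
                 * T**5 / straight**2)
    return efficiency, nj

# Synthetic reach: slightly curved 1-second movement sampled at 100 Hz.
t = np.linspace(0, 1, 101)
pos = np.column_stack([0.3 * t, 0.1 * np.sin(np.pi * t), 0.2 * t])
print(reaching_metrics(pos, t[1] - t[0]))
```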

Relevance: 30.00%

Abstract:

A methodology has been developed for the study of molecular recognition at the level of single events and for the localization of sites on biosurfaces, by combining force microscopy with molecular recognition by specific ligands. For this goal, a sensor was designed by covalently linking an antibody (anti-human serum albumin, polyclonal) via a flexible spacer to the tip of a force microscope. This sensor permitted detection of single antibody-antigen recognition events through force signals of unique shape, with an unbinding force of 244 ± 22 pN. Analysis revealed that the observed unbinding forces originate from the dissociation of individual Fab fragments from a human serum albumin molecule. The two Fab fragments of the antibody were found to bind independently and with equal probability. The flexible linkage provided the antibody with a 6-nm dynamic reach for binding, rendering the binding probability high (0.5 for encounter times of 60 ms). This permitted fast and reliable detection of antigenic sites during lateral scans, with a positional accuracy of 1.5 nm. This methodology shows promise for characterizing rate constants and kinetics of molecular recognition complexes and for molecular mapping of biosurfaces such as membranes.

Relevance: 30.00%

Abstract:

Motivation: Targeting peptides direct nascent proteins to their specific subcellular compartments. Knowledge of targeting signals enables informed drug design and reliable annotation of gene products. However, due to the low similarity of such sequences and the dynamic nature of the sorting process, the computational prediction of the subcellular localization of proteins is challenging. Results: We contrast the use of feed-forward models, as employed by the popular TargetP/SignalP predictors, with a sequence-biased recurrent network model. The models are evaluated in terms of performance at the residue level and at the sequence level, and the results demonstrate that recurrent networks improve the overall prediction performance. Compared to the original results reported for TargetP, an ensemble of the tested models increases the accuracy by 6% and 5% on non-plant and plant data, respectively.
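
As an illustration of the residue-level versus sequence-level distinction (with invented probabilities, window sizes, and thresholds, not TargetP's internals), an ensemble prediction might be combined like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-residue probabilities from 3 predictors for one protein:
# p[k, i] = predictor k's probability that residue i belongs to a targeting
# peptide. Shapes and thresholds are illustrative only.
p = np.clip(rng.normal(0.7, 0.2, size=(3, 60)), 0, 1)

ensemble = p.mean(axis=0)          # residue-level ensemble average
residue_calls = ensemble > 0.5     # residue-level decisions

# Sequence-level decision: call a targeting peptide if the mean probability
# over the N-terminal window exceeds a threshold (both values assumed).
has_peptide = ensemble[:30].mean() > 0.6
print(residue_calls[:10], has_peptide)
```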

Relevance: 30.00%

Abstract:

Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function, and this information can provide insight into the function of hypothetical or novel proteins. Such data are difficult to obtain experimentally but have become especially important since many whole-genome sequencing projects have been completed and many of the resulting protein sequences still lack detailed functional information. To address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently depending on the sequences presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: To perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in the development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE, biased towards the subcellular localizations underrepresented in SwissProt, was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets to enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset, and performance varied across individual subcellular localizations. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
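
The per-location sensitivity and specificity used in such a survey follow the standard one-vs-rest definitions; a minimal sketch with invented labels:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred, location):
    """One-vs-rest sensitivity and specificity for a single location."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pos = y_true == location
    tp = np.sum(pos & (y_pred == location))    # true positives
    tn = np.sum(~pos & (y_pred != location))   # true negatives
    return tp / pos.sum(), tn / (~pos).sum()

# Invented example labels, just to exercise the function.
y_true = ["nucleus", "cytosol", "nucleus", "ER", "nucleus", "cytosol"]
y_pred = ["nucleus", "nucleus", "nucleus", "ER", "cytosol", "cytosol"]
print(sensitivity_specificity(y_true, y_pred, "nucleus"))
```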

Relevance: 30.00%

Abstract:

In this paper, we study an area localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). The limited bandwidth, the severely impaired channel, and the cost of underwater equipment all make the underwater localization problem very challenging, and exact localization is very difficult for UWSNs in deep underwater environments. We propose an efficient mobile-DET-based 3D multi-power Area Localization Scheme (3D-MALS) to address this challenging problem. The proposed scheme combines the ideas of the 2D multi-power Area Localization Scheme (2D-ALS) [6] with a Detachable Elevator Transceiver (DET) to achieve simplicity, location accuracy, scalability, and low cost. The DET can rise and descend to broadcast its position, and it is assumed that all underwater nodes have pressure sensors and hence know their z-coordinates. The simulation results show that our proposed scheme is very efficient. © 2009 IEEE.
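
One way to read the multi-power idea (an assumption on our part, not the paper's pseudocode): the weakest beacon level a node hears upper-bounds its distance to the DET, the strongest level it misses lower-bounds it, and the known z-coordinate converts these slant bounds into a horizontal annulus:

```python
import math

# Illustrative slant ranges [m] of the DET's beacon power levels (assumed).
ranges = [10.0, 20.0, 40.0, 80.0]

def area_estimate(det_z, node_z, heard):
    """heard[i] = True if the node received the level-i beacon.
    Returns (r_min, r_max): bounds on the node's horizontal distance from
    the DET, i.e. an annulus centred on the DET's (x, y) position."""
    dz = node_z - det_z
    slant_max = min((r for r, h in zip(ranges, heard) if h),
                    default=float("inf"))     # weakest beacon heard
    slant_min = max((r for r, h in zip(ranges, heard) if not h),
                    default=0.0)              # strongest beacon missed
    to_xy = lambda s: math.sqrt(max(s * s - dz * dz, 0.0))
    return to_xy(slant_min), to_xy(slant_max)

# A node at 30 m depth hears only the two strongest beacons (levels 2, 3).
print(area_estimate(det_z=0.0, node_z=30.0, heard=[False, False, True, True]))
```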

Relevance: 30.00%

Abstract:

Using binary relations, the author analyses the task of an individual consumer's choice over a set of teaching excerpts. The consumer's value function is proposed to be analysed as an additive reduction (a weighted sum of criteria). To localize the vector of weighting coefficients of the additive reduction, procedures based on metrics of the distance from an object to the ideal point are suggested.
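
A minimal sketch of the two ingredients the abstract names, an additive value function and a weighted distance-to-ideal-point metric, with invented scores and weights:

```python
import numpy as np

# Alternatives (teaching excerpts) scored on three normalized criteria.
A = np.array([[0.9, 0.4, 0.7],
              [0.6, 0.8, 0.5],
              [0.7, 0.7, 0.9]])
w = np.array([0.5, 0.2, 0.3])          # weighting coefficients (assumed)
ideal = A.max(axis=0)                  # component-wise ideal point

additive_value = A @ w                                  # additive reduction
dist_to_ideal = np.sqrt(((A - ideal) ** 2 * w).sum(1))  # weighted metric

print("choice by additive value:  ", additive_value.argmax())
print("choice by distance to ideal:", dist_to_ideal.argmin())
```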

Relevance: 30.00%

Abstract:

One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface does not directly indicate the location of the active neuronal assemblies. This reflects the ambiguity of the underlying static electromagnetic inverse problem, due in part to the relatively limited number of independent measurements available: a given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths, given the geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. Head models vary from the computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models, though computationally intensive and difficult to implement, can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions about the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of the sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue influencing reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes; it has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided the spatial distribution is uniform. The second factor relates to the properties of the source localization strategy used with respect to the hypothesized source configuration.
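
A toy numerical illustration of why the inverse problem is ambiguous and how regularization selects one solution: with far fewer sensors than sources, the regularized minimum-norm estimate (one classical ESI choice) fits the data while remaining only a blurred image of the true source. The random "lead field" below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear forward model y = L @ x with far fewer sensors than candidate
# sources, so the inverse problem is underdetermined: infinitely many source
# configurations x reproduce the measurements y.
n_sensors, n_sources = 32, 500
L = rng.normal(size=(n_sensors, n_sources))   # stand-in lead field (assumed)
x_true = np.zeros(n_sources)
x_true[120] = 1.0                             # a single active source
y = L @ x_true + 0.01 * rng.normal(size=n_sensors)

# Regularized minimum-norm estimate: among the configurations that explain
# the data, prefer the one with the smallest norm.
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

print("data-fit residual:", round(float(np.linalg.norm(L @ x_hat - y)), 4))
print("cosine similarity with true sources:",
      round(float(x_hat @ x_true / np.linalg.norm(x_hat)), 3))
```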

Relevance: 30.00%

Abstract:

The main focus of this thesis is the relative localization problem of a heterogeneous team comprising both ground robots and micro aerial vehicles. This team configuration combines the increased accessibility and better perspective provided by aerial robots with the greater computational and sensory resources of the ground agents, to realize a cooperative multi-robot system suitable for hostile autonomous missions. In such a scenario, however, the strict constraints on flight time, sensor payload, and computational capability of micro aerial vehicles limit the practical applicability of popular map-based localization schemes for GPS-denied navigation. The resource-limited aerial platforms of this team therefore demand simpler localization means for autonomous navigation. Relative localization is the process of estimating the formation of a robot team from acquired inter-robot relative measurements. It allows the team members to know their relative formation even without a global localization reference, such as GPS or a map. A typical robot team thus benefits from a relative localization service, since it allows the team to implement formation control, collision avoidance, and supervisory control tasks independently of a global localization service. More importantly, a heterogeneous team of ground robots and computationally constrained aerial vehicles benefits because relative localization provides the crucial information required for autonomous operation of the weaker agents, enabling less capable robots to assume supportive roles and contribute to the more powerful robots executing the mission. Hence, this study proposes a relative localization-based approach for ground and micro aerial vehicle cooperation, and develops the inter-robot measurement, filtering, and distributed computing modules necessary to realize the system. The research makes three significant contributions. First, the work designs and validates a novel inter-robot relative measurement hardware solution with the accuracy, range, and scalability characteristics necessary for relative localization. Second, it analyzes and designs a novel nonlinear filtering method that allows the implementation of relative localization modules and attitude reference filters on low-cost devices with optimal tuning parameters. Third, it designs and validates a novel distributed relative localization approach that harnesses the distributed computing capability of the team to minimize communication requirements, achieve consistent estimation, and enable efficient data correspondence within the network. The complete relative localization-based system is validated through multiple indoor experiments and numerical simulations. The relative localization-based navigation concept, with the sensing, filtering, and distributed computing methods introduced in this thesis, compensates for the system limitations of a ground and micro aerial vehicle team and also targets hostile environmental conditions. The work thus constitutes an essential step towards realizing autonomous navigation of heterogeneous teams in real-world applications.
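
As a generic illustration of a relative-measurement filter update (explicitly not the thesis's custom nonlinear or distributed design), one EKF update of robot B's position relative to robot A from a range-and-bearing measurement looks like this (all values assumed):

```python
import numpy as np

# Prior belief about B's position relative to A, in A's frame [m].
x = np.array([3.5, 1.0])
P = np.diag([0.5, 0.5])                        # prior covariance
R = np.diag([0.05**2, np.deg2rad(2.0)**2])     # range/bearing noise (assumed)

z = np.array([4.12, np.arctan2(1.3, 3.9)])     # measured range and bearing

def h(x):
    """Range-and-bearing measurement model."""
    return np.array([np.hypot(*x), np.arctan2(x[1], x[0])])

r = np.hypot(*x)
H = np.array([[x[0] / r, x[1] / r],            # Jacobian of h at the prior
              [-x[1] / r**2, x[0] / r**2]])

S = H @ P @ H.T + R                            # innovation covariance
K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
x = x + K @ (z - h(x))                         # corrected relative position
P = (np.eye(2) - K @ H) @ P                    # corrected covariance
print("updated relative position:", x.round(3))
```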

Relevance: 30.00%

Abstract:

OBJECTIVE: Intravoxel incoherent motion (IVIM) is an MRI technique with potential applications in measuring brain tumor perfusion, but its clinical impact remains to be determined. We assessed the usefulness of IVIM metrics in predicting survival in newly diagnosed glioblastoma. METHODS: Fifteen patients with glioblastoma underwent MRI including spin-echo echo-planar DWI using 13 b-values ranging from 0 to 1000 s/mm². Parametric maps for the diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) were generated for contrast-enhancing regions (CER) and non-enhancing regions (NCER). Regions of interest were manually drawn in regions of maximum f and on the corresponding dynamic susceptibility contrast images. Prognostic factors were evaluated by Kaplan-Meier survival and Cox proportional hazards analyses. RESULTS: We found that fCER and D*CER correlated with rCBFCER. The best cutoffs for 6-month survival were fCER > 9.86% and D*CER > 21.712 × 10⁻³ mm²/s (100% sensitivity, 71.4% specificity, 100% and 80% positive predictive values, and 80% and 100% negative predictive values; AUC: 0.893 and 0.857, respectively). Treatment yielded the highest hazard ratio (5.484; 95% CI: 1.162-25.88; AUC: 0.723; P = 0.031); fCER combined with treatment predicted survival with 100% accuracy. CONCLUSIONS: The IVIM metrics fCER and D*CER are promising biomarkers of 6-month survival in newly diagnosed glioblastoma.
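
The IVIM parametric maps come from fitting the standard biexponential IVIM model, S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), voxel by voxel; a minimal per-voxel fit might look like the sketch below (a plausible 13-value b-value schedule and parameter values are assumed, not the study's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential IVIM signal model (normalized to S0).
def ivim(b, f, d_star, d):
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# An assumed 13-value schedule spanning 0-1000 s/mm².
b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200, 300, 500, 750, 1000.0])
truth = (0.12, 0.02, 0.0008)   # f, D*, D [mm²/s]; illustrative values
s = ivim(b, *truth) + np.random.default_rng(6).normal(0, 0.005, b.size)

# Bounded least-squares fit for one voxel's (f, D*, D).
popt, _ = curve_fit(ivim, b, s, p0=(0.1, 0.01, 0.001),
                    bounds=([0, 1e-4, 1e-5], [0.5, 0.1, 3e-3]))
print("f=%.3f  D*=%.4f  D=%.5f" % tuple(popt))
```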

Relevance: 30.00%

Abstract:

Alzheimer’s disease (AD) is the most common cause of dementia, causing a progressive and irreversible impairment of several cognitive functions. The aging population has increased significantly in recent decades, and this disease mainly affects the elderly. Its diagnostic accuracy is relatively low, and there is no biomarker able to detect AD without invasive tests. Despite progress towards a better understanding of the disease, there remains no prospect of a cure, at least in the near future. The electroencephalogram (EEG) is a widely available technology in clinical settings. It may help the diagnosis of brain disorders, since it can be used in patients whose cognitive impairment involves a general decrease in overall brain function or a localized deficit. This study presents a new approach to improve the scalp localization and detection of the sources of brain anomalies (EEG temporal events) associated with AD by using the EEG.

Relevance: 30.00%

Abstract:

Recommendation for Oxygen Measurements from Argo Floats: Implementation of an In-Air-Measurement Routine to Assure Highest Long-term Accuracy. As Argo has entered its second decade and chemical/biological sensor technology is improving constantly, the marine biogeochemistry community is starting to embrace the successful Argo float program. An augmentation of the global float observatory, however, has to follow rather stringent constraints regarding sensor characteristics as well as data processing and quality control routines. Owing to the fairly advanced state of oxygen sensor technology and the high scientific value of oceanic oxygen measurements (Gruber et al., 2010), an expansion of the Argo core mission to routine oxygen measurements is perhaps the most mature and promising candidate (Freeland et al., 2010). In this context, SCOR Working Group 142, “Quality Control Procedures for Oxygen and Other Biogeochemical Sensors on Floats and Gliders” (www.scor-int.org/SCOR_WGs_WG142.htm), set out in 2014 to assess the current status of biogeochemical sensor technology with particular emphasis on float-readiness, to develop pre- and post-deployment quality control metrics and procedures for oxygen sensors, and to disseminate the procedures widely to ensure rapid adoption in the community.