930 results for false negative rate


Relevance:

80.00%

Publisher:

Abstract:

Staphylococcus aureus genotype B (GTB) is a contagious mastitis pathogen in cattle, occurring in up to 87% of individuals. Because treatment is generally insufficient, culling is often required, leading to large economic losses in the Swiss dairy industry. As the detection of this pathogen in bulk tank milk (BTM) would greatly facilitate its control, a novel real-time quantitative PCR-based assay for BTM has previously been developed and is now being evaluated for its diagnostic properties at the herd level. Herds were initially classified according to their Staph. aureus GTB status by a reference method. Using BTM and herd pools of single-quarter and 4-quarter milk, the herds were then grouped by the novel assay, and the resulting classifications were compared. A total of 54 dairy herds were evaluated. Using the reference method, 21 herds were found to be GTB positive, whereas 33 were found to be negative. Considering the novel assay using both herd pools, all herds were grouped correctly, resulting in maximal diagnostic sensitivities (100%) and specificities (100%). For BTM samples, diagnostic sensitivity and specificity were 90 and 100%, respectively. Two herds were false negative in BTM because cows with clinical signs of mastitis were not milked into the tank. Besides its excellent diagnostic properties, the assay is characterized by its low detection limit, high efficiency, and suitability for automation. Using this new knowledge and assay, eradication of Staph. aureus GTB from a dairy herd may be considered a realistic goal.
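
For readers less familiar with these diagnostic measures, the short sketch below (Python, illustrative only) reproduces the herd-level figures from the counts given in the abstract: 19 of the 21 GTB-positive herds were detected in BTM (the reported 90% is the rounded value of 19/21), and all 33 negative herds were classified correctly.

```python
# Illustrative calculation of herd-level diagnostic sensitivity and specificity
# from the counts reported in the abstract (21 GTB-positive and 33 GTB-negative
# herds; 2 false-negative BTM results, no false positives).

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)   # proportion of truly positive herds detected
    specificity = tn / (tn + fp)   # proportion of truly negative herds cleared
    return sensitivity, specificity

# Bulk tank milk (BTM): 2 of the 21 positive herds were missed.
sens_btm, spec_btm = sensitivity_specificity(tp=19, fn=2, tn=33, fp=0)
print(f"BTM: sensitivity = {sens_btm:.1%}, specificity = {spec_btm:.1%}")
# -> sensitivity ~90.5% (reported as 90%), specificity 100%

# Herd pools of single-quarter / 4-quarter milk: all herds classified correctly.
sens_pool, spec_pool = sensitivity_specificity(tp=21, fn=0, tn=33, fp=0)
print(f"Pools: sensitivity = {sens_pool:.0%}, specificity = {spec_pool:.0%}")
```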

Relevance:

80.00%

Publisher:

Abstract:

Systematic reviews and meta-analyses allow for a more transparent and objective appraisal of the evidence. They may decrease the number of false-negative results and prevent delays in the introduction of effective interventions into clinical practice. However, as with any other tool, their misuse can produce severely misleading results. In this article, we discuss the main steps that should be taken when conducting systematic reviews and meta-analyses: preparation of a review protocol, identification of eligible trials, data extraction, pooling of treatment effects across trials, investigation of potential reasons for differences in treatment effects across trials, and complete reporting of the review methods and findings. We also discuss common pitfalls that should be avoided, including the use of quality assessment tools to derive summary quality scores, pooling of data across trials as if they belonged to a single large trial, and inappropriate uses of meta-regression that could result in misleading estimates of treatment effects because of regression to the mean or the ecological fallacy. If conducted and reported properly, systematic reviews and meta-analyses will increase our understanding of the strengths and weaknesses of the available evidence, which may ultimately facilitate clinical decision making.
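
The pooling step mentioned above can be made concrete with a generic inverse-variance sketch; the fixed-effect and DerSimonian-Laird random-effects summaries below are standard textbook formulas, and the trial effect sizes are hypothetical rather than taken from any review discussed in the article.

```python
import numpy as np

def pool_effects(effects, variances):
    """Inverse-variance pooling of per-trial effect estimates (e.g. log odds ratios).

    Returns the fixed-effect summary, the DerSimonian-Laird random-effects
    summary, and the estimated between-trial variance tau^2.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Fixed-effect model: weight each trial by the inverse of its variance.
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)

    # Heterogeneity: Cochran's Q and the DerSimonian-Laird tau^2 estimate.
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    # Random-effects model: add tau^2 to each trial's variance before weighting.
    w_re = 1.0 / (variances + tau2)
    random = np.sum(w_re * effects) / np.sum(w_re)

    return fixed, random, tau2

# Hypothetical log odds ratios and variances from five trials.
log_or = [-0.4, -0.1, -0.6, -0.3, 0.1]
var = [0.05, 0.08, 0.10, 0.04, 0.12]
print(pool_effects(log_or, var))
```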

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND The aim of this study was to evaluate imaging-based response to a standardized neoadjuvant chemotherapy (NACT) regimen by dynamic contrast-enhanced magnetic resonance mammography (DCE-MRM), with MR images analyzed by an automatic computer-assisted diagnosis (CAD) system and compared with visual evaluation. MRI findings were correlated with histopathologic response to NACT and with the occurrence of metastases in a follow-up analysis. PATIENTS AND METHODS Fifty-four patients with invasive ductal breast carcinomas received two identical MRI examinations (before and after NACT; 1.5 T, contrast medium gadoteric acid). Pre-therapeutic images were compared with post-therapeutic examinations by CAD and by two blinded human observers, considering morphologic and dynamic MRI parameters as well as tumor size measurements. Imaging-assessed response to NACT was compared with histopathologically verified response. All clinical, histopathologic, and DCE-MRM parameters were correlated with the occurrence of distant metastases. RESULTS Initial and post-initial dynamic parameters changed significantly between pre- and post-therapeutic DCE-MRM. Visually evaluated DCE-MRM showed a sensitivity of 85.7%, a specificity of 91.7%, and a diagnostic accuracy of 87.0% in evaluating the response to NACT compared with histopathology. CAD analysis led to more false-negative findings (37.0%) than visual evaluation (11.1%), resulting in a sensitivity of 52.4%, a specificity of 100.0%, and a diagnostic accuracy of 63.0%. The following dynamic MRI parameters showed significant associations with the occurrence of metastases: the post-initial curve type before NACT (entire lesions, calculated by CAD) and the post-initial curve type of the most strongly enhancing tumor parts after NACT (calculated by CAD and manually). CONCLUSIONS In the accurate evaluation of response to neoadjuvant treatment, CAD systems can provide useful additional information owing to their high specificity; however, they cannot replace visual imaging evaluation. Besides traditional prognostic factors, contrast medium-induced dynamic MRI parameters show significant associations with patient outcome, i.e. the occurrence of distant metastases.

Relevance:

80.00%

Publisher:

Abstract:

Introduction: Schizophrenia patients frequently suffer from complex motor abnormalities including fine and gross motor disturbances, abnormal involuntary movements, neurological soft signs and parkinsonism. These symptoms occur early in the course of the disease, persist in chronic patients and may deteriorate with antipsychotic medication. Furthermore, gesture performance is impaired in patients, including the pantomime of tool use. Whether schizophrenia patients also show difficulties with actual tool use has not yet been investigated. Human tool use is complex and relies on a network of distinct and distant brain areas. We therefore aimed to test whether schizophrenia patients have difficulties in tool use and to assess associations with structural brain imaging using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). Methods: In total, 44 patients with schizophrenia (DSM-5 criteria; 59% men, mean age 38) underwent structural MR imaging and performed the Tool-Use test. The test examines the use of a scoop and a hammer in three conditions: pantomime (without the tool), demonstration (with the tool) and actual use (with a recipient object). T1-weighted images were processed using SPM8 and DTI data using FSL TBSS routines. To assess structural alterations underlying impaired tool use, we first compared gray matter (GM) volume in VBM and white matter (WM) integrity in TBSS data between patients with and without difficulties in actual tool use. Next, we explored correlations between Tool-Use scores and VBM and TBSS data. Group comparisons were family-wise error corrected for multiple tests. Correlations were uncorrected (p < 0.001) with a minimum cluster threshold of 17 voxels (equivalent to a map-wise false positive rate of alpha < 0.0001 using a Monte Carlo procedure). Results: Tool use was impaired in schizophrenia (43.2% pantomime, 11.6% demonstration, 11.6% use). Impairment was related to reduced GM volume and WM integrity. Whole-brain analyses detected a group effect in the supplementary motor area (SMA). Correlations between tool use scores and brain structure revealed alterations in brain areas of the dorso-dorsal pathway (superior occipital gyrus, superior parietal lobule, and dorsal premotor area) and the ventro-dorsal pathway (middle occipital gyrus, inferior parietal lobule) of the action network, as well as the insula and the left hippocampus. Furthermore, tool use performance was significantly correlated with WM integrity in connecting fiber tracts, particularly the bilateral superior and anterior corona radiata and the corpus callosum. Conclusions: Tool use performance was impaired in schizophrenia, and this impairment was associated with reduced GM volume in the action network. Our results are in line with reports of impaired tool use in patients with brain lesions, particularly of the dorso-dorsal and ventro-dorsal streams of the action network. In addition, an association between tool use and WM integrity was found within fiber tracts connecting regions important for planning and executing tool use. Furthermore, the hippocampus is part of a brain system responsible for spatial memory and navigation. The results suggest that structural brain alterations in the common praxis network contribute to impaired tool use in schizophrenia.
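
The Monte Carlo cluster-extent correction mentioned above can be illustrated with a deliberately simplified sketch: independent Gaussian noise volumes are thresholded voxelwise and the null distribution of the maximum cluster size is tabulated. Real tools for such analyses additionally model the spatial smoothness of the data, and pinning down an alpha as small as 0.0001 requires far more simulations than shown, so this toy version will not reproduce the study's 17-voxel threshold.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_size_threshold(shape, voxel_p=0.001, alpha=1e-4, n_sim=1000, seed=0):
    """Monte Carlo estimate of the minimum cluster size (in voxels) such that the
    map-wise probability of any suprathreshold cluster reaching that size under
    the null stays below `alpha`.

    Simplification: voxels are independent Gaussian noise, i.e. the spatial
    smoothness of real VBM/fMRI data is ignored, and `n_sim` here is far too
    small to estimate an alpha of 1e-4 reliably.
    """
    rng = np.random.default_rng(seed)
    z_cut = stats.norm.isf(voxel_p)              # voxelwise threshold for p < voxel_p
    max_sizes = np.empty(n_sim, dtype=int)

    for i in range(n_sim):
        mask = rng.standard_normal(shape) > z_cut
        labels, n_clusters = ndimage.label(mask)
        sizes = np.bincount(labels.ravel())[1:]  # cluster sizes (label 0 = background)
        max_sizes[i] = sizes.max() if sizes.size else 0

    # Smallest size exceeded by the null maximum in fewer than `alpha` of the maps.
    return int(np.quantile(max_sizes, 1 - alpha)) + 1

# Toy example on a small volume; real analyses use a whole-brain mask.
print(cluster_size_threshold(shape=(40, 48, 40), n_sim=200))
```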

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND Hepatitis B viruses (HBV) harboring mutations in the a-determinant of the hepatitis B surface antigen (HBsAg) are associated with reduced reactivity of HBsAg assays. OBJECTIVES To evaluate the sensitivity and specificity of three HBsAg point-of-care tests for the detection of HBsAg from viruses harboring HBsAg mutations. STUDY DESIGN A selection of 50 clinical plasma samples containing HBV with HBsAg mutations was used to evaluate the performance of three HBsAg point-of-care tests (Vikia®, bioMérieux, Marcy-L'Étoile, France; Alere Determine HBsAg™, Iverness Biomedical Innovations, Köln, Germany; Quick Profile™, LumiQuick Diagnostics, California, USA) compared with the ARCHITECT HBsAg Qualitative® assay (Abbott Laboratories, Sligo, Ireland). RESULTS The sensitivity of the point-of-care tests ranged from 98% to 100%. The only false-negative result occurred with the Quick Profile™ assay on a virus harboring a D144A mutation. CONCLUSIONS The evaluated point-of-care tests showed excellent sensitivity in detecting HBV samples harboring HBsAg mutations.
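
The abstract reports only point estimates of sensitivity; the sketch below adds an exact (Clopper-Pearson) confidence interval purely to illustrate the uncertainty attached to a 49-out-of-50 result on a 50-sample panel. The interval is not part of the study.

```python
from scipy import stats

def clopper_pearson(successes, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1 - conf
    lower = stats.beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

# Quick Profile(TM): 49 of the 50 mutant-HBsAg samples detected (one D144A miss).
sens = 49 / 50
lo, hi = clopper_pearson(49, 50)
print(f"sensitivity = {sens:.0%}, 95% CI = ({lo:.1%}, {hi:.1%})")
# The fairly wide interval reflects the limited precision of a 50-sample panel.
```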

Relevance:

80.00%

Publisher:

Abstract:

The head impulse test (HIT) can identify a deficient vestibulo-ocular reflex (VOR) by the compensatory saccade (CS) generated once the head stops moving. The inward HIT is considered safer than the outward HIT, yet it might carry an oculomotor advantage, given that the subject presumably knows the direction of head rotation. Here, we compare CS latencies following inward (presumed predictable) and outward (more unpredictable) HITs after acute unilateral vestibular nerve deafferentation. Seven patients received inward and outward HITs on six consecutive postoperative days (POD) and again on POD 30. All head impulses were recorded by portable video-oculography. CS included those occurring during (covert) or after (overt) head rotation. Inward HITs elicited mean CS latencies (183.48 ms ± 4.47 SE) that were consistently shorter than those generated during outward HITs over the first 6 POD (p = 0.0033). Inward HITs also induced more covert saccades than outward HITs in the acute stage. However, by POD 30 there were no longer any differences in CS latencies or proportions between the two directions of head rotation. Patients with acute unilateral vestibular loss likely use predictive cues about head direction to elicit early CS that keep the image centered on the fovea. In acute vestibular hypofunction, inwardly applied HITs may therefore risk a preponderance of covert saccades, yet this difference largely disappears within 30 days. Advantages of inwardly applied HITs are discussed and must be balanced against the risk of a false-negative HIT interpretation.

Relevance:

80.00%

Publisher:

Abstract:

Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, specifically when medical errors surface. This is particularly true for the emergency department setting, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise diseases, to simplify complex clinical algorithms and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results, and generate unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond the observational reporting and meta-analysing of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department. The following biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of the physiopathology, the historical evolution of the evidence, and their strengths and limitations for a rational implementation into clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine and potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the defensive medicine that follows from it; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, will translate into better resource use, shorter length of hospital stay, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers to improve their diagnostic skills and become more parsimonious requesters of laboratory tests.

Relevance:

80.00%

Publisher:

Abstract:

Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on retrospective analyses of 6 years of historical data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detecting small outbreaks. However, while the use of a 0.975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred by the time of detection. The high variance observed in the whole-carcass condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
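
A heavily simplified sketch of the detection principle is given below: a quasi-Poisson (overdispersed Poisson) trend model is fitted to historical monthly counts and an upper quantile of the predicted count serves as the alert threshold. The improved Farrington algorithm used in the paper additionally handles seasonality, down-weights past outbreaks and applies a power transformation, none of which is reproduced here; the count series is simulated.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def quasi_poisson_threshold(history, quantile=0.975):
    """Fit an overdispersed (quasi-)Poisson trend model to historical monthly
    counts and return an upper alert threshold for the next month.

    Simplified sketch: linear trend only and a normal approximation for the
    prediction bound; the improved Farrington algorithm adds seasonality terms,
    reweighting of past outbreaks and a 2/3-power transformation.
    """
    y = np.asarray(history, dtype=float)
    X = sm.add_constant(np.arange(len(y)))

    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    dispersion = fit.pearson_chi2 / fit.df_resid        # quasi-Poisson scale

    x_next = np.array([[1.0, len(y)]])                  # design row for next month
    mu_next = float(fit.predict(x_next)[0])
    sd_next = np.sqrt(dispersion * mu_next)             # overdispersed Poisson SD
    return mu_next + norm.ppf(quantile) * sd_next

# Hypothetical monthly whole-carcass condemnation counts (6 years of history).
rng = np.random.default_rng(1)
history = rng.poisson(lam=30, size=72)
threshold = quasi_poisson_threshold(history)
observed_this_month = 52
print(f"threshold = {threshold:.1f}, alarm = {observed_this_month > threshold}")
```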

Relevance:

80.00%

Publisher:

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was not usually the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected based on maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected based on low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using an appropriate statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains the desired overall type I error rate while not overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis. However, for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
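
The permutation-based multi-locus global test described above can be sketched as follows: the maximum per-SNP statistic is compared with its null distribution obtained by permuting the phenotype, which controls the family-wise type I error rate. The per-SNP statistic chosen here (squared correlation with a quantitative trait) and the simulated genotypes are illustrative choices, not the dissertation's exact implementation.

```python
import numpy as np

def max_stat_permutation_test(genotypes, phenotype, n_perm=5000, seed=0):
    """Multi-locus global test: the observed maximum per-SNP statistic is
    compared with its permutation null distribution, which controls the
    family-wise type I error rate across all SNPs tested.

    genotypes : (n_samples, n_snps) array of allele dosages (0/1/2)
    phenotype : (n_samples,) quantitative trait
    """
    rng = np.random.default_rng(seed)

    def per_snp_stats(y):
        # Squared Pearson correlation between each SNP and the trait.
        gc = genotypes - genotypes.mean(axis=0)
        yc = y - y.mean()
        num = gc.T @ yc
        den = np.sqrt((gc ** 2).sum(axis=0) * (yc ** 2).sum())
        return (num / den) ** 2

    observed_max = per_snp_stats(phenotype).max()
    null_max = np.array([per_snp_stats(rng.permutation(phenotype)).max()
                         for _ in range(n_perm)])
    p_global = (1 + np.sum(null_max >= observed_max)) / (1 + n_perm)
    return observed_max, p_global

# Illustrative simulated data: 500 individuals, 50 SNPs, one weakly causal SNP.
rng = np.random.default_rng(42)
G = rng.integers(0, 3, size=(500, 50)).astype(float)
y = 0.3 * G[:, 7] + rng.standard_normal(500)
print(max_stat_permutation_test(G, y, n_perm=2000))
```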

Relevance:

80.00%

Publisher:

Abstract:

Detecting differential gene expression in microarray data has been a challenge for many years. Several correction procedures aim to control the family-wise error rate in the multiple comparison process, including the Bonferroni and Sidak single-step p-value adjustments, Holm's step-down correction method, and Benjamini and Hochberg's false discovery rate (FDR) correction procedure. Each multiple comparison technique has its advantages and weaknesses. We studied each multiple comparison method through numerical studies (simulations) and applied the methods to real exploratory DNA microarray data used to detect molecular signatures in papillary thyroid cancer (PTC) patients. According to our simulation results, the Benjamini-Hochberg step-up FDR-controlling procedure performed best among these multiple comparison methods, and we discovered 1277 potential biomarkers among 54675 probe sets after applying it to the PTC microarray data.
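
For concreteness, a compact implementation of the Benjamini-Hochberg step-up procedure is sketched below on hypothetical p-values, with Bonferroni shown for contrast; the same adjustments are available off the shelf, e.g. via statsmodels.stats.multitest.multipletests.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up FDR procedure.

    Returns a boolean array marking which hypotheses are rejected at FDR alpha.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]

    # Largest k with p_(k) <= (k/m) * alpha; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values from ten probe sets.
p = np.array([0.0002, 0.009, 0.013, 0.020, 0.041, 0.049, 0.060, 0.210, 0.470, 0.740])
print("BH rejects:        ", benjamini_hochberg(p, alpha=0.05))
print("Bonferroni rejects:", p < 0.05 / p.size)
```

On this toy input the BH procedure rejects the four smallest p-values, whereas Bonferroni rejects only the first, illustrating the gain in power when controlling the FDR instead of the family-wise error rate.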

Relevance:

80.00%

Publisher:

Abstract:

SNP genotyping arrays have been developed to characterize single-nucleotide polymorphisms (SNPs) and DNA copy number variations (CNVs). The quality of inferences about copy number can be affected by many factors, including batch effects, DNA sample preparation, signal processing, and the analytical approach. Nonparametric and model-based statistical algorithms have been developed to detect CNVs from SNP genotyping data. However, these algorithms lack specificity to detect small CNVs because of the high false positive rate when calling CNVs based on intensity values. Association tests based on detected CNVs therefore lack power, even if the CNVs affecting disease risk are common. In this research, by combining an existing Hidden Markov Model (HMM) with a logistic regression model, a new genome-wide logistic regression algorithm was developed to detect CNV associations with diseases. We showed that the new algorithm is more sensitive and can be more powerful in detecting CNV associations with diseases than an existing popular algorithm, especially when the CNV association signal is weak and only a limited number of SNPs are located in the CNV.
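
A bare-bones illustration of the regression half of such an approach is sketched below: case/control status is regressed on an inferred copy-number state for a single region. The HMM that infers copy number from SNP intensities and the genome-wide application are not shown, and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

def cnv_logistic_test(copy_number, disease_status):
    """Logistic regression of case/control status on an inferred copy-number
    state for one genomic region; returns the odds ratio per copy and p-value.
    """
    X = sm.add_constant(np.asarray(copy_number, dtype=float))
    model = sm.Logit(np.asarray(disease_status, dtype=float), X).fit(disp=0)
    odds_ratio = np.exp(model.params[1])
    return odds_ratio, model.pvalues[1]

# Hypothetical data: 1000 subjects with copy number 0-4 (as would be produced
# by an upstream HMM call), where deletions mildly increase disease risk.
rng = np.random.default_rng(3)
cn = rng.choice([0, 1, 2, 3, 4], size=1000, p=[0.02, 0.08, 0.80, 0.08, 0.02])
logit = -0.6 + 0.4 * (2 - cn)                  # fewer copies -> higher risk
status = rng.random(1000) < 1 / (1 + np.exp(-logit))
print(cnv_logistic_test(cn, status))
```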

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this study was to evaluate the adequacy of computerized vital records in Texas for conducting etiologic studies of neural tube defects (NTDs), using the revised and expanded National Center for Health Statistics vital record forms introduced in Texas in 1989. Cases of NTDs (anencephaly and spina bifida) among Harris County (Houston) residents were identified from the computerized birth and death records for 1989-1991. The validity of the system was then measured against cases ascertained independently through medical records and death certificates. The computerized system performed poorly in its identification of NTDs, particularly for anencephaly, where the false positive rate was 80%, with little or no improvement over the 3-year period. For both NTDs, the sensitivity and positive predictive value of the tapes were somewhat higher for Hispanic than non-Hispanic mothers. Case-control studies were conducted using the tape data set and the independently verified data set, with controls selected from the live birth tapes. Findings varied widely between the data sets. For example, the anencephaly odds ratio for Hispanic mothers (vs. non-Hispanic) was 1.91 (CI = 1.38-2.65) for the tape file, but 3.18 (CI = 1.81-5.58) for verified records. The odds ratio for diabetes was elevated for the tape set (OR = 3.33, CI = 1.67-6.66) but not for verified cases (OR = 1.09, CI = 0.24-4.96), among whom few mothers were diabetic. It was concluded that computerized tapes should not be relied on solely for NTD studies. Using the verified cases, Hispanic maternal origin was associated with spina bifida, and Hispanic origin, teenage motherhood, and previous pregnancy terminations were associated with anencephaly. Mother's birthplace, education, parity, and diabetes were not significant for either NTD. Stratified analyses revealed several notable examples of statistical interaction. For anencephaly, strong interaction was observed between Hispanic origin and trimester of first prenatal care. The prevalence was 3.8 per 10,000 live births for anencephaly and 2.0 for spina bifida (5.8 per 10,000 births for the combined categories).
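
Since the abstract reports odds ratios and confidence intervals without the underlying cell counts, the sketch below uses a purely hypothetical 2x2 table only to show how such an odds ratio and its Woolf (log-based) 95% confidence interval are computed.

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, conf=0.95):
    """Odds ratio and Woolf (log-normal) confidence interval for a 2x2 table:

                    cases   controls
    exposed           a        b
    unexposed         c        d
    """
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, (lo, hi)

# Hypothetical counts (NOT the study's data): 60 of 100 cases and
# 40 of 100 controls exposed.
print(odds_ratio_ci(a=60, b=40, c=40, d=60))
# -> OR = 2.25 with its 95% CI
```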

Relevance:

80.00%

Publisher:

Abstract:

Children who experience early pubertal development have an increased risk of developing cancer (breast, ovarian, and testicular), osteoporosis, insulin resistance, and obesity as adults. Early pubertal development has also been associated with depression, aggressiveness, and increased sexual prowess. Possible explanations for the decline in age at pubertal onset include genetics, exposure to environmental toxins, better nutrition, and a reduction in childhood infections. In this study we (1) evaluated the association between 415 single nucleotide polymorphisms (SNPs) from hormonal pathways and early puberty, defined as menarche prior to age 12 in females and Tanner Stage 2 development prior to age 11 in males, and (2) measured endocrine hormone trajectories (estradiol, testosterone, and DHEAS) in relation to age, race, and Tanner Stage in a cohort of children from Project HeartBeat! At the end of the 4-year study, 193 females had onset of menarche and 121 males had pubertal staging at age 11. African American females had a younger mean age at menarche than Non-Hispanic White females. African American females and males had a lower mean age at each pubertal stage (1-5) than Non-Hispanic White females and males. African American females had higher mean BMI measures at each pubertal stage than Non-Hispanic White females. Of the 415 SNPs evaluated in females, 22 were associated with early menarche when adjusted for race (p<0.05), but none remained significant after adjusting for multiple testing by the false discovery rate (p<0.00017). In males, 17 SNPs were associated with early pubertal development when adjusted for race (p<0.05), but none remained significant when adjusted for multiple testing (p<0.00017). There were 4955 hormone measurements taken during the 4-year study period from 632 African American and Non-Hispanic White males and females. On average, African American females started and ended the pubertal process at a younger age than Non-Hispanic White females. The mean age at Tanner Stage 2 breast development in African American and Non-Hispanic White females was 9.7 (S.D.=0.8) and 10.2 (S.D.=1.1) years, respectively. There was a significant difference by race in mean age for each pubertal stage, except Tanner Stage 1 for pubic hair development. Both estradiol and DHEAS levels in females varied significantly with age, but not by race, and both increased from Tanner Stage 1 to Tanner Stage 5. African American males had a lower mean age at each Tanner Stage of development than Non-Hispanic White males. The mean age at Tanner Stage 2 genital development in African American and Non-Hispanic White males was 10.5 (S.D.=1.1) and 10.8 (S.D.=1.1) years, respectively, but this difference was not significant (p=0.11). Testosterone levels varied significantly with age and race. Non-Hispanic White males had higher levels of testosterone than African American males from Tanner Stages 1-4. Testosterone levels increased for both races from Tanner Stage 1 to Tanner Stage 5, with the steepest increase from ages 11-15. DHEAS levels in males varied significantly with age, but not by race, with the steepest increase from ages 14-17. In conclusion, African American males and females experience pubertal onset at a younger age than Non-Hispanic White males and females, but in this study we could not identify a specific gene that explained the observed variation in age at pubertal onset. Future studies with larger study populations may provide a better understanding of the contribution of genes to early pubertal onset.

Relevance:

80.00%

Publisher:

Abstract:

My dissertation focuses on two aspects of RNA sequencing technology. The first is the methodology for modeling the overdispersion inherent in RNA-seq data for differential expression analysis; this aspect is addressed in three sections. The second is the application of RNA-seq data to identify the CpG island methylator phenotype (CIMP) by integrating datasets of mRNA expression level and DNA methylation status. Section 1: The cost of DNA sequencing has decreased dramatically in the past decade. Consequently, genomic research increasingly depends on sequencing technology. However, it remains unclear how sequencing capacity influences the accuracy of mRNA expression measurement. We observe that accuracy improves with increasing sequencing depth. To model the overdispersion, we use the beta-binomial distribution with a new parameter indicating the dependency between overdispersion and sequencing depth. Our modified beta-binomial model performs better than the binomial or the pure beta-binomial model, yielding a lower false discovery rate. Section 2: Although a number of methods have been proposed to accurately analyze differential RNA expression at the gene level, modeling at the base-pair level is also required. Here, we find that the overdispersion rate decreases as the sequencing depth increases at the base-pair level. We propose four models and compare them with each other; as expected, our beta-binomial model with a dynamic overdispersion rate proves superior. Section 3: We investigate biases in RNA-seq by exploring the measurement of the external control, spike-in RNA. This study is based on two datasets with spike-in controls obtained from a recent study. We observe a previously unrecognized bias in the measurement of the spike-in transcripts that arises from the influence of the sample transcripts in RNA-seq, and we find that this influence is related to the local sequence of the random hexamer used in priming. We suggest a model of the inequality between samples to correct this type of bias. Section 4: The expression of a gene can be turned off when its promoter is highly methylated. Several studies have reported a clear threshold effect in gene silencing mediated by DNA methylation, and it is reasonable to assume the thresholds are specific to each gene. It is also intriguing to investigate genes that are largely controlled by DNA methylation; these genes are called "L-shaped" genes. We develop a method to determine the DNA methylation threshold and identify a new CIMP of BRCA. In conclusion, we provide a detailed understanding of the relationship between the overdispersion rate and sequencing depth, reveal a new bias in RNA-seq and characterize its relationship with the local sequence, and develop a powerful method to dichotomize methylation status, through which we identify a new CIMP of breast cancer with a distinct classification of molecular characteristics and clinical features.
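
To make the overdispersion modeling concrete, the sketch below fits a generic beta-binomial model by maximum likelihood to simulated per-site counts. The dissertation's specific extension, in which the overdispersion parameter depends on sequencing depth, is not reproduced here.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def betabinom_logpmf(k, n, alpha, beta):
    """Log-pmf of the beta-binomial distribution."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + alpha, n - k + beta) - betaln(alpha, beta))

def fit_betabinom(k, n):
    """Fit (mean p, overdispersion rho) of a beta-binomial by maximum likelihood.

    Parameterization: alpha = p*(1-rho)/rho, beta = (1-p)*(1-rho)/rho,
    so rho -> 0 recovers the plain binomial model.
    """
    k, n = np.asarray(k, float), np.asarray(n, float)

    def neg_loglik(params):
        p, rho = params
        alpha = p * (1 - rho) / rho
        beta = (1 - p) * (1 - rho) / rho
        return -np.sum(betabinom_logpmf(k, n, alpha, beta))

    res = minimize(neg_loglik, x0=[0.5, 0.05], bounds=[(1e-4, 1 - 1e-4)] * 2)
    return res.x  # (p_hat, rho_hat)

# Simulated counts: the success probability varies per observation
# (extra-binomial variation) and "sequencing depth" n varies across sites.
rng = np.random.default_rng(0)
n = rng.integers(50, 500, size=200)
p_i = rng.beta(20, 30, size=200)            # mean 0.4, overdispersed
k = rng.binomial(n, p_i)
print(fit_betabinom(k, n))                  # estimated (p, rho)
```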

Relevance:

80.00%

Publisher:

Abstract:

The increasing demand for security in mobile applications has drawn attention to biometrics as a proper and suitable solution for providing a secure environment on mobile devices. With this aim, this document presents a biometric system based on hand geometry oriented to mobile devices, involving a high degree of freedom in terms of illumination, hand rotation and distance to the camera. The user takes a picture of their own hand in free space, without requiring any flat surface to locate the hand and without removing rings, bracelets or watches. The proposed biometric system relies on an accurate segmentation procedure, able to isolate hands from any background; a feature extraction step that is invariant to orientation, illumination, distance to the camera and background; and a user classification, based on a k-Nearest Neighbor approach, able to provide accurate results for individual identification. The proposed method has been evaluated with two in-house databases collected with an HTC mobile phone. The first database contains 120 individuals, with 20 acquisitions of both hands. The second is a synthetic database containing 408000 images of hand samples on different backgrounds: tiles, grass, water, sand, soil and the like. The system is able to identify individuals with a False Reject Rate of 5.78% and a False Acceptance Rate of 0.089%, using 60 features (15 features per finger).
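
As a schematic of the classification stage only, the sketch below runs a k-Nearest Neighbor identifier on 60-dimensional feature vectors; the features are random placeholders standing in for the segmentation and feature-extraction stages described above, so the printed accuracy is meaningless except as a demonstration of the pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data: 120 subjects x 20 acquisitions, 60 geometric features each
# (15 per finger, as in the paper). Real features would come from the
# segmentation and feature-extraction stages, not from random noise.
rng = np.random.default_rng(0)
n_subjects, n_acq, n_features = 120, 20, 60
centers = rng.normal(size=(n_subjects, n_features))
X = np.repeat(centers, n_acq, axis=0) + 0.3 * rng.normal(size=(n_subjects * n_acq, n_features))
y = np.repeat(np.arange(n_subjects), n_acq)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Scale features, then classify each test sample by its k nearest enrolled samples.
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X_train), y_train)
accuracy = knn.score(scaler.transform(X_test), y_test)
print(f"identification accuracy on placeholder data: {accuracy:.1%}")
```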