Abstract:
Objective: To evaluate the frequency, anatomic presentation, and number of supernumerary parathyroid glands in patients with primary hyperparathyroidism (HPT1) associated with multiple endocrine neoplasia type 1 (MEN1), as well as the importance of thymectomy and the usefulness of localizing examinations for those glands. Methods: Forty-one patients with hyperparathyroidism associated with MEN1 who underwent parathyroidectomy between 1997 and 2007 were retrospectively studied. The location and number of supernumerary parathyroids were reviewed, as well as whether cervical ultrasound and parathyroid SESTAMIBI scan (MIBI) were useful diagnostic tools. Results: In five patients (12.2%), a supernumerary gland was identified. In three of these cases (60%), the glands were near the thyroid gland and were found during the procedure. None of the imaging examinations was able to detect supernumerary parathyroids. In one case, only the pathologic examination could find a microscopic fifth gland in the thymus. In the last case, the supernumerary gland was resected through a sternotomy after a recurrence of hyperparathyroidism, ten years after the initial four-gland parathyroidectomy without thymectomy. MIBI was capable of detecting this gland, but only in the recurrent setting. Cervical ultrasound did not detect any supernumerary glands. Conclusion: The frequency of supernumerary parathyroid glands in the HPT1/MEN1 patients studied (12.2%) was significant. Surgeons should be aware of the need to search for supernumerary glands during neck exploration, in addition to exploring the thymus. Imaging examinations were not useful in the presurgical localization of these glands, and one case presented a recurrence of hyperparathyroidism.
Abstract:
Objectives. To evaluate whether the overall dysphonia grade, roughness, breathiness, asthenia, and strain (GRBAS) scale and the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) scale show the same reliability and consensus when applied to the same vocal sample at different times. Study Design. Observational cross-sectional study. Methods. Sixty subjects had their voices recorded according to the tasks proposed in the CAPE-V scale. The vowels /a/ and /i/ were sustained for 3 to 5 seconds. Reproduction of six sentences and spontaneous speech elicited by the request "Tell me about your voice" were analyzed. For the analysis of the GRBAS scale, the sustained-vowel and sentence-reading tasks were used. Auditory-perceptual voice analyses were conducted by three expert speech therapists, each with more than 5 years of experience and familiar with both scales. Results. A strong correlation was observed in the intrajudge consensus analysis, both for the GRBAS scale and for the CAPE-V, with intraclass coefficient values ranging from 0.923 to 0.985. A high degree of correlation between the overall GRBAS and CAPE-V grades (coefficient = 0.842) was observed, with similar distributions of dysphonia grades in both scales. The evaluators reported mild difficulty in applying the GRBAS scale and low to mild difficulty in applying the CAPE-V scale. The three evaluators agreed in indicating the GRBAS scale as the fastest and the CAPE-V scale as the most sensitive, especially for detecting small changes in voice. Conclusions. The two scales are reliable and are indicated for use in analyzing voice quality.
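Intraclass correlation values like those reported above come from a two-way ANOVA decomposition of the rating matrix. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the severity ratings are invented for illustration and are not the study's data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array of scores.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical severity ratings (0-3) from three judges on five voices
scores = np.array([[0, 0, 1],
                   [2, 2, 2],
                   [1, 1, 1],
                   [3, 3, 3],
                   [2, 2, 3]])
print(round(icc_2_1(scores), 3))  # → 0.892
```

Values near 1, as in the study's 0.923 to 0.985 range, indicate near-perfect agreement between judges.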
Abstract:
Abstract Background The family Accipitridae (hawks, eagles and Old World vultures) represents a large radiation of predatory birds with an almost global distribution, although most species of this family occur in the Neotropics. Despite great morphological and ecological diversity, the evolutionary relationships in the family have been poorly explored at all taxonomic levels. Using sequences from four mitochondrial genes (12S, ATP8, ATP6, and ND6), we reconstructed the phylogeny of the Neotropical forest hawk genus Leucopternis and most of the allied genera of Neotropical buteonines. Our goals were to infer the evolutionary relationships among species of Leucopternis, estimate their relationships to other buteonine genera, evaluate the phylogenetic significance of the white and black plumage patterns common to most Leucopternis species, and assess general patterns of diversification of the group with respect to species' affiliations with Neotropical regions and habitats. Results Our molecular phylogeny for the genus Leucopternis and its allies disagrees sharply with traditional taxonomic arrangements for the group, and we present new hypotheses of relationships for a number of species. The mtDNA phylogenetic trees derived from analysis of the combined data posit a polyphyletic relationship among species of Leucopternis, Buteogallus and Buteo. Three highly supported clades containing Leucopternis species were recovered in our phylogenetic reconstructions. The first clade consisted of the sister pairs L. lacernulatus and Buteogallus meridionalis, and Buteogallus urubitinga and Harpyhaliaetus coronatus, in addition to L. schistaceus and L. plumbeus. The second clade included the sister pair Leucopternis albicollis and L. occidentalis as well as L. polionotus. The third lineage comprised the sister pair L. melanops and L. kuhli, in addition to L. semiplumbeus and Buteo buteo. 
According to our results, the white and black plumage patterns have evolved at least twice in the group. Furthermore, species found to the east and west of the Andes (cis-Andean and trans-Andean, respectively) are not reciprocally monophyletic, nor are forest and non-forest species. Conclusion The polyphyly of Leucopternis, Buteogallus and Buteo establishes a lack of concordance of current Accipitridae taxonomy with the mtDNA phylogeny for the group, and points to the need for further phylogenetic analysis at all taxonomic levels in the family as also suggested by other recent analyses. Habitat shifts, as well as cis- and trans-Andean disjunctions, took place more than once during buteonine diversification in the Neotropical region. Overemphasis of the black and white plumage patterns has led to questionable conclusions regarding the relationships of Leucopternis species, and suggests more generally that plumage characters should be used with considerable caution in the taxonomic evaluation of the Accipitridae.
Abstract:
Background The genetic mechanisms underlying interindividual blood pressure variation reflect the complex interplay of both genetic and environmental variables. The current standard statistical methods for detecting genes involved in the regulation mechanisms of complex traits are based on univariate analysis. Few studies have focused on the search for, and understanding of, quantitative trait loci responsible for gene × environment interactions, or on multiple-trait analysis. Composite interval mapping has been extended to multiple traits and may be an interesting approach to such a problem. Methods We used multiple-trait analysis for quantitative trait locus mapping of loci having different effects on systolic blood pressure under NaCl exposure. The animals studied were 188 rats, the progeny of an F2 intercross between hypertensive and normotensive strains, genotyped at 179 polymorphic markers across the rat genome. To accommodate the correlational structure of measurements taken in the same animals, we applied univariate and multivariate strategies for analyzing the data. Results We detected a new quantitative trait locus in a region close to marker R589 on chromosome 5 of the rat genome, not previously identified through serial analysis of individual traits. In addition, we were able to justify analytically the parametric restrictions, in terms of regression coefficients, responsible for the gain in precision with the adopted analytical approach. Conclusion Future work should focus on fine mapping and the identification of the causative variant responsible for this quantitative trait locus signal. The multivariate strategy might be valuable in the study of genetic determinants of interindividual variation in antihypertensive drug effectiveness.
Abstract:
Abstract Background The etiology of idiopathic scoliosis remains unknown, and different factors have been suggested as causal. Hereditary factors can also determine the etiology of the disease; however, the pattern of inheritance remains unknown. Autosomal dominant, X-linked and multifactorial patterns of inheritance have been reported. Other studies have suggested possible chromosome regions related to the etiology of idiopathic scoliosis. We report the genetic aspects of, and investigate chromosome regions for, adolescent idiopathic scoliosis in a Brazilian family. Methods We evaluated 57 family members, distributed over four generations of a Brazilian family, including 9 carriers of adolescent idiopathic scoliosis. The proband presented a scoliotic curve of 75 degrees, as determined by the Cobb method. Genomic DNA from family members was genotyped. Results Locating a chromosome region linked to adolescent idiopathic scoliosis was not possible in the family studied. Conclusion While it was not possible to determine a chromosome region responsible for adolescent idiopathic scoliosis by genetic linkage investigation using microsatellite markers in four generations of a Brazilian family with multiple affected members, analysis including other types of genomic variation, such as single nucleotide polymorphisms (SNPs), could contribute to the continuation of this study.
Abstract:
Background: Aortic aneurysm and dissection are important causes of death in older people. Ruptured aneurysms show catastrophic fatality rates reaching nearly 80%. Few population-based mortality studies have been published in the world, and none in Brazil. The objective of the present study was to use multiple-cause-of-death methodology in the analysis of mortality trends related to aortic aneurysm and dissection in the state of Sao Paulo between 1985 and 2009. Methods: We analyzed mortality data from the Sao Paulo State Data Analysis System, selecting all death certificates on which aortic aneurysm and dissection were listed as a cause of death. The variables sex, age, season of the year, and underlying, associated or total mentions of causes of death were studied using standardized mortality rates, proportions and historical trends. Statistical analyses were performed using the chi-square goodness-of-fit and Kruskal-Wallis H tests, and analysis of variance. The joinpoint regression model was used to evaluate changes in age-standardized rate trends. A p value less than 0.05 was regarded as significant. Results: Over the 25-year period, there were 42,615 deaths related to aortic aneurysm and dissection, of which 36,088 (84.7%) were identified as the underlying cause and 6,527 (15.3%) as an associated cause of death. Dissection and ruptured aneurysms were considered the underlying cause of death in 93% of the deaths. For the entire period, a significantly increasing trend of age-standardized death rates was observed in men and women, while non-significant decreases occurred from 1996/2004 until 2009. Abdominal aortic aneurysms and aortic dissections prevailed among men, and aortic dissections and aortic aneurysms of unspecified site among women. In 1985 and 2009 the death rate ratios of men to women were, respectively, 2.86 and 2.19, corresponding to a 23.4% decrease in the difference between rates.
For aortic dissection, ruptured and non-ruptured aneurysms, the overall mean ages at death were, respectively, 63.2, 68.4 and 71.6 years. When these conditions were the underlying cause, the main associated causes of death were hemorrhages (43.8%/40.5%/13.9%), hypertensive diseases (49.2%/22.43%/24.5%) and atherosclerosis (14.8%/25.5%/15.3%); when they were associated causes, the principal underlying causes of death were diseases of the circulatory (55.7%) and respiratory (13.8%) systems and neoplasms (7.8%). A significant seasonal variation, with the highest frequency in winter, occurred in deaths identified as the underlying cause for aortic dissection, ruptured and non-ruptured aneurysms. Conclusions: This study introduces multiple-cause-of-death methodology to enhance epidemiologic knowledge of aortic aneurysm and dissection in São Paulo, Brazil. The results highlight the importance of mortality statistics and the need for epidemiologic studies to understand unique trends in our own population.
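The age-standardized death rates underlying these trends come from direct standardization: each age-specific rate is weighted by the share of that age band in a standard population. A minimal sketch with invented round numbers (not the São Paulo data):

```python
import numpy as np

# Hypothetical age-specific data for one sex and year
deaths  = np.array([2, 10, 60, 250])           # deaths per age band
pop     = np.array([4e6, 3e6, 1.5e6, 0.5e6])   # resident population per band
std_pop = np.array([0.40, 0.35, 0.18, 0.07])   # standard population weights

age_rates = deaths / pop * 1e5                 # age-specific rates per 100,000
asr = np.sum(age_rates * std_pop)              # direct age standardization
print(round(asr, 2))                           # → 4.36
```

Standardizing both sexes to the same weights is what makes the male-to-female rate ratios (2.86 in 1985, 2.19 in 2009) comparable across years despite population aging.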
Abstract:
Abstract Background An estimated 10-20 million individuals are infected with the retrovirus human T-cell leukemia virus type 1 (HTLV-1). While the majority of these individuals remain asymptomatic, 0.3-4% develop a neurodegenerative inflammatory disease termed HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). HAM/TSP results in the progressive demyelination of the central nervous system and is a differential diagnosis of multiple sclerosis (MS). The etiology of HAM/TSP is unclear, but evidence points to a role for CNS-infiltrating T-cells in pathogenesis. Recently, the HTLV-1 Tax protein has been shown to induce transcription of the human endogenous retrovirus (HERV) families W, H and K. Intriguingly, numerous studies have implicated these same HERV families in MS, though this association remains controversial. Results Here, we explore the hypothesis that HTLV-1 infection results in the induction of HERV antigen expression and the elicitation of HERV-specific T-cell responses which, in turn, may be reactive against neurons and other tissues. PBMC from 15 HTLV-1-infected subjects, 5 of whom presented with HAM/TSP, were comprehensively screened for T-cell responses to overlapping peptides spanning HERV-K(HML-2) Gag and Env. In addition, we screened for responses to peptides derived from diverse HERV families, selected on the basis of predicted optimal epitopes. We observed a lack of responses to each of these peptide sets. Conclusions Thus, although the limited scope of our screening prevents us from conclusively disproving our hypothesis, the current study does not provide data supporting a role for HERV-specific T-cell responses in HTLV-1-associated immunopathology.
Abstract:
OBJECTIVE: Sepsis is a common condition encountered in hospital environments. There is no effective treatment for sepsis, and it remains an important cause of death in intensive care units. This study aimed to discuss some methods that are available in clinics, and tests that have been recently developed, for the diagnosis of sepsis. METHODS: A systematic review was performed through the analysis of the following descriptors: sepsis, diagnostic methods, biological markers, and cytokines. RESULTS: The deleterious effects of sepsis are caused by an imbalance between the invasiveness of the pathogen and the ability of the host to mount an effective immune response. Consequently, the host's immune surveillance fails to eliminate the pathogen, allowing it to spread. Moreover, the release of pro-inflammatory mediators and the inappropriate activation of the coagulation and complement cascades lead to dysfunction of multiple organs and systems. The difficulty in achieving total recovery of the patient is thus explainable. There is an increased incidence of sepsis worldwide due to factors such as the aging of the population, the larger number of surgeries, and the number of microorganisms resistant to existing antibiotics. CONCLUSION: The search for new diagnostic markers associated with increased risk of sepsis development, and for molecules that can be correlated with specific stages of sepsis, is becoming necessary. This would allow earlier diagnosis, facilitate characterization of patient prognosis, and enable prediction of the possible evolution of each case. All other markers are, regrettably, confined to research units.
Abstract:
Background – Hair follicle tumours generally present as benign, solitary masses and have a good prognosis following surgical resection. Hypothesis/Objectives – This report describes a case of multiple trichoblastomas in a dog. Animal – A 2-year-old crossbred dog presented with multiple soft cutaneous periocular, perilabial, submandibular and nasal nodules, between 2 and 9 cm in diameter, located on the right side of the face. New nodules were observed on the same side of the face at a second consultation 3 weeks later. Methods – Surgical resection of all nodules was performed in two procedures. Three nodules were initially resected and submitted for histopathology and immunohistochemistry. The diagnosis was trichoblastoma for all three. At the time of the second consultation, new and remaining nodules were biopsied and the diagnosis of trichoblastoma confirmed. The dog was treated with doxorubicin and piroxicam for 30 days prior to the second surgical procedure in an attempt to reduce new tumour growth and the size of existing tumours. All nodules were resected and the defects closed using rotation flaps. Results – No recurrence of the neoplasm was noted within 10 months after surgery. Conclusions and clinical importance – Trichoblastomas are generally benign but can present as multiple neoplasms that may require surgical resection and may respond to chemotherapy. To the authors' knowledge, this is the first report of multiple trichoblastomas in a dog.
Abstract:
In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then moved to the problem of reconstructing a 3D scene from multiple views (more than two). The method for multiple-view reconstruction relies on the method for stereoscopic reconstruction: for every pair of consecutive images we estimate a disparity map, and then we apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization, a postprocessing step necessary for smoothing the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all these problems from an energy-minimization approach: we derive the associated Euler-Lagrange equation of the energy functional and approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.
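The energy-minimization scheme described above can be illustrated on a one-dimensional toy problem: a quadratic fidelity-plus-smoothness functional whose discrete Euler-Lagrange gradient is followed by explicit gradient descent. The functional, step size and noisy data below are illustrative choices, not the authors' actual stereo energy.

```python
import numpy as np

def regularize(f, lam=1.0, step=0.1, iters=500):
    """Minimize E(u) = sum (u-f)^2 + lam * sum (u[i+1]-u[i])^2 by gradient
    descent. The gradient is the discrete Euler-Lagrange expression
    dE/du = 2(u - f) - 2*lam*Laplacian(u).
    """
    u = f.copy()
    for _ in range(iters):
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]  # interior Laplacian
        lap[0] = u[1] - u[0]                      # Neumann-like boundaries
        lap[-1] = u[-2] - u[-1]
        grad = 2 * (u - f) - 2 * lam * lap
        u -= step * grad                          # explicit descent step
    return u

rng = np.random.default_rng(0)
f = np.sign(np.linspace(-1, 1, 50)) + 0.3 * rng.standard_normal(50)  # noisy step
u = regularize(f)
```

The step size must stay below 2 divided by the Lipschitz constant of the gradient (here at most 2 + 8*lam) for the descent to converge; the same stability constraint governs the PDE-based schemes mentioned in the abstract.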
Abstract:
Motivation A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new technologies for genome sequencing in recent years opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from genome projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for structural annotation of uncharacterized sequences.
The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of principal component analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
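The SVM-based prediction step can be sketched with scikit-learn, using amino-acid composition as a stand-in feature set. The sequences, labels and the charged-residue enrichment used to separate the classes are invented for illustration and are far simpler than the thesis's trained predictor.

```python
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Amino-acid composition: the fraction of each of the 20 residues."""
    return np.array([seq.count(a) for a in AMINO], dtype=float) / len(seq)

# Toy, invented sequences: the "thermophilic" ones are enriched in
# charged residues (E, K, R), a trend reported for thermostable proteins.
thermo = ["EEKKRREEKKAVLIEEKKRR", "KKEERRKKEEGAVKKEERRK"]
meso   = ["AGSTNQAGSTNQAVLIAGST", "SSTTGGNNQQAVLISSTTGG"]
X = np.array([composition(s) for s in thermo + meso])
y = np.array([1, 1, 0, 0])  # 1 = thermophilic, 0 = mesophilic

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([composition("RRKKEERRKKEEAVLIRRKK")]))  # charged-rich query
```

A real predictor would use richer features (mutation deltas, structural context) and cross-validated training, but the fit/predict structure is the same.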
Abstract:
The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
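A contact map and the two network statistics named above (characteristic path length and clustering coefficient) can be computed from C-alpha coordinates in a few lines. The distance cutoff, minimum sequence separation and the toy coordinates below are illustrative assumptions, not the thesis's settings.

```python
import numpy as np
from collections import deque

def contact_map(coords, cutoff=8.0, min_sep=3):
    """Binary contact map: residues i, j are in contact if their distance
    is below `cutoff` (angstroms) and they are at least `min_sep`
    positions apart along the chain."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    idx = np.arange(len(coords))
    sep = np.abs(np.subtract.outer(idx, idx))
    return (d < cutoff) & (sep >= min_sep)

def characteristic_path_length(adj):
    """Average shortest-path length over connected pairs (BFS per node)."""
    total, pairs = 0, 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering_coefficient(adj):
    """Mean fraction of a node's neighbour pairs that are themselves linked."""
    coeffs = []
    for u in range(len(adj)):
        nb = np.flatnonzero(adj[u])
        if len(nb) < 2:
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2
        coeffs.append(links / (len(nb) * (len(nb) - 1) / 2))
    return float(np.mean(coeffs)) if coeffs else 0.0

# Toy "folded chain": residues 0 and 4 are brought close in space
coords = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [8.0, 0.0, 0.0],
                   [4.0, 3.0, 0.0], [0.5, 0.5, 0.0]])
cm = contact_map(coords)
print(cm.sum() // 2, "contacts")  # → 3 contacts
```

High clustering with a short characteristic path length is the small-world signature the cited network studies report for real protein contact maps.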
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, only approximately 20% of the annotated proteins in the Homo sapiens genome have currently been experimentally characterized.
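The decoding step at the core of HMM-based segment prediction can be illustrated with a toy two-state model (coiled-coil vs. background) decoded by the Viterbi algorithm. All probabilities and the two-symbol alphabet are invented for illustration; the thesis's model is a richer profile HMM built from multiple sequence alignments.

```python
import numpy as np

# Minimal two-state HMM: "CC" (coiled-coil) vs. "BG" (background).
states = ["CC", "BG"]
log_start = np.log([0.1, 0.9])
log_trans = np.log([[0.90, 0.10],   # CC -> CC, CC -> BG
                    [0.05, 0.95]])  # BG -> CC, BG -> BG
# Emissions over a toy alphabet: 'h' = hydrophobic-periodic symbol,
# 'o' = anything else (invented numbers).
log_emit = {"h": np.log([0.7, 0.2]), "o": np.log([0.3, 0.8])}

def viterbi(seq):
    """Most likely state path for a symbol sequence over {'h', 'o'}."""
    v = log_start + log_emit[seq[0]]
    back = []
    for sym in seq[1:]:
        scores = v[:, None] + log_trans      # scores[i, j]: state i -> j
        back.append(scores.argmax(axis=0))   # best predecessor of each j
        v = scores.max(axis=0) + log_emit[sym]
    path = [int(v.argmax())]
    for ptr in reversed(back):               # trace pointers backwards
        path.append(int(ptr[path[-1]]))
    path.reverse()
    return "".join(states[i][0] for i in path)  # 'C' or 'B' per position

print(viterbi("ooohhhhhhhhooo"))  # → BBBCCCCCCCCBBB
```

The self-transition probabilities control segment length: raising CC→CC makes the model demand longer runs of periodic symbols before calling a coiled-coil.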
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated inheritance-based transfer procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high coverage of structure templates along the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
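The joint identity-and-coverage constraint described above can be sketched as a greedy clustering rule: a sequence joins a cluster only if it passes both thresholds against the cluster representative. The comparison function below is a crude ungapped stand-in for the BLAST alignments actually used, and the thresholds and sequences are illustrative.

```python
def identity_and_coverage(a, b):
    """Crude ungapped comparison: identity over the aligned prefix,
    coverage as the ratio of the shorter to the longer length."""
    n = min(len(a), len(b))
    ident = sum(x == y for x, y in zip(a, b)) / n
    cover = n / max(len(a), len(b))
    return ident, cover

def greedy_cluster(seqs, min_ident=0.9, min_cover=0.9):
    """Assign each sequence to the first cluster whose representative
    passes BOTH thresholds; otherwise it founds a new cluster. The
    coverage constraint keeps clusters homogeneous in sequence length,
    which is what lets multi-domain proteins act as templates only for
    sequences of similar length."""
    reps, clusters = [], []
    for s in seqs:
        for i, r in enumerate(reps):
            ident, cover = identity_and_coverage(s, r)
            if ident >= min_ident and cover >= min_cover:
                clusters[i].append(s)
                break
        else:
            reps.append(s)
            clusters.append([s])
    return clusters

seqs = ["MKTAYIAKQR", "MKTAYIAKQK", "GSHMLEDPVA", "MKTAYIAKQR"]
print(greedy_cluster(seqs))
# → [['MKTAYIAKQR', 'MKTAYIAKQK', 'MKTAYIAKQR'], ['GSHMLEDPVA']]
```

Dropping the coverage test would let a short single-domain fragment join a long multi-domain cluster, which is exactly the annotation error the double constraint is designed to prevent.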
Abstract:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on a chip doubles every 24 months. This trend has been possible thanks to the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. To overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny include:
• devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions;
• new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel.
Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting for the source/drain regions materials with a band gap different from that of the channel material. This solution increases the injection velocity of the carriers travelling from the source into the channel, and therefore improves the performance of the transistor in terms of the drain current it provides. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; it also describes the modifications introduced in the Monte Carlo code to simulate conduction-band discontinuities, and the simulations performed on one-dimensional simplified structures to validate them.
Chapter 4 presents the results obtained from Monte Carlo simulations of double-gate SOI transistors featuring conduction-band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures have consequences for power dissipation as well. In SOI technologies the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects (SHE), which detrimentally impact carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated-circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices and provides a brief overview of the methods that have been proposed to model these phenomena. To understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors as well as FinFETs featuring the same isothermal electrical characteristics.
In chapter 6 the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, have been analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
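The insulating effect of the buried oxide can be put in numbers with a one-dimensional slab estimate, R_th = t/(kA). The geometry and dissipated power below are invented round figures, while the thermal conductivities are standard bulk values; real electro-thermal simulation of course resolves far more than this back-of-the-envelope model.

```python
# 1D estimate of the temperature rise across the buried oxide (BOX) of an
# SOI device. Hypothetical geometry and power; standard bulk conductivities.
k_sio2 = 1.4      # W/(m*K), ~two orders of magnitude below silicon's ~150
t_box  = 100e-9   # BOX thickness: 100 nm
area   = 1e-12    # heat-flow footprint: 1 um^2
power  = 50e-6    # dissipated power: 50 uW

r_th = t_box / (k_sio2 * area)   # 1D slab thermal resistance, K/W
delta_t = power * r_th           # temperature rise across the BOX
print(round(r_th), round(delta_t, 2))  # → 71429 3.57
```

Even this crude estimate shows why the BOX dominates the thermal budget: replacing SiO2 with silicon of the same thickness would cut the temperature rise by two orders of magnitude.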
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to keep. We implement control of the false discovery rate (FDR), a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage that a single test (i.e. a test in a single area) is evaluated by means of all observations in the map under study, rather than only by the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
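The selection rule just described can be sketched compactly; the posterior null probabilities b_i below are invented for illustration (in the model they would come from the MCMC output):

```python
def select_high_risk(b, fdr_target):
    """Select as many areas as possible such that the average of the
    selected b_i's (the estimated FDR of the selection) stays within
    fdr_target. Areas are taken in order of increasing b_i."""
    order = sorted(range(len(b)), key=lambda i: b[i])  # strongest evidence first
    selected, running_sum = [], 0.0
    for i in order:
        running_sum += b[i]
        if running_sum / (len(selected) + 1) > fdr_target:
            break  # adding this area would push FDR-hat past the target
        selected.append(i)
    return selected

b = [0.01, 0.02, 0.30, 0.04, 0.90]  # hypothetical posterior P(H0 | data) per area
print(select_high_risk(b, 0.05))    # -> [0, 1, 3]
```

With target 0.05, areas 0, 1 and 3 are declared high-risk: their averaged b_i is about 0.023, while adding area 2 would raise FDR-hat to about 0.09.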
A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets formed by all areas whose b_i falls below a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels, and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high. In these scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested.
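The comparison between FDR-hat and the true FDR, which is available only in simulation where the truly null areas are known, can be sketched as follows (all numbers are illustrative):

```python
def fdr_at_threshold(b, is_null, t):
    """Estimated vs. true FDR for the rejection set {i : b_i < t}.
    b: posterior null probabilities; is_null: truth flags known by simulation."""
    rejected = [i for i, bi in enumerate(b) if bi < t]
    if not rejected:
        return 0.0, 0.0
    fdr_hat = sum(b[i] for i in rejected) / len(rejected)         # average of the b_i's
    true_fdr = sum(is_null[i] for i in rejected) / len(rejected)  # share of true nulls
    return fdr_hat, true_fdr

b = [0.01, 0.02, 0.30, 0.04, 0.90]           # hypothetical posterior P(H0 | data)
is_null = [False, False, True, False, True]  # known by simulation
print(fdr_at_threshold(b, is_null, 0.10))
```

Sweeping t over a grid and plotting the two returned values against each other reproduces, in miniature, the accuracy check described above: where the curves stay close, the corresponding FDR-hat based rule is trustworthy.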
In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and the FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.