856 results for Population set-based methods


Relevance: 100.00%

Abstract:

The distribution of polymorphisms related to glutathione S-transferases (GST) has been described in different populations, mainly for white individuals. We evaluated the distribution of GST mu (GSTM1) and theta (GSTT1) genotypes in 594 individuals, by multiplex PCR-based methods, using amplification of exon 7 of the CYP1A1 gene as an internal control. In São Paulo, 233 whites, 87 mulattos, and 137 blacks, all healthy blood-donor volunteers, were tested. In Bahia, where black and mulatto populations are more numerous, 137 subjects were evaluated. The frequency of the GSTM1 null genotype was significantly higher among whites (55.4%) than among mulattos (41.4%; P = 0.03) and blacks (32.8%; P < 0.0001) from São Paulo, or Bahian subjects in general (35.7%; P = 0.0003). There was no statistically significant difference in distribution among the non-white groups. The distribution of the GSTT1 null genotype did not differ significantly among groups. The agreement between self-reported and interviewer classification of skin color in the Bahian group was low. The interviewer classification indicated a gradient of distribution of the GSTM1 null genotype from whites (55.6%) to light mulattos (40.4%), dark mulattos (32.0%) and blacks (28.6%). However, any information about race or ethnicity should be interpreted with caution, given the bias introduced by different data-collection techniques, especially in countries where racial admixture is intense and ethnic definition boundaries are loose. Because homozygous deletions of GST genes might be associated with cancer risk, a better understanding of the distribution of chemical-metabolizing genes can contribute to risk assessment of humans exposed to environmental carcinogens.
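
As an illustration of the kind of genotype-frequency comparison reported here, the sketch below runs a chi-square test on a 2×2 contingency table. The counts are hypothetical, reconstructed from the quoted percentages and sample sizes rather than taken from the paper's tables.

```python
# Hypothetical counts: GSTM1 null vs. present, whites vs. blacks from Sao Paulo,
# reconstructed from the abstract's percentages (55.4% of 233; 32.8% of 137).
from scipy.stats import chi2_contingency

whites_null, whites_total = 129, 233   # ~55.4% null
blacks_null, blacks_total = 45, 137    # ~32.8% null

table = [
    [whites_null, whites_total - whites_null],
    [blacks_null, blacks_total - blacks_null],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.6f}")   # p on the order of 1e-4 or below
```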

Relevance: 100.00%

Abstract:

Fluid handling systems such as pump and fan systems are found to have significant potential for energy efficiency improvements. To realize this potential, easily implementable methods are needed to monitor the system output, since this information is required both to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable-speed drives have proven able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate in the whole operating range of the fluid handling device. To apply model-based system monitoring to a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
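
A minimal sketch of one ingredient of such model-based monitoring: estimating the operating point from the drive's own speed and shaft-power estimates via a QP curve scaled by the affinity laws. The curve coefficients and operating values below are invented for illustration; the paper's combined method and laboratory setup are not reproduced.

```python
# Sketch of QP-curve-based operating point estimation under illustrative
# assumptions: a quadratic shaft-power curve P(Q) = a0 + a1*Q + a2*Q^2
# at nominal speed (kW, l/s), scaled with the affinity laws (Q ~ n, P ~ n^3).
import numpy as np

a = np.array([1.2, 0.35, -0.002])   # hypothetical nominal QP curve coefficients
n_nom = 1450.0                      # nominal speed, rpm

def flow_from_power(P_meas, n_meas):
    """Estimate flow rate from drive-estimated shaft power and rotational speed."""
    r = n_meas / n_nom
    # P(Q; n) = r^3 * P_nom(Q/r) = a0*r^3 + a1*r^2*Q + a2*r*Q^2; solve for Q.
    roots = np.roots([a[2] * r, a[1] * r**2, a[0] * r**3 - P_meas])
    q = [x.real for x in roots if abs(x.imag) < 1e-9 and x.real >= 0]
    return min(q) if q else None   # rising branch of the power curve

print(f"estimated flow: {flow_from_power(P_meas=9.5, n_meas=1300.0):.1f} l/s")
```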

Relevance: 100.00%

Abstract:

Fluid handling systems account for a significant share of the global consumption of electrical energy. They also suffer from problems that reduce their energy efficiency and increase life-cycle costs. Detecting or predicting these problems in time can make fluid handling systems more environmentally and economically sustainable to operate. In this Master's Thesis, significant problems in fluid systems were studied and the possibility of developing variable-speed-drive-based detection methods for them was discussed. A literature review was conducted to find significant problems occurring in fluid handling systems containing pumps, fans and compressors. To find case examples for evaluating the feasibility of variable-speed-drive-based methods, queries were sent to industrial companies. As a result, the possibility of detecting heat exchanger fouling with a variable-speed drive was analysed with data from three industrial cases. It was found that a mass flow rate estimate, which can be generated with a variable-speed drive, can be used together with temperature measurements to monitor a heat exchanger's thermal performance. Secondly, it was found that the fouling-related increase in the pressure drop of a heat exchanger can be monitored with a variable-speed drive. Lastly, for systems where the flow device is speed-controlled based on a pressure measurement, it was concluded that an increasing rotational speed can be interpreted as progressing fouling in the heat exchanger.
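
The thermal-performance monitoring idea can be sketched as follows: combine a drive-based mass flow estimate with inlet/outlet temperatures to track the heat exchanger's UA value, whose downward trend indicates progressing fouling. All numbers below are illustrative, not from the thesis's industrial cases.

```python
# Sketch of UA-based fouling monitoring under illustrative assumptions:
# heat duty from the hot side, counter-flow log-mean temperature difference.
import math

CP_WATER = 4186.0  # J/(kg*K)

def ua_estimate(m_dot, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """UA from heat duty and LMTD; a falling UA trend suggests fouling."""
    q = m_dot * CP_WATER * (t_hot_in - t_hot_out)   # heat duty, W
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1
    return q / lmtd

clean = ua_estimate(12.0, 80.0, 60.0, 20.0, 45.0)
fouled = ua_estimate(12.0, 80.0, 66.0, 20.0, 38.0)  # same flow, worse transfer
print(f"UA clean:  {clean/1e3:.1f} kW/K")
print(f"UA fouled: {fouled/1e3:.1f} kW/K")
```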

Relevance: 100.00%

Abstract:

This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
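
A schematic, runnable skeleton of the three-stage design, with all internals reduced to stubs; only the control flow — cheap 1-D hypothesis generation, track association, and sparse, expensive 2-D verification — reflects the description above. Function names and data are placeholders.

```python
# Stub-based skeleton of the three-stage pipeline; nothing here is the
# paper's implementation, only the stage structure it describes.
import random

def generate_hypotheses_1d(frame):
    # Stage (i): low-cost pose/model hypotheses from 1-D templates
    # (stub: random candidate poses with a crude score)
    return [{"pose": (random.random(), random.random()), "score": random.random()}
            for _ in range(20)]

def update_tracks(tracks, hypotheses):
    # Stage (ii): keep the best-scoring hypotheses as continuing tracks
    ranked = sorted(hypotheses, key=lambda h: h["score"], reverse=True)
    return tracks + ranked[:3]

def verify_2d(frame, track):
    # Stage (iii): expensive 2-D template check, run only on surviving tracks
    return track["score"] > 0.5

def process_frame(frame, tracks):
    hypotheses = generate_hypotheses_1d(frame)
    tracks = update_tracks(tracks, hypotheses)
    return [t for t in tracks if verify_2d(frame, t)]

tracks = []
for frame in range(5):              # dummy frames
    tracks = process_frame(frame, tracks)
print(len(tracks), "verified tracks")
```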

Relevance: 100.00%

Abstract:

Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for the characterization of the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to the description of the spatial distribution of pixel intensities in mammographic images. These methods were tested with a set of 57 breast masses and 60 normal breast parenchyma samples (dataset1), and with another set of 19 architectural distortions and 41 normal breast parenchyma samples (dataset2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve (Az = 0.839 for dataset1 and 0.828 for dataset2) among the five methods for both datasets. Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than either feature alone for both datasets. The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma by generating higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its self-affinity assumption is the better approximation. Lacunarity is an effective counterpart measure to the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
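
For illustration, the sketch below computes two texture features of the kind discussed — a box-counting fractal dimension and a gliding-box lacunarity — on a synthetic binary texture. The paper's FBM estimator and mammographic data are not reproduced; in the study, such features would then feed an SVM classifier.

```python
# Toy feature extraction on a synthetic 64x64 binary texture; this is not
# the paper's FBM method, only a generic fractal/lacunarity illustration.
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.3).astype(int)   # toy binary texture

def box_count_dimension(a, sizes=(2, 4, 8, 16)):
    counts = []
    for s in sizes:
        blocks = a.reshape(a.shape[0]//s, s, a.shape[1]//s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # occupied boxes
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    return np.polyfit(np.log(1.0/np.array(sizes)), np.log(counts), 1)[0]

def lacunarity(a, box=4):
    # gliding-box mass distribution: lacunarity = var/mean^2 + 1
    masses = np.array([a[i:i+box, j:j+box].sum()
                       for i in range(a.shape[0]-box+1)
                       for j in range(a.shape[1]-box+1)], dtype=float)
    return masses.var()/masses.mean()**2 + 1.0

print(f"fractal dimension ~ {box_count_dimension(img):.2f}")
print(f"lacunarity (box=4) ~ {lacunarity(img):.3f}")
```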

Relevance: 100.00%

Abstract:

Morphometric methods permit identification of insect species and are an aid to taxonomy. Quantitative wing traits were used to identify male euglossine bees. Landmark- and outline-based methods have primarily been used independently; here, we combine the two methods using five Euglossa species. Landmark-based methods correctly classified 84% of samples and outline-based methods 77%, but an integrated analysis correctly classified 91%. Some species presented significantly high reclassification percentages when only the wing cell contour was considered, and correct identification of specimens with damaged wings was also obtained using this methodology.
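
The integration idea amounts to concatenating the landmark-based and outline-based descriptors into one feature matrix before classification. The sketch below demonstrates this on simulated stand-ins for five species; the actual morphometric pipeline is not reproduced.

```python
# Simulated stand-ins: 20 landmark-derived and 16 outline-derived descriptors
# per specimen, each carrying a partial species signal.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_species, n_species = 30, 5
labels = np.repeat(np.arange(n_species), n_per_species)
landmarks = rng.normal(labels[:, None] * 0.3, 1.0, (len(labels), 20))
outlines  = rng.normal(labels[:, None] * 0.2, 1.0, (len(labels), 16))

for name, X in [("landmarks", landmarks), ("outlines", outlines),
                ("combined", np.hstack([landmarks, outlines]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"{name:10s} accuracy: {acc:.2f}")
```

On data like these, the concatenated matrix typically classifies better than either descriptor set alone, mirroring the integrated analysis reported above.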

Relevance: 100.00%

Abstract:

International Journal of Paediatric Dentistry 2012; 22: 459–466. Aim. This in vitro study aimed to test the performance of fluorescence-based methods in detecting occlusal caries lesions in primary molars compared with conventional methods. Design. Two examiners assessed 113 sites on 77 occlusal surfaces of primary molars using three fluorescence devices: DIAGNOdent (LF), DIAGNOdent pen (LFpen), and a fluorescence camera (VistaProof-FC). Visual inspection (ICDAS) and radiographic methods were also evaluated. One examiner repeated the evaluations after one month. As the reference standard, lesion depth was determined after sectioning and evaluation under a stereomicroscope. The area under the ROC curve (Az), sensitivity, specificity, and accuracy of the methods were calculated at the enamel (D1) and dentine (D3) caries lesion thresholds. Intra- and interexaminer reproducibility were calculated using the intraclass correlation coefficient (ICC) and kappa statistics. Results. At D1, visual inspection presented higher sensitivities (0.97–0.99) but lower specificities (0.18–0.25). At D3, all the methods demonstrated similar performance (Az values around 0.90). Visual and radiographic methods showed slightly higher specificity (values above 0.96) than the fluorescence-based ones (values around 0.88). In general, all methods presented high reproducibility (ICC above 0.79). Conclusions. Although fluorescence-based and conventional methods present similar performance in detecting occlusal caries lesions in primary teeth, visual inspection alone seems to be sufficient for use in clinical practice.
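
The reported metrics follow mechanically once device scores and the histological reference are available. The sketch below computes Az, sensitivity, and specificity on simulated readings; the cut-off value and score distribution are placeholders, not the study's calibration.

```python
# Simulated device scores vs. a simulated histological ground truth for the
# study's 113 sites; only the metric computation mirrors the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, 113)                 # 1 = lesion present in histology
scores = truth * 1.0 + rng.normal(0, 0.8, 113)  # e.g. rescaled LF readings

az = roc_auc_score(truth, scores)
pred = scores > 0.5                             # hypothetical device cut-off
sens = (pred & (truth == 1)).sum() / (truth == 1).sum()
spec = (~pred & (truth == 0)).sum() / (truth == 0).sum()
print(f"Az = {az:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```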

Relevance: 100.00%

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from observed disease counts and expected disease counts calculated from reference-population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low, either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple-testing control, without abandoning the preliminary-study perspective expected of an analysis of SMR indicators. We implement control of the FDR, a quantity widely used to address multiple-comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding posterior null probabilities. This estimated FDR provides an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of relative risk estimation. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in the sets constituted by all areas whose posterior null probabilities fall below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing of relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both by our model and by the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an overestimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10, and although the sensitivity of estimated-FDR-based decision rules is generally low, specificity is high, so selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, values of FDR up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
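
The decision rule just described can be sketched compactly: the estimated FDR of any rejection set is the average of the posterior null probabilities it contains, and the rule selects as many areas as possible while that estimate stays within the prefixed level. The posterior values below are illustrative, not MCMC output.

```python
# Illustrative posterior null probabilities (absence of risk) for 8 areas,
# as they might come out of the MCMC fit described above.
import numpy as np

post_null = np.array([0.01, 0.02, 0.04, 0.08, 0.15, 0.30, 0.55, 0.70])

def select_high_risk(post_null, fdr_level=0.10):
    order = np.argsort(post_null)                # most convincing areas first
    # estimated FDR of the first k areas = mean of their posterior null probs
    running = np.cumsum(post_null[order]) / np.arange(1, len(post_null) + 1)
    k = int((running <= fdr_level).sum())        # largest set meeting the level
    return order[:k], (running[k - 1] if k else None)

areas, est_fdr = select_high_risk(post_null, fdr_level=0.10)
print("areas declared high-risk:", areas, "| estimated FDR:", est_fdr)
```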

Relevance: 100.00%

Abstract:

The topic of this work is nonparametric permutation-based methods that aim to find a ranking (stochastic ordering) of a given set of groups (populations), gathering together information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 given groups or treatments when the main goal is to find an order while taking several aspects into account. As can be imagined, this problem is not only of theoretical interest but also has recognised relevance in several fields, such as industrial experiments or the behavioural sciences, and this is reflected by the vast literature on the topic, although the problem is sometimes associated with different keywords such as "stochastic ordering", "ranking", "construction of composite indices", etc., or even "ranking probabilities" outside of the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study, where several aspects of interest are allowed to vary within a reasonable practical range. These aspects comprise: sample size, number of variables, number of groups, and distribution of noise/error. The flexibility of the approach lies mainly in the several available choices for the test statistic and in the different types of experimental design that can be analysed. This renders the method able to be tailored to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written; it is available on CRAN.
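
The elementary building block of such methods is the permutation test; the sketch below shows a two-sample permutation p-value for one pairwise comparison on one variable, with simulated data. The full machinery — combination across variables and designs, and the final ranking — is what the R package SOUP implements; this fragment only illustrates the core resampling step.

```python
# One-sided two-sample permutation test on simulated data: is group 2
# stochastically "larger" than group 1?
import numpy as np

rng = np.random.default_rng(3)
g1 = rng.normal(0.0, 1.0, 25)
g2 = rng.normal(0.6, 1.0, 25)

obs = g2.mean() - g1.mean()
pooled = np.concatenate([g1, g2])
B, count = 5000, 0
for _ in range(B):
    perm = rng.permutation(pooled)               # relabel under the null
    count += (perm[25:].mean() - perm[:25].mean()) >= obs
p_value = (count + 1) / (B + 1)
print(f"observed difference = {obs:.2f}, permutation p = {p_value:.4f}")
```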

Relevance: 100.00%

Abstract:

As a large and long-lived species with high economic value, restricted spawning areas and short spawning periods, the Atlantic bluefin tuna (BFT; Thunnus thynnus) is particularly susceptible to over-exploitation. Although BFT have been targeted by fisheries in the Mediterranean Sea for thousands of years, only in recent decades has the exploitation rate reached far beyond sustainable levels. An understanding of the population structure, spatial dynamics, exploitation rates and the environmental variables that affect BFT is crucial for the conservation of the species. The aims of this PhD project were 1) to assess the accuracy of larval identification methods, 2) to determine the genetic structure of modern BFT populations, 3) to assess the self-recruitment rate in the Gulf of Mexico and Mediterranean spawning areas, 4) to estimate the immigration rate of BFT to feeding aggregations from the various spawning areas, and 5) to develop tools capable of investigating the temporal stability of population structuring in the Mediterranean Sea. Several weaknesses in modern morphology-based taxonomy are reviewed, including the demographic decline of expert taxonomists, flawed identification keys, the reluctance of the taxonomic community to embrace advances in digital communications, and a general scarcity of modern user-friendly materials. Barcoding of scombrid larvae revealed important differences in the accuracy of the taxonomic identifications carried out by different ichthyoplanktologists following morphology-based methods. Using a genotyping-by-sequencing approach, a panel of 95 SNPs was developed and used to characterize the population structuring of BFT and the composition of adult feeding aggregations. Using novel molecular techniques, DNA was extracted from bluefin tuna vertebrae excavated from Late Iron Age and ancient Roman settlements, Byzantine-era Constantinople, and a 20th-century collection. A second panel of 96 SNPs was developed to genotype historical and modern samples in order to elucidate changes in population structuring and in the allele frequencies of loci associated with selective traits.

Relevance: 100.00%

Abstract:

Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
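
The measurement-based aggregation reduces to a population-weighted mean of municipality radon levels, corrected for floor distribution. In the sketch below the floor correction is compressed into a single scalar factor per municipality, and all numbers are invented for illustration.

```python
# Invented municipality-level data: (mean measured radon in Bq/m^3,
# population, floor-distribution correction factor).
municipalities = [
    (120.0, 15_000, 0.85),
    (60.0, 400_000, 0.95),
    (95.0, 80_000, 0.90),
]

# Population-weighted mean of the floor-corrected municipality means.
weighted = sum(radon * corr * pop for radon, pop, corr in municipalities)
population = sum(pop for _, pop, _ in municipalities)
print(f"population-weighted mean exposure: {weighted/population:.1f} Bq/m^3")
```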

Relevance: 100.00%

Abstract:

Congestive heart failure has long been one of the most serious medical conditions in the United States; in fact, in the United States alone, heart failure accounts for 6.5 million days of hospitalization each year. One important goal of heart-failure therapy is to inhibit the progression of congestive heart failure through pharmacologic and device-based therapies. Therefore, there have been efforts to develop device-based therapies aimed at improving cardiac reserve and optimizing pump function to meet metabolic requirements. The course of congestive heart failure is often worsened by other conditions, including new-onset arrhythmias, ischemia and infarction, valvulopathy, decompensation, end-organ damage, and therapeutic refractoriness, that have an impact on outcomes. The onset of such conditions is sometimes heralded by subtle pathophysiologic changes, and the timely identification of these changes may promote the use of preventive measures. Consequently, device-based methods could in the future have an important role in the timely identification of the subtle pathophysiologic changes associated with congestive heart failure.

Relevance: 100.00%

Abstract:

Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well-documented in previous work. We provide important results for existence of Markov equilibria for the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics such as models with fiat money and social security. We also show how some of our results extend to settings with unbounded state spaces.
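
The flavour of the monotone iterative algorithms mentioned above can be conveyed with a toy example: start from the greatest element of the candidate lattice and repeatedly apply an order-preserving operator until the sequence settles on the extremal fixed point. The operator below is an arbitrary monotone map on a grid, not a stochastic OLG equilibrium operator.

```python
# Toy monotone iteration: T preserves pointwise order between policies,
# so iterating from the greatest element converges to the greatest fixed point.
import numpy as np

k_grid = np.linspace(0.1, 4.0, 200)

def T(policy):
    # Damped move toward a fixed monotone target map (stand-in for an
    # equilibrium operator); if p >= q pointwise, then T(p) >= T(q).
    target = np.minimum(1.8 * np.sqrt(k_grid), k_grid.max())
    return 0.5 * policy + 0.5 * target

policy = np.full_like(k_grid, k_grid.max())   # greatest element of the lattice
for it in range(200):
    new = T(policy)
    if np.max(np.abs(new - policy)) < 1e-10:
        break
    policy = new
print(f"converged in {it} iterations; policy(k=1) ~ "
      f"{np.interp(1.0, k_grid, policy):.3f}")
```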

Relevance: 100.00%

Abstract:

Human identification from a skull is a critical process in legal and forensic medicine, especially when no other means are available. Traditional clay-based methods attempt to recreate the human face in order to identify the corresponding person; however, these reconstructions lack objectivity and consistency, since they depend on the practitioner. Current computerized techniques are based on facial models, which introduce undesired facial features when the final reconstruction is built. This paper presents an objective 3D craniofacial reconstruction technique, implemented in a graphic application, without using any facial template. The only information required by the software tool is the 3D image of the target skull and three parameters: the age, gender and Body Mass Index (BMI) of the individual. Complexity is minimized, since the application database consists only of the anthropological information provided by soft-tissue depth values at a set of points on the skull.
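
The core geometric operation is simple: offset each skull landmark outward along its surface normal by the statistical soft-tissue depth selected for the individual's age, gender, and BMI. In the sketch below, the landmark coordinates, normals, and depth table are invented placeholders, not the application's database.

```python
# Invented soft-tissue depth table (mm), keyed by (sex, BMI class) for one
# age group, and a few skull landmarks with outward surface normals (mm).
import numpy as np

DEPTHS = {
    ("M", "normal"): {"glabella": 5.3, "nasion": 6.5, "pogonion": 10.2},
    ("F", "normal"): {"glabella": 4.9, "nasion": 6.0, "pogonion": 9.5},
}

landmarks = {
    "glabella": (np.array([0.0, 91.0, 84.0]), np.array([0.0, 0.30, 0.95])),
    "nasion":   (np.array([0.0, 87.0, 79.0]), np.array([0.0, 0.20, 0.98])),
    "pogonion": (np.array([0.0, -62.0, 90.0]), np.array([0.0, -0.10, 0.99])),
}

def skin_points(landmarks, sex, bmi_class):
    """Offset each landmark along its unit normal by the tabulated depth."""
    table = DEPTHS[(sex, bmi_class)]
    return {name: p + table[name] * n / np.linalg.norm(n)
            for name, (p, n) in landmarks.items()}

for name, pt in skin_points(landmarks, "M", "normal").items():
    print(name, np.round(pt, 1))
```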

Relevance: 100.00%

Abstract:

We examine the occurrence of the ≈300 known protein folds in different groups of organisms. To do this, we characterize a large fraction of the currently known protein sequences (≈140,000) in structural terms, by matching them to known structures via sequence comparison (or by secondary-structure class prediction for those without structural homologues). Overall, we find that an appreciable fraction of the known folds are present in each of the major groups of organisms (e.g., bacteria and eukaryotes share 156 of 275 folds), and most of the common folds are associated with many families of nonhomologous sequences (i.e., >10 sequence families for each common fold). However, different groups of organisms have characteristically distinct distributions of folds. So, for instance, some of the most common folds in vertebrates, such as globins or zinc fingers, are rare or absent in bacteria. Many of these differences in fold usage are biologically reasonable, such as the folds of metabolic enzymes being common in bacteria and those associated with extracellular transport and communication being common in animals. They also have important implications for database-based methods for fold recognition, suggesting that an unknown sequence from a plant is more likely to have a certain fold (e.g., a TIM barrel) than an unknown sequence from an animal.
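
The fold-usage bookkeeping behind such comparisons is straightforward: tally fold occurrences per organism group from per-sequence assignments and intersect the resulting sets. The sketch below uses a handful of invented assignments in place of the ~140,000 matched sequences.

```python
# Invented (organism group, assigned fold) pairs standing in for the
# sequence-to-structure matches described above.
from collections import defaultdict

assignments = [
    ("bacteria", "TIM barrel"), ("bacteria", "Rossmann"),
    ("eukaryotes", "TIM barrel"), ("eukaryotes", "globin"),
    ("eukaryotes", "zinc finger"), ("archaea", "TIM barrel"),
]

folds_by_group = defaultdict(set)
for group, fold in assignments:
    folds_by_group[group].add(fold)

shared = folds_by_group["bacteria"] & folds_by_group["eukaryotes"]
print("bacteria/eukaryote shared folds:", sorted(shared))
# Normalizing such tallies gives group-specific fold frequencies of the kind
# the paper suggests could serve as priors for database-based fold recognition.
```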