941 results for Chebyshev And Binomial Distributions
Abstract:
A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign including baseline results of individual modalities from the new database is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
Abstract:
This letter presents a new recursive method for computing discrete polynomial transforms. The method is shown for forward and inverse transforms of the Hermite, binomial, and Laguerre transforms. The recursive flow diagrams require only 2N additions, 2(N+1) memory units, and N+1 multipliers for the (N+1)-point Hermite and binomial transforms. The recursive flow diagram for the (N+1)-point Laguerre transform requires 2N additions, 2(N+1) memory units, and 2(N+1) multipliers. The transform computation time for all of these transforms is O(N).
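The abstract gives operation counts but not the recurrences themselves, so the following is only a generic reference sketch of a discrete polynomial transform built from a user-supplied three-term recurrence; the coefficient functions alpha, beta and gamma are hypothetical placeholders, and this plain O(N^2) matrix formulation is not the paper's low-cost recursive flow diagram.

```python
import numpy as np

def discrete_polynomial_transform(x, nodes, alpha, beta, gamma):
    """Project the signal x, sampled at `nodes`, onto polynomials built from a
    generic three-term recurrence
        P_{k+1}(t) = (alpha(k) * t + beta(k)) * P_k(t) + gamma(k) * P_{k-1}(t),
    with P_0 = 1 and P_{-1} = 0.  This is an O(N^2) reference implementation
    for checking results only; it does not reproduce the paper's recursive
    flow diagrams or their operation counts."""
    N = len(x)
    P = np.zeros((N, N))                      # P[k, i] = P_k(nodes[i])
    P[0] = 1.0
    if N > 1:
        P[1] = alpha(0) * nodes + beta(0)
    for k in range(1, N - 1):
        P[k + 1] = (alpha(k) * nodes + beta(k)) * P[k] + gamma(k) * P[k - 1]
    return P @ x                              # unnormalized transform coefficients
```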
Abstract:
Background: There is a lack of international research on suicide by drug overdose as a preventable suicide method. Sex- and age-specific rates of suicide by drug self-poisoning (ICD-10, X60-64) and the distribution of drug types used in 16 European countries were studied, and compared with other self-poisoning methods (X65-69) and intentional self-injury (X70-84). Methods: Data for 2000-04/05 were collected from national statistical offices. Age-adjusted suicide rates, and age and sex distributions, were calculated. Results: No pronounced sex differences in drug self-poisoning rates were found, either in the aggregate data (males 1.6 and females 1.5 per 100,000) or within individual countries. Among the 16 countries, the range (from about 0.3 in Portugal to 5.0 in Finland) was wide. 'Other and unspecified drugs' (X64) were recorded most frequently, with a range of 0.2-1.9, and accounted for more than 70% of deaths by drug overdose in France, Luxembourg, Portugal and Spain. Psychotropic drugs (X61) ranked second. The X63 category ('other drugs acting on the autonomic nervous system') was least frequently used. Finland showed low X64 and high X61 figures, Scotland had high levels of X62 ('narcotics and hallucinogens, not elsewhere classified') for both sexes, while England exceeded other countries in category X60. Risk was highest among the middle-aged everywhere except in Switzerland, where the elderly were most at risk. Conclusions: Suicide by drug overdose is preventable. Intentional self-poisoning with drugs kills as many males as females. The considerable differences in patterns of self-poisoning found in the various European countries are relevant to national efforts to improve diagnostics of suicide and appropriate specific prevention. The fact that the vast majority of drug-overdose suicides fell under category X64 indicates that a more detailed ICD coding system for overdose suicides is needed to permit better design of suicide-prevention strategies at the national level.
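The abstract reports age-adjusted rates without stating the standardization method; direct standardization against a reference population is the usual approach, sketched below (the weights $w_a$ and the reference population are assumptions on my part, not taken from the study):

\[ \text{ASR} = 100{,}000 \times \sum_{a} w_a \,\frac{d_a}{n_a}, \qquad \sum_a w_a = 1, \]

where $d_a$ is the number of suicides and $n_a$ the person-years at risk in age group $a$, and $w_a$ is the share of age group $a$ in a standard population (e.g., the European standard population).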
Abstract:
Dispersal and recruitment are central processes that shape the geographic and temporal distributions of populations of marine organisms. However, significant variability in factors such as reproductive output, larval transport, survival, and settlement success can alter the genetic identity of recruits from year to year. We designed a temporal and spatial sampling protocol to test for genetic heterogeneity among adults and recruits from multiple time points along a ~400 km stretch of the Oregon (USA) coastline. In total, 2824 adult and recruiting Balanus glandula were sampled between 2001 and 2008 from 9 sites spanning the Oregon coast. Consistent with previous studies, we observed high mitochondrial DNA diversity at the cytochrome oxidase I locus (884 unique haplotypes) and little to no spatial genetic population structure among the 9 sites (Phi_ST = 0.00026, p = 0.170). However, subtle but significant temporal shifts in genetic composition were observed among year classes (Phi_ST = 0.00071, p = 0.035), and spatial Phi_ST varied from year to year. These temporal shifts in genetic structure were correlated with yearly differences in the strength of coastal upwelling (p = 0.002), with greater population structure observed in years with weaker upwelling. Higher levels of barnacle settlement were also observed in years with weaker upwelling (p < 0.001). These data suggest the hypothesis that low upwelling intensity maintains more local larvae close to shore, thereby shaping the genetic structure and settlement rate of recruitment year classes.
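The abstract reports that yearly genetic structure covaried with upwelling strength but does not name the statistic; a minimal sketch of one reasonable choice (a Spearman rank correlation on per-year values, with both input arrays supplied by the user) is shown below. This is only an illustration of the kind of test described, not the authors' analysis.

```python
from scipy import stats

def structure_vs_upwelling(yearly_phi_st, yearly_upwelling):
    """Rank-correlate yearly spatial Phi_ST estimates with a yearly coastal
    upwelling index.  Inputs are equal-length sequences of per-year values;
    Spearman's rho avoids assuming a linear relationship."""
    rho, p_value = stats.spearmanr(yearly_phi_st, yearly_upwelling)
    return rho, p_value
```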
Abstract:
We consider inference in randomized studies, in which repeatedly measured outcomes may be informatively missing due to dropout. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the dropout mechanism and posit an exponential tilt model that links non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by and applied to data from the Breast Cancer Prevention Trial.
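Schematically, and using notation of my own rather than the paper's, an exponential tilt model of the kind described links the unobservable outcome distribution among subjects who drop out at occasion $k$ to the identifiable distribution among those who remain on study:

\[ f\bigl(Y_k \mid \bar{Y}_{k-1},\ \text{drop out at } k\bigr) \;\propto\; \exp\{\alpha\, Y_k\}\; f\bigl(Y_k \mid \bar{Y}_{k-1},\ \text{still on study at } k\bigr), \]

where $\bar{Y}_{k-1}$ collects the observed history and the tilt parameter $\alpha$ is not identified from the observed data, which is why it is given an informative, expert-elicited prior. With $\alpha = 0$ the model reduces to missing at random. The paper's exact conditioning sets under non-future dependence may differ from this sketch.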
Abstract:
We present an overview of different methods for decomposing a multichannel spontaneous electroencephalogram (EEG) into sets of temporal patterns and topographic distributions. All of the methods presented here consider the scalp electric field as the basic analysis entity in space. In time, the resolution of the methods ranges from milliseconds (time-domain analysis) through subseconds (time- and frequency-domain analysis) to seconds (frequency-domain analysis). For any of these methods, we show that large parts of the data can be explained by a small number of topographic distributions. Physically, this implies that the brain regions that generated one of those topographies must have been active with a common phase. If several brain regions are producing EEG signals at the same time and frequency, they have a strong tendency to do this in a synchronized mode. This view is illustrated by several examples (including combined EEG and functional magnetic resonance imaging (fMRI)) and a selective review of the literature. The findings are discussed in terms of short-lasting binding between different brain regions through synchronized oscillations, which could constitute a mechanism to form transient, functional neurocognitive networks.
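The review covers several decomposition methods; as one minimal, hypothetical illustration of the time-domain idea, momentary scalp maps can be clustered into a few prototype topographies. The sketch below uses a plain k-means on normalized maps and is not any of the authors' specific algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

def topographic_decomposition(eeg, n_maps=4):
    """Cluster momentary scalp maps into a small set of topographies.
    `eeg` is a (channels x samples) array; each column is one momentary map.
    Maps are normalized so clustering reflects topography, not overall strength."""
    maps = eeg.T                                                   # one map per row
    maps = maps / (np.linalg.norm(maps, axis=1, keepdims=True) + 1e-12)
    km = KMeans(n_clusters=n_maps, n_init=10).fit(maps)
    return km.cluster_centers_                                     # prototype topographies
```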
Abstract:
One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate the sensitivity of the photon-beam subsource distributions in a MC source model (with target, primary collimator, and flattening filter photon subsources and an electron subsource) for 6- and 18-MV photon beams when the energy and radial distributions of initial electrons striking a linac target change. For this purpose, phase-space data (PSD) was calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD was analyzed in terms of energy, fluence, and energy fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and radial intensity significantly changed the target subsource distribution shapes and intensities. For the primary collimator and flattening filter subsources, the distribution shapes of the fluence and energy fluence changed little for different mean electron energies striking the target; however, their relative intensity compared with the target subsource changed, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource in conjunction with scaling the relative intensity and energy spectrum of the primary collimator, flattening filter, and electron subsources when the energy and radial distributions of the initial electron beam change.
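As a hedged illustration of the kind of per-subsource summaries compared in the study, phase-space particles can be binned into an energy spectrum and a radial fluence profile. Array names, units and the bin settings below are assumptions; real PSD files also carry direction cosines, particle type, and other fields.

```python
import numpy as np

def psd_distributions(energy, x, y, weight, n_bins=100, r_max=30.0):
    """Energy spectrum and radial fluence profile from phase-space data.
    Inputs are per-particle arrays: energy, scoring-plane x/y positions,
    and statistical weight."""
    r = np.hypot(x, y)
    energy_hist, e_edges = np.histogram(energy, bins=n_bins, weights=weight)
    r_hist, r_edges = np.histogram(r, bins=n_bins, range=(0.0, r_max), weights=weight)
    ring_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    radial_fluence = r_hist / ring_area        # particles per unit area in each ring
    return (energy_hist, e_edges), (radial_fluence, r_edges)
```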
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models for each species were fitted, one ignoring the measurement error, which is the “naïve” approach, and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), and core to the implementation of RC, in different spatial patterns and diameter distributions, revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors are met. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and alive trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models, and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
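A minimal sketch of the naive versus regression-calibration comparison described above, assuming the measurement-error variance of the competition variable is supplied externally (for instance from an estimator such as Stage and Wykoff's) and using the simple best-linear-predictor form of RC rather than the dissertation's exact algorithm:

```python
import numpy as np
import statsmodels.api as sm

def naive_vs_rc_logistic(y, w, z, sigma2_u):
    """Fit a naive logistic mortality model using the error-prone competition
    variable w directly, and a regression-calibration (RC) fit that first
    replaces w with an estimate of E[X | W].
    y: 0/1 mortality indicator (numpy array); w: error-prone covariate with
    measurement-error variance sigma2_u; z: matrix of other covariates (e.g., DBH)."""
    X_naive = sm.add_constant(np.column_stack([w, z]))
    naive = sm.Logit(y, X_naive).fit(disp=False)

    # RC step: best linear predictor of the true covariate given the observed w.
    mu_w, var_w = w.mean(), w.var(ddof=1)
    reliability = max(var_w - sigma2_u, 0.0) / var_w
    x_hat = mu_w + reliability * (w - mu_w)

    X_rc = sm.add_constant(np.column_stack([x_hat, z]))
    rc = sm.Logit(y, X_rc).fit(disp=False)
    return naive, rc
```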
Abstract:
We developed a gel- and label-free proteomics platform for comparative studies of human serum. The method involves the depletion of the six most abundant proteins, protein fractionation by Off-Gel IEF and RP-HPLC, followed by tryptic digestion, LC-MS/MS, protein identification, and relative quantification using probabilistic peptide match score summation (PMSS). We evaluated the performance and reproducibility of the complete platform and the individual dimensions, using chromatograms of the RP-HPLC runs, PMSS-based abundance scores and abundance distributions as objective endpoints. We were interested in whether a relationship exists between the quantity ratio and the PMSS score ratio. The complete analysis was performed four times with two sets of serum samples containing different concentrations of spiked bovine beta-lactoglobulin (0.1 and 0.3%, w/w). The two concentrations resulted in significantly differing PMSS scores when compared to the variability in PMSS scores of all other protein identifications. We identified 196 proteins, of which 116 were identified four times in corresponding fractions, and of these, 73 qualified for relative quantification. Finally, we characterized the PMSS-based protein abundance distributions with respect to the two dimensions of fractionation and discussed some interesting patterns representing discrete isoforms. We conclude that the combination of Off-Gel electrophoresis (OGE) and HPLC is a reproducible protein fractionation technique, that PMSS is applicable for relative quantification, that the number of quantifiable proteins is always smaller than the number of identified proteins, and that reproducibility of protein identifications should supplement probabilistic acceptance criteria.
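The scoring details of PMSS are in the paper itself; the summation idea is simple and is sketched below under my own assumptions about the input format (an iterable of protein/score pairs per sample).

```python
from collections import defaultdict

def pmss_abundance(peptide_matches):
    """Sum peptide match scores per protein to get a PMSS-style abundance score.
    `peptide_matches` is an iterable of (protein_id, score) pairs from one sample."""
    abundance = defaultdict(float)
    for protein_id, score in peptide_matches:
        abundance[protein_id] += score
    return dict(abundance)

def relative_quantification(abundance_a, abundance_b):
    """Ratio of PMSS abundance scores for proteins quantified in both samples."""
    shared = abundance_a.keys() & abundance_b.keys()
    return {p: abundance_a[p] / abundance_b[p] for p in shared if abundance_b[p] > 0}
```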
Abstract:
We present an overview of our analyses of HiRISE observations of the spring evolution of selected dune areas of the north polar erg. The north polar erg is covered annually by a seasonal volatile ice layer, a mixture of CO2 and H2O with mineral dust contamination. In spring, this layer sublimes, creating visually enigmatic phenomena, e.g. dark and bright fan-shaped deposits, dark–bright–dark bandings, dark down-slope streaks, and seasonal polygonal cracks. Similar phenomena in southern polar areas are believed to be related to the specific process of the solid-state greenhouse effect. In the north, it is currently unclear whether the solid-state greenhouse effect is able to explain all the observed phenomena, especially because the increased influence of H2O on the time scales of this process has not yet been quantified. HiRISE observations of our selected locations show that the ground exhibits a temporal behaviour similar to the one observed in the southern polar areas: a brightening phase starting close to the spring equinox with a subsequent darkening towards summer solstice. The resolution of HiRISE enabled us to study dunes and substrate individually and even distinguish between different developments on the windward and slip-face sides of single dunes. Differences in the seasonal evolution between steep slip faces and the flatter substrate and windward sides of dunes have been identified and compared to CRISM data of CO2 and H2O distributions on dunes. We also observe small-scale dark blotches that appear in early observations and tend to sustain a low reflectivity throughout the spring. These blotches can be regarded as the analogue of dark fan deposits in southern polar areas, leading us to the conclusion that both martian polar areas follow similar spring evolutions.
Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600–present
Abstract:
As the understanding and representation of the impacts of volcanic eruptions on climate have improved in recent decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details of the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry–climate model (CCM) simulations. This new volcanic forcing, covering the 1600–present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of eruptions. The evolution of the aerosol spatial and size distributions after the sulfur dioxide discharge is hence characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to the location/date of eruptions and injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of the spatial distributions of shortwave and longwave radiative perturbations, this volcanic forcing may help to better constrain the climate model responses to volcanic eruptions in the 1600–present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. Therefore, it is essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of the normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
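For readers unfamiliar with the sampling scheme, the sketch below generates a ranked set sample under perfect ranking (ranking on the actual values), which is exactly the assumption the proposed test is designed to check; the function and its arguments are illustrative, not taken from the article.

```python
import numpy as np

def ranked_set_sample(population, set_size, cycles, seed=None):
    """Ranked set sampling (RSS) under perfect ranking: in each cycle, draw
    `set_size` random sets of `set_size` units, rank each set, and keep the
    i-th ranked unit from the i-th set."""
    rng = np.random.default_rng(seed)
    sample = []
    for _ in range(cycles):
        for i in range(set_size):
            candidates = rng.choice(population, size=set_size, replace=False)
            sample.append(np.sort(candidates)[i])
    return np.array(sample)
```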
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICERs). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex- and age-distributions and unit costs.
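For reference, the incremental cost-effectiveness ratio compared here is the standard one (notation mine):

\[ \text{ICER} = \frac{C_{\text{strategy}} - C_{\text{comparator}}}{\text{DALYs averted}_{\text{strategy}} - \text{DALYs averted}_{\text{comparator}}}, \]

expressed in US$ per DALY averted, so that, for example, the US$1651-1766/DALY range for introducing 2nd-line ART is the additional lifetime cost divided by the additional DALYs averted relative to no 2nd-line ART.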
Abstract:
Insufficient and unrepresentative participation in voluntary public hearings and policy discussions has been problematic since Aristotle's time. In fisheries, research has shown that involvement is dominated by financially resourceful and extreme-opinion stakeholders and tends to advantage groups that have a lower cost of attendance. Stakeholders may exhibit only one or all of these traits but can still be similarly advantaged. The opposites of these traits tend to characterize the disadvantaged, such as those holding middle-ground opinions, the less wealthy or organized, and the more remote stakeholders. Remoteness or distance is the most straightforward and objective of these characteristics to measure. We analyzed the New England Fishery Management Council's sign-in sheets for 2003-2006, estimating participants' travel distance and associations with the groundfish, scallop, and herring industries. We also evaluated the representativeness of participation by comparing attendance to landings and permit distributions. The distance analysis showed a significant correlation between attendance levels and costs via travel distance. These results suggest a potential bias toward those stakeholders residing closer to meeting locations, possibly disadvantaging parties who are further away and must incur higher costs. However, few significant differences were found between the actual fishing industry and attendee distributions, suggesting that the geographical distribution of the meeting attendees is statistically similar to that of the larger fishery. The interpretation of these results must take into consideration the limited time span of the analysis, as policy changes may have altered the industry make-up and location prior to our study. Furthermore, the limited geographical input of stakeholders may lend bias to the Council's perception of ecological and social conditions throughout the spatial range of the fishery. These factors should be further considered in the policy-formation process in order to incorporate a broader range of stakeholder input.
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
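Schematically, the bivariate binomial model favored in the conclusion can be written as follows (the notation and logit link are my own shorthand; the abstract specifies uniform priors on the variances and correlation and vague normal priors on the covariate coefficients):

\[ TP_i \sim \text{Binomial}(n_{1i},\, Se_i), \qquad TN_i \sim \text{Binomial}(n_{0i},\, Sp_i), \]
\[ \begin{pmatrix} \operatorname{logit} Se_i \\ \operatorname{logit} Sp_i \end{pmatrix} \sim N_2\!\left( \begin{pmatrix} \mu_{Se} + X_i \beta_{Se} \\ \mu_{Sp} + X_i \beta_{Sp} \end{pmatrix},\ \Sigma \right), \]

where $n_{1i}$ and $n_{0i}$ are the numbers of diseased and non-diseased subjects in study $i$, $X_i$ encodes the imaging-technique covariates, and $\Sigma$ carries the between-study variances and the correlation between sensitivity and specificity.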