890 results for monotone estimating
Abstract:
The European standard for gillnet sampling to characterize lake fish communities stratifies sampling effort (i.e., number of nets) within depth strata. Nets to sample benthic habitats are randomly distributed throughout the lake within each depth stratum. Pelagic nets are also stratified by depth, but are set only at the deepest point of the lake. Multiple authors have suggested that this design under-represents pelagic habitats, resulting in estimates of whole-lake CPUE and community composition that are disproportionately influenced by the ecological conditions of littoral and benthic habitats. To address this issue, researchers have proposed estimating whole-lake CPUE by weighting the catch rate in each depth compartment by the proportion of lake volume contributed by that compartment. Our study aimed to assess the effectiveness of volume-weighting by applying it to fish communities sampled according to the European standard (CEN) and according to a second whole-lake gillnetting protocol (VERT), which prescribes additional fishing effort in pelagic habitats. We assume that convergence between the protocols indicates that volume-weighting provides a more accurate estimate of whole-lake catch rate and community composition. Our results indicate that volume-weighting improves agreement between the protocols for whole-lake total CPUE, the estimated proportions of perch and roach, and overall fish community composition. Discrepancies between the protocols that remain after volume-weighting may arise because sampling under the CEN protocol overlooks horizontal variation in pelagic fish communities. Analyses based on multiple pelagic-set VERT nets identified gradients in the density and biomass of pelagic fish communities in almost half the lakes, corresponding to the depth of water at the net-setting location and the distance along the length of the lake. Additional CEN pelagic sampling effort, allocated across water depths and distributed throughout the lake, would therefore help to reconcile differences between the sampling protocols and, in combination with volume-weighting, converge on a more accurate estimate of whole-lake fish communities.
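To make the volume-weighting explicit, a minimal sketch with made-up depth-compartment values (not data from the study) is shown below; each compartment's catch rate is weighted by the fraction of total lake volume it contributes.

```python
# Hypothetical illustration of volume-weighted whole-lake CPUE:
# each depth compartment's catch rate is weighted by the fraction
# of total lake volume that the compartment contributes.

compartments = [
    # (CPUE in compartment, compartment volume in m^3) -- made-up values
    (42.0, 1.2e6),   # littoral / shallow stratum
    (18.5, 3.4e6),   # mid-depth stratum
    (6.2,  5.1e6),   # deep / pelagic stratum
]

total_volume = sum(volume for _, volume in compartments)
whole_lake_cpue = sum(cpue * volume / total_volume for cpue, volume in compartments)

print(f"Volume-weighted whole-lake CPUE: {whole_lake_cpue:.2f}")
```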
Abstract:
Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT), in which both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, which corresponds to the number of fences involved in the problem. Each fence consists of a set of observations in which each observation belongs to a different object. The MTT problem with S ≥ 3 is an NP-hard combinatorial optimization problem. There are two general ways to solve it. One is to seek the optimal solution, for example by applying a branch-and-bound algorithm; when using such algorithms the problem has to be greatly simplified to keep the computational cost at a reasonable level. The other is to approximate the solution using meta-heuristic methods, which aim to explore the space of possible combinations efficiently so that a good result can be obtained at reasonable computational cost. To this end, several population-based meta-heuristic methods are implemented and tested on simulated optical measurements. With the advent of improved sensors and a heightened interest in the problem of space debris, the number of tracked objects is expected to grow by an order of magnitude in the near future. This research aims to provide a method that treats the correlation and orbit determination problems simultaneously and is able to efficiently process large data sets with minimal manual intervention.
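As a toy illustration of the kind of population-based search such meta-heuristics perform (this is not the implementation used in the study), the sketch below treats the association problem as choosing one permutation of observations per fence and minimising a made-up pairwise mismatch cost between consecutive fences; in a real application the cost would come from orbit-determination residuals.

```python
import random

S = 4          # number of fences (hypothetical)
N = 5          # observations per fence = number of objects (hypothetical)

random.seed(0)
# cost[s][i][j]: made-up mismatch between observation i in fence s and
# observation j in fence s+1 (a real cost would come from orbit fits).
cost = [[[random.random() for _ in range(N)] for _ in range(N)] for _ in range(S - 1)]

def total_cost(candidate):
    """candidate[s] is a permutation: candidate[s][k] is the observation index
    in fence s assigned to object k. Sum mismatch along consecutive fences."""
    return sum(cost[s][candidate[s][k]][candidate[s + 1][k]]
               for s in range(S - 1) for k in range(N))

# Population-based random search with a simple mutation (swap two assignments).
population = [[random.sample(range(N), N) for _ in range(S)] for _ in range(50)]
for _ in range(200):
    population.sort(key=total_cost)
    survivors = population[:10]
    children = []
    for parent in survivors:
        child = [perm[:] for perm in parent]
        s = random.randrange(S)
        i, j = random.sample(range(N), 2)
        child[s][i], child[s][j] = child[s][j], child[s][i]
        children.append(child)
    population = survivors + children + \
        [[random.sample(range(N), N) for _ in range(S)] for _ in range(30)]

print("best association cost:", total_cost(min(population, key=total_cost)))
```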
Abstract:
Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll-call votes, these data suffer from two problems: selection bias due to unrecorded votes, and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet the differences between roll-call estimates and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using these data, we compare text-based scaling of ideal points to vote-based scaling for a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll-call and text scalings, attributing them to constituency-level preferences for energy policy.
Abstract:
Five test runs were performed to assess possible bias when performing the loss-on-ignition (LOI) method to estimate the organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO3 at 950 °C, whereas the LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40-70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposures of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples; at 950 °C this pattern was still apparent, but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality-control experiment involving ten different laboratories was carried out, in which each laboratory analysed three different sediments using both its own LOI procedure and a standardised LOI procedure. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was also true at 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised methods suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace, and the laboratory performing the measurement affected LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We therefore recommend that analysts be consistent in the LOI method used with respect to ignition temperature, exposure time, and sample size, and that they include information on these three parameters when referring to the method.
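For reference, the basic LOI arithmetic described here can be sketched as follows, with made-up sample weights; the conversion of the 950 °C weight loss into a carbonate estimate assumes that this loss is CO2 released from CaCO3.

```python
# Illustrative (made-up) loss-on-ignition arithmetic: organic matter is
# estimated from the weight loss at 550 °C, and carbonate content from the
# additional loss at 950 °C (CO2 is 44/100 of the CaCO3 mass, so the CO2
# loss is scaled by 100/44 to give a CaCO3 equivalent).

dry_weight_g = 2.000          # hypothetical sample dry weight at 105 °C
weight_after_550_g = 1.720    # hypothetical weight after 2 h at 550 °C
weight_after_950_g = 1.610    # hypothetical weight after 2 h at 950 °C

loi_550 = 100.0 * (dry_weight_g - weight_after_550_g) / dry_weight_g
loi_950 = 100.0 * (weight_after_550_g - weight_after_950_g) / dry_weight_g
carbonate = loi_950 * 100.0 / 44.0   # scale CO2 loss to a CaCO3 equivalent

print(f"LOI 550 (organic matter proxy): {loi_550:.1f}%")
print(f"LOI 950 (CO2 from carbonates):  {loi_950:.1f}%")
print(f"Approx. CaCO3 content:          {carbonate:.1f}%")
```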
Abstract:
Conclusion: Using a second bone-anchored hearing implant (BAHI) mounted on a testband in unilaterally implanted BAHI users to test its potential advantage pre-operatively under-estimates the advantage of two BAHIs placed on two implants. Objectives: To investigate how well speech understanding with a second BAHI mounted on a testband approaches the benefit of bilaterally implanted BAHIs. Method: Prospective study with 16 BAHI users. Eight were implanted unilaterally (group A) and eight were implanted bilaterally (group B). Aided speech understanding was measured with speech presented from the front and noise coming from the left, the right, or the front, in two conditions for group A (with one BAHI, and with two BAHIs, where the second device was mounted on a testband) and in three conditions for group B (the same two conditions as group A and, in addition, with both BAHIs mounted on implants). Results: Speech understanding in noise improved with the additional device for noise from the side of the first BAHI (+0.7 to +2.1 dB) and decreased for noise from the other side (-1.8 to -3.9 dB). Improvements were largest (+2.1 dB, p = 0.016) and disadvantages smallest (-1.8 dB, p = 0.047) with both BAHIs mounted on implants. Testbands yielded smaller advantages and larger disadvantages for the additional BAHI (average difference = -0.9 dB).
Abstract:
The 15N abundance of nitrogen oxides (NOx) emitted from vehicles, measured in the air adjacent to a highway in the Swiss Middle Land, was very high [δ15N(NO2) = +5.7‰]. This high 15N abundance was used to estimate long-term NO2 dry deposition into a forest ecosystem by measuring δ15N in the needles and the soil of potted and autochthonous spruce trees [Picea abies (L.) Karst] exposed to NO2 along a transect orthogonal to the highway. δ15N in the current-year needles of potted trees was 2.0‰ higher than that of the control after 4 months of exposure close to the highway, suggesting a 25% contribution of highway-derived NO2 to the N nutrition of these needles. Needle fall into the pots was prevented by grids placed above the soil, whereas the continuous decomposition of needle litter below the autochthonous trees over previous years had increased δ15N values in the soil, resulting in parallel gradients of δ15N in soil and needles with distance from the highway. Estimates of NO2 uptake into needles obtained from the δ15N data were significantly correlated with the inputs calculated with a shoot gas-exchange model based on a parameterisation widely used in deposition modelling. We therefore provide an indication of estimated N inputs to forest ecosystems via dry deposition of NO2 at the receptor level under field conditions.
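The 25% figure is consistent with a simple two-end-member mixing calculation; the sketch below uses an assumed source end-member offset purely for illustration and is not the paper's exact parameterisation.

```python
# Illustrative two-end-member isotope mixing calculation (not the paper's
# exact parameterisation): the fraction of needle N derived from highway NO2
# follows from the measured shift in delta15N relative to control needles.
# The source end-member offset below is an assumed figure for illustration.

delta_shift_vs_control = 2.0   # per mil, from the abstract (current-year needles)
delta_source_vs_control = 8.0  # per mil, assumed offset of the NO2-derived N source

fraction_from_no2 = delta_shift_vs_control / delta_source_vs_control
print(f"Estimated contribution of NO2 to needle N: {fraction_from_no2:.0%}")
```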
Abstract:
No abstract available.
Abstract:
Consider a nonparametric regression model Y = μ*(X) + e, where the explanatory variables X are endogenous and e satisfies the conditional moment restriction E[e | W] = 0 with probability 1 for instrumental variables W. It is well known that in these models the structural parameter μ* is 'ill-posed' in the sense that the mapping from the data to μ* is not continuous. In this paper, we derive the efficiency bounds for estimating the linear functionals E[p(X)μ*(X)] and ∫_{supp(X)} p(x)μ*(x) dx, where p is a known weight function and supp(X) is the support of X, without assuming μ* to be well-posed or even identified.
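For concreteness, the two linear functionals can be written out explicitly together with a generic plug-in estimator of the first; the plug-in form below is an illustrative sketch (with \hat{\mu} an arbitrary consistent NPIV estimate), not the estimator studied in the paper.

```latex
\[
\theta_1 = \mathbb{E}\bigl[p(X)\,\mu^{*}(X)\bigr],
\qquad
\theta_2 = \int_{\operatorname{supp}(X)} p(x)\,\mu^{*}(x)\,dx,
\qquad
\hat{\theta}_1 = \frac{1}{n}\sum_{i=1}^{n} p(X_i)\,\hat{\mu}(X_i).
\]
```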
Abstract:
The discrete-time Markov chain is commonly used to describe changes of health states for chronic diseases in a longitudinal study. Statistical inferences for comparing treatment effects or finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have a categorical outcome with missing observations or (2) that use complete or incomplete surrogate observations to analyze a categorical latent outcome. For (1), different missing-data mechanisms were considered in empirical studies using methods that include the EM algorithm, Monte Carlo EM, and a procedure that is not a data-augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation; this method was also extended to cover the computation of standard errors. The proposed methods were demonstrated with the Schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research were also discussed.
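As a point of reference for the forward-backward procedure mentioned above, a minimal generic sketch for a two-state discrete hidden Markov model with assumed parameters (not the dissertation's model or data) is shown below; it computes the posterior probability of each hidden health state at each visit from the observed surrogate categories.

```python
# Generic forward-backward sketch for a discrete hidden Markov model.
import numpy as np

A = np.array([[0.9, 0.1],      # assumed transition matrix between 2 health states
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],      # assumed emission probs of the surrogate outcome
              [0.4, 0.6]])
pi = np.array([0.6, 0.4])      # assumed initial state distribution
obs = [0, 1, 1, 0]             # observed surrogate categories over 4 visits

T, S = len(obs), len(pi)
alpha = np.zeros((T, S))
beta = np.ones((T, S))

alpha[0] = pi * B[:, obs[0]]                     # forward pass
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

for t in range(T - 2, -1, -1):                   # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)        # posterior state probabilities
print(gamma.round(3))
```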
Abstract:
External beam radiation therapy is used to treat nearly half of the more than 200,000 new cases of prostate cancer diagnosed in the United States each year. During a radiation therapy treatment, healthy tissues in the path of the therapeutic beam are exposed to high doses. In addition, the whole body is exposed to a low-dose bath of unwanted scatter radiation from the pelvis and leakage radiation from the treatment unit. As a result, survivors of radiation therapy for prostate cancer face an elevated risk of developing a radiogenic second cancer. Recently, proton therapy has been shown to reduce the dose delivered by the therapeutic beam to normal tissues during treatment compared to intensity-modulated x-ray therapy (IMXT, the current standard of care). However, the magnitude of stray radiation doses from proton therapy, and their impact on the incidence of radiogenic second cancers, was not known. The risk of a radiogenic second cancer following proton therapy for prostate cancer relative to IMXT was determined for 3 patients of large, median, and small anatomical stature. Doses delivered to healthy tissues from the therapeutic beam were obtained from treatment planning system calculations. Stray doses from IMXT were taken from the literature, while stray doses from proton therapy were simulated using a Monte Carlo model of a passive-scattering treatment unit and an anthropomorphic phantom. Baseline risk models were taken from the Biological Effects of Ionizing Radiation VII report. A sensitivity analysis was conducted to characterize the sensitivity of the risk calculations to uncertainties in the risk model, the relative biological effectiveness (RBE) of neutrons for carcinogenesis, and inter-patient anatomical variations. The risk projections revealed that proton therapy carries a lower risk of radiogenic second cancer incidence following prostate irradiation than IMXT. The sensitivity analysis revealed that the results of the risk analysis depended only weakly on uncertainties in the risk model and inter-patient variations. Second cancer risks were sensitive to changes in the RBE of neutrons. However, the findings of the study were qualitatively consistent for all patient sizes and risk models considered, and for all neutron RBE values less than 100.
Abstract:
The economic impact of research misconduct in medical research has been unexplored. While research misconduct in publicly funded medical research has increasingly been the object of discussion, public policy debate, government and institutional action, and scientific research, the costs of research misconduct have remained unexamined. The author develops a model to estimate the per-case cost of research misconduct, specifically the costs of fabrication, falsification, and plagiarism, in publicly funded medical research. Using the database of Research Misconduct Findings maintained by the Office of Research Integrity, Department of Health and Human Services, the model is used to estimate the costs of research misconduct in publicly funded medical research among faculty during the period 2000-2005.
Abstract:
A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived, and estimation procedures related to this derivation were shown in detail. Three examples of applications of the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient to more traditional estimators. The non-Bayesian methods of estimation used were the methods of analysis of variance and maximum likelihood for balanced data, and the methods of MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful, and practical alternatives to traditional methods of estimation.
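For comparison with the Bayesian estimates, a short sketch of the classical one-way ANOVA estimator of the intraclass correlation coefficient, one of the non-Bayesian comparators mentioned above, follows; the ratings are made-up balanced data.

```python
# One-way ANOVA estimator of the intraclass correlation coefficient, ICC(1,1),
# for balanced data: (MSB - MSW) / (MSB + (k - 1) * MSW). Data are made up.
import numpy as np

ratings = np.array([          # rows = subjects, columns = repeated ratings
    [9.0, 8.5, 9.2, 8.8],
    [6.1, 5.9, 6.4, 6.0],
    [7.5, 7.8, 7.2, 7.6],
    [4.9, 5.2, 5.0, 5.1],
])
n, k = ratings.shape

grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)

ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))

icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ANOVA estimate of the ICC(1,1): {icc:.3f}")
```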
Abstract:
A Bayesian approach to estimation of the regression coefficients of a multinomial logit model with ordinal-scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method described is referred to as a Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model and Case II involves a proportional-odds model. All inferences about the coefficients for both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared to maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model, is not complex or computationally intensive, and offers several advantages over other Bayesian approaches.