889 results for Step count
Abstract:
The contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Typically, cases which have never been listed at any occasion are left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models like the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
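For the simplest of these models, the homogeneous Poisson, the idea can be sketched as follows (a minimal illustration with made-up counts; the function names are ours, not the paper's): the rate is estimated from the zero-truncated likelihood via its mean equation, and the population size follows from a Horvitz-Thompson-type correction for the unobserved zero class.

```python
import math

def zt_poisson_mle(counts):
    """MLE of lambda under a zero-truncated Poisson, found by fixed-point
    iteration on the mean equation mean(counts) = lambda / (1 - exp(-lambda))."""
    xbar = sum(counts) / len(counts)
    lam = xbar  # starting value
    for _ in range(200):
        lam = xbar * (1.0 - math.exp(-lam))
    return lam

def horvitz_thompson(counts):
    """Population size estimate N = n / (1 - exp(-lambda_hat)), where n is
    the number of cases identified at least once."""
    n = len(counts)
    lam = zt_poisson_mle(counts)
    return n / (1.0 - math.exp(-lam))
```

The correction inflates the observed count n by the estimated probability of being seen at least once; the same template carries over to mixture models by replacing the single Poisson with a fitted mixture.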
Abstract:
Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applies this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case holding size should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model in which the hypothesis of proportionality enters as a special offset-model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models as well as on the basis of the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes.
Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
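The offset-versus-covariate comparison can be sketched on simulated data (the simulated sizes, coefficient values, and function names below are hypothetical; the paper fits real surveillance data): under proportionality the coefficient of log holding size is fixed at 1, so the two zero-truncated Poisson fits differ by one parameter and can be compared with a one-degree-of-freedom likelihood ratio test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(42)

# Simulated holdings: a non-proportional effect of holding size (slope 0.5, not 1).
size = rng.integers(10, 500, size=2000)
lam_true = np.exp(-2.0 + 0.5 * np.log(size))
counts = rng.poisson(lam_true)
obs = counts[counts > 0].astype(float)       # zero-truncated sample
logsize = np.log(size[counts > 0])

def neg_ll(beta, fix_slope=None):
    """Negative zero-truncated Poisson log-likelihood; the constant
    log(x!) term is omitted since it cancels in the likelihood ratio."""
    b0 = beta[0]
    b1 = fix_slope if fix_slope is not None else beta[1]
    lam = np.exp(b0 + b1 * logsize)
    ll = obs * np.log(lam) - lam - np.log1p(-np.exp(-lam))
    return -ll.sum()

# Offset (proportionality) model: slope fixed at 1; covariate model: slope free.
fit_off = minimize(neg_ll, x0=[0.0], args=(1.0,), method="Nelder-Mead")
fit_cov = minimize(neg_ll, x0=[0.0, 1.0], method="Nelder-Mead")
lrt = 2.0 * (fit_off.fun - fit_cov.fun)
p_value = chi2.sf(lrt, df=1)
```

With a true slope of 0.5 the test rejects proportionality decisively, mirroring the paper's finding that the offset model is inadequate.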
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of construction of the nonparametric mixture model estimator are reviewed and set into perspective. Construction of the maximum likelihood estimator of the mixing distribution is done for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered with some generalisations of Zelterman’s estimator. All computations are done with CAMCR, a special software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems using the mixture model-based estimators are highlighted.
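The two frequency-based estimators mentioned here have simple closed forms using only the numbers f1 and f2 of cases identified exactly once and exactly twice (a sketch with illustrative counts; both formulas assume f1, f2 > 0):

```python
import math
from collections import Counter

def chao_lower_bound(counts):
    """Chao's lower-bound estimator: N = n + f1^2 / (2 * f2)."""
    f = Counter(counts)
    n = len(counts)
    return n + f[1] ** 2 / (2.0 * f[2])

def zelterman(counts):
    """Zelterman's estimator: lambda = 2 * f2 / f1, then
    N = n / (1 - exp(-lambda)) as in the Poisson case."""
    f = Counter(counts)
    n = len(counts)
    lam = 2.0 * f[2] / f[1]
    return n / (1.0 - math.exp(-lam))
```

Both use only the low counts, which makes them robust to misspecification in the upper tail of the count distribution.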
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture-recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture-recapture models. Alternative methods, still under the capture-recapture methodology, relying on repeated entries in one single list have been suggested in these situations. In this article, we apply one-list capture-recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. For doing so, we develop a new diagnostic tool for indication of heterogeneity as well as a new understanding of the Zelterman and Chao lower-bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates: in the case studied here, the holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
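The binomial reading of the Zelterman estimator can be checked directly: conditional on a count of one or two, a Poisson(lambda) count equals two with probability lambda/(lambda + 2), and the binomial MLE of that probability reproduces the classical lambda = 2*f2/f1 (the frequencies below are hypothetical; the logistic-regression extension replaces the constant success probability with a covariate-dependent one and is not shown here):

```python
import math

# Hypothetical frequencies of holdings notified exactly once / twice,
# and the total number of observed holdings (all counts >= 1).
f1, f2 = 120, 30
n = 180

# Binomial view: among counts of 1 or 2, a "success" (count == 2) has
# probability lambda / (lambda + 2) under a Poisson(lambda) model.
p_hat = f2 / (f1 + f2)
lam_binomial = 2.0 * p_hat / (1.0 - p_hat)

# Classical Zelterman form: lambda = 2 * f2 / f1.
lam_zelterman = 2.0 * f2 / f1
assert math.isclose(lam_binomial, lam_zelterman)

# Horvitz-Thompson-type population size estimate.
N_hat = n / (1.0 - math.exp(-lam_zelterman))
```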
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
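The equivalence can be verified numerically for a two-component Poisson example (the weights and rates below are arbitrary illustrative values): remapping the weights by q_j proportional to p_j * (1 - exp(-lambda_j)) turns the truncated mixture into an identical mixture of truncated densities.

```python
import math

def pois(x, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

# A two-component Poisson mixture: weights p, rates lam.
p = [0.4, 0.6]
lam = [0.7, 2.5]

def truncated_mixture(x):
    """Mixture density, truncated at zero afterwards (x >= 1)."""
    num = sum(pj * pois(x, lj) for pj, lj in zip(p, lam))
    return num / (1.0 - sum(pj * math.exp(-lj) for pj, lj in zip(p, lam)))

# Remapped weights for the mixture of zero-truncated components.
q_raw = [pj * (1.0 - math.exp(-lj)) for pj, lj in zip(p, lam)]
s = sum(q_raw)
q = [qr / s for qr in q_raw]

def mixture_of_truncated(x):
    """Mixture of zero-truncated Poisson densities with weights q (x >= 1)."""
    return sum(qj * pois(x, lj) / (1.0 - math.exp(-lj))
               for qj, lj in zip(q, lam))
```

The two functions agree at every support point, which is exactly the explicit mapping between the two model classes referred to in the abstract.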
Abstract:
An amorphous, catechol-based analogue of PEEK ("o-PEEK") has been prepared by a classical step-growth polymerization reaction between catechol and 4,4'-difluorobenzophenone and shown to be readily soluble in a range of organic solvents. Copolymers with p-PEEK have been investigated, including an amorphous 50:50 composition and a semicrystalline though still organic-soluble material comprising 70% p-PEEK. o-PEEK has also been obtained by entropy-driven ring-opening polymerization of the macrocyclic oligomers (MCO's) formed by cyclo-condensation of catechol with 4,4'-difluorobenzophenone under pseudo-high-dilution conditions. The principal products of this latter reaction were the cyclic dimer 3a (20 wt %), cyclic trimer 3b (16%), cyclic tetramer 3c (14%), cyclic pentamer 3d (13%) and cyclic hexamer 3e (12%). Macrocycles 3a-c were isolated as pure compounds by gradient column chromatography, and the structures of the cyclic dimer 3a and cyclic tetramer 3c were analyzed by single-crystal X-ray diffraction. A mixture of MCO's, 3, of similar composition, was obtained by cyclodepolymerization of high molar mass o-PEEK in dilute solution.
Abstract:
Stirring of N-(2-carboxybenzoyl) anthranilic acid with anilines and amines such as p-toluidine, benzylamine, and the methyl esters of Leu, Phe, Ile and Val in the presence of DCC produces N-2-substituted 3-phenyliminoisoindolinones in very good yields. Single-crystal X-ray diffraction studies and solution-phase NMR and CD studies reveal that the 3-phenyliminoisoindolinone moiety is a turn-inducing scaffold which should be useful for reverse-turn mimetics.
Abstract:
Using a combination of Mn-Co transition metal species with N-hydroxyphthalimide as a catalyst, one-step oxidation of cyclohexane with molecular oxygen in acetic acid at 353 K can give more than 95% selectivity towards oxygenated products, with adipic acid as the major product, at a high conversion (ca. 78%). A turnover number of 74 for this partial oxidation is also recorded.
Recovery and purification of surfactin from fermentation broth by a two-step ultrafiltration process
Abstract:
Surfactin is a bacterial lipopeptide produced by Bacillus subtilis and it is a powerful surfactant, having also antiviral, antibacterial and antitumor properties. The recovery and purification of surfactin from complex fermentation broths is a major obstacle to its commercialization; therefore, two-step membrane filtration processes were evaluated using centrifugal and stirred cell devices while the mechanisms of separation were investigated by particle size and surface charge measurements. In a first step of ultrafiltration (UF-1), surfactin was retained effectively by membranes at above its critical micelle concentration (CMC); subsequently in UF-2, the retentate micelles were disrupted by addition of 50% (v/v) methanol solution to allow recovery of surfactin in the permeate. Main protein contaminants were effectively retained by the membrane in UF-2. Ultrafiltration was carried out either using centrifugal devices with 30 and 10 kDa MWCO regenerated cellulose membranes, or a stirred cell device with 10 kDa MWCO polyethersulfone (PES) and regenerated cellulose (RC) membranes. Total rejection of surfactin was consistently observed in UF-1, while in UF-2 PES membranes had the lowest rejection coefficient of 0.08 +/- 0.04. It was found that disruption of surfactin micelles, aggregation of protein contaminants and electrostatic interactions in UF-2 can further improve the selectivity of the membrane-based purification technique.
Abstract:
The convergence speed of the standard Least Mean Square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step-size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step-size LMS algorithm based on the accumulated instantaneous error concept is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step-size parameter of the standard LMS algorithm. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step-size LMS algorithms.
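A generic variable-step-size LMS filter along these lines might look as follows (a sketch under our own assumptions: the abstract does not specify the exact accumulation rule, so the leaky error-power accumulator and the clipping bounds used here are illustrative):

```python
import numpy as np

def vss_lms(x, d, n_taps=4, mu0=0.01, mu_min=1e-4, mu_max=0.1, alpha=0.97):
    """Illustrative variable-step-size LMS: the step size grows with a
    leaky accumulation of the instantaneous squared error, so adaptation
    is fast while the error is large and gentle near convergence."""
    w = np.zeros(n_taps)
    acc = 0.0
    errs = []
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]                 # regressor, most recent first
        e = d[k] - w @ u                          # instantaneous error
        acc = alpha * acc + (1 - alpha) * e * e   # accumulated error power
        mu = float(np.clip(mu0 * (1.0 + acc), mu_min, mu_max))
        w += 2 * mu * e * u                       # LMS weight update
        errs.append(e)
    return w, np.array(errs)
```

On a noiseless system-identification task the filter recovers the unknown taps, with the step size shrinking back toward mu0 as the error decays.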
Abstract:
A signalling procedure is described involving a connection, via the Internet, between the nervous system of an able-bodied individual and a robotic prosthesis, and between the nervous systems of two able-bodied human subjects. Neural implant technology is used to directly interface each nervous system with a computer. Neural motor unit and sensory receptor recordings are processed real-time and used as the communication basis. This is seen as a first step towards thought communication, in which the neural implants would be positioned in the central nervous systems of two individuals.
Abstract:
In this paper, the stability of one-step-ahead predictive controllers based on non-linear models is established. It is shown that, under conditions which can be fulfilled by most industrial plants, the closed-loop system is robustly stable in the presence of plant uncertainties and input–output constraints. There is no requirement that the plant should be open-loop stable, and the analysis is valid for general forms of non-linear system representation, including the case when the problem is constraint-free. The effectiveness of controllers designed according to the algorithm analyzed in this paper is demonstrated on a recognized benchmark problem and on a simulation of a continuous stirred-tank reactor (CSTR). In both examples a radial basis function neural network is employed as the non-linear system model.
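A one-step-ahead predictor of the kind used in such controllers can be sketched with a radial basis function model fitted by least squares (the toy plant, centre selection and basis width below are our illustrative choices, not the benchmark systems of the paper):

```python
import numpy as np

def rbf_features(z, centers, width):
    """Gaussian radial basis functions evaluated at regressor z."""
    d2 = ((z[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)

# Toy non-linear plant: y[k+1] = 0.8*tanh(y[k]) + 0.4*u[k].
u = rng.uniform(-1, 1, 400)
y = np.zeros(401)
for k in range(400):
    y[k + 1] = 0.8 * np.tanh(y[k]) + 0.4 * u[k]

# Fit output weights of the RBF model by linear least squares.
Z = np.column_stack([y[:-1], u])                      # regressors (y[k], u[k])
centers = Z[rng.choice(len(Z), 25, replace=False)]    # centres from the data
Phi = np.stack([rbf_features(z, centers, 0.5) for z in Z])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

def predict(yk, uk):
    """One-step-ahead prediction of the fitted RBF model."""
    return rbf_features(np.array([yk, uk]), centers, 0.5) @ theta
```

The one-step-ahead controller would invert this predictor at each sample to choose the input driving the predicted output toward its setpoint.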
Abstract:
This paper presents a new image data fusion scheme combining median filtering with self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image, (2) pixel clustering for each image using self-organizing feature map neural networks, and (3) fusion of the images obtained in Step (2), which suppresses the residual noise components and thus further improves the image quality. Simulations involving three image sensors (each with a different noise structure) confirm that this three-step combination offers an impressive improvement in effectiveness and performance.
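Step (1) of the scheme can be illustrated with a plain (unweighted) 3x3 median filter; the weighted variant and the SOFM clustering of steps (2)-(3) are omitted, so this is only a sketch of the pre-processing stage:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replaces each pixel with the median of its
    neighbourhood, suppressing impulsive (salt-and-pepper) noise while
    preserving edges better than linear smoothing."""
    padded = np.pad(img, 1, mode="edge")   # replicate borders
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

A single impulsive outlier in an otherwise flat region is removed completely, which is why the median step is chosen over a mean filter for this kind of noise.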
Abstract:
Adaptive least mean square (LMS) filters with or without training sequences, which are known as training-based and blind detectors respectively, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector which achieves the low excess MSE of the training-based detector and the superior convergence performance of the blind detector is proposed.
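The decision-directed idea can be sketched for binary (+/-1) symbols: once the eye is open, the hard decision sign(y) replaces the training symbol in the LMS error (a minimal illustration with an assumed mild-ISI channel; the paper's CDMA detector structure is more involved):

```python
import numpy as np

def dd_lms(x, n_taps=4, mu=0.005):
    """Decision-directed LMS equalizer sketch for +/-1 symbols: the hard
    decision on the filter output serves as the reference signal, so no
    training sequence is needed once decisions are mostly reliable."""
    w = np.zeros(n_taps)
    w[0] = 1.0                                # simple centre-tap initialisation
    decs = []
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]     # regressor, most recent first
        y = w @ u
        d_hat = 1.0 if y >= 0 else -1.0       # hard decision as reference
        e = d_hat - y
        w += mu * e * u
        decs.append(d_hat)
    return w, np.array(decs)
```

Decision-directed operation combines the training-based detector's low excess MSE with blind-style operation, which is the trade-off the proposed detector exploits.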