211 results for data accuracy
Abstract:
Unauthorized access to digital content is a serious threat to information security. We propose an offline oblivious data distribution framework that preserves the sender's security and the receiver's privacy using tamper-proof smart cards. This framework provides persistent content protection against digital piracy and supports private content consumption.
Abstract:
Objective: To illustrate methodological issues involved in estimating dietary trends in populations using data obtained from various sources in Australia in the 1980s and 1990s. Methods: Estimates of absolute and relative change in consumption of selected food items were calculated using national data published annually on the national food supply for 1982-83 to 1992-93 and responses to food frequency questions in two population-based risk factor surveys in 1983 and 1994 in the Hunter Region of New South Wales, Australia. The validity of estimated food quantities obtained from these inexpensive sources at the beginning of the period was assessed by comparison with data from a national dietary survey conducted in 1983 using 24 h recall. Results: Trend estimates from the food supply data and the risk factor survey data were in good agreement for increases in consumption of fresh fruit, vegetables and breakfast food and for decreases in butter, margarine, sugar and alcohol. Estimates of trends in milk, egg and bread consumption, however, were inconsistent. Conclusions: Both data sources can be used for monitoring progress towards national nutrition goals based on selected food items, provided that some limitations are recognized. While data collection methods should be consistent over time, they also need to allow for changes in the food supply (for example, the introduction of new varieties such as low-fat dairy products). From time to time, the trends derived from these inexpensive data sources should be compared with estimates derived from more detailed and quantitative measurements of dietary intake.
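As a rough illustration of the trend calculations described in the Methods above, the short Python sketch below computes absolute and relative change in per-capita consumption between two time points. The food items and figures are invented placeholders, not data from the study.

# Hypothetical per-capita consumption figures (arbitrary units) at the start
# and end of a monitoring period; the items and values are illustrative only.
baseline = {"fresh fruit": 80.0, "butter": 6.0, "whole milk": 110.0}
follow_up = {"fresh fruit": 95.0, "butter": 4.2, "whole milk": 100.0}

def absolute_change(start, end):
    """Difference in consumption between the two time points."""
    return end - start

def relative_change(start, end):
    """Percentage change relative to the baseline value."""
    return 100.0 * (end - start) / start

for item in baseline:
    abs_chg = absolute_change(baseline[item], follow_up[item])
    rel_chg = relative_change(baseline[item], follow_up[item])
    print(f"{item}: {abs_chg:+.1f} units ({rel_chg:+.1f}%)")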
Abstract:
The Edinburgh-Cape Blue Object Survey is a major survey to discover blue stellar objects brighter than B ≈ 18 in the southern sky. It is planned to cover an area of sky of 10 000 deg² with |b| > 30° and δ < 0°. The blue stellar objects are selected by automatic techniques from U and B pairs of UK Schmidt Telescope plates scanned with the COSMOS measuring machine. Follow-up photometry and spectroscopy are being obtained with the SAAO telescopes to classify objects brighter than B = 16.5. This paper describes the survey, the techniques used to extract the blue stellar objects, the photometric methods and accuracy, the spectroscopic classification, and the limits and completeness of the survey.
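The survey's actual selection is automated from COSMOS scans of the photographic plates; as a hedged sketch of the general idea, the Python fragment below keeps objects brighter than the follow-up limit whose U-B colour is bluer than a cut. The colour threshold and the catalogue entries are invented placeholders, not the survey's actual criteria or data.

B_LIMIT = 16.5   # follow-up classification limit quoted in the abstract
UB_CUT = -0.4    # hypothetical U-B colour cut defining "blue" objects

catalogue = [
    # (identifier, U magnitude, B magnitude) -- invented example entries
    ("obj-001", 15.1, 15.6),
    ("obj-002", 16.8, 16.9),
    ("obj-003", 14.0, 14.9),
]

def is_blue_candidate(u_mag, b_mag):
    """Keep objects bright enough for follow-up with a sufficiently blue U-B colour."""
    return b_mag <= B_LIMIT and (u_mag - b_mag) <= UB_CUT

candidates = [name for name, u, b in catalogue if is_blue_candidate(u, b)]
print(candidates)   # -> ['obj-001', 'obj-003']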
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
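The abstract does not spell out the hybridization, so the sketch below is only one plausible reading: a genetic algorithm whose offspring are accepted or rejected with a simulated-annealing (Metropolis) rule under a decreasing temperature. The test model, parameter ranges, population size and cooling schedule are all invented placeholders.

import math
import random

random.seed(0)

# Synthetic data generated from a known model, mimicking the paper's use of
# computer-generated test data (the model and true parameters are placeholders).
TRUE_A, TRUE_B = 2.0, 0.5
xs = [0.1 * i for i in range(50)]
ys = [TRUE_A * math.exp(-TRUE_B * x) for x in xs]

def cost(params):
    """Sum of squared residuals between the model and the synthetic data."""
    a, b = params
    return sum((a * math.exp(-b * x) - y) ** 2 for x, y in zip(xs, ys))

def crossover(p1, p2):
    """Pick each parameter from one of the two parents."""
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(params, scale):
    """Gaussian perturbation of each parameter."""
    return [p + random.gauss(0.0, scale) for p in params]

# Offspring that are worse than their parent are still accepted with a
# Metropolis probability that shrinks as the temperature is lowered.
population = [[random.uniform(0.0, 5.0), random.uniform(0.0, 2.0)] for _ in range(30)]
temperature = 1.0
for generation in range(200):
    next_population = []
    for parent in population:
        child = mutate(crossover(parent, random.choice(population)), scale=0.1)
        delta = cost(child) - cost(parent)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            next_population.append(child)   # accept improvement or uphill move
        else:
            next_population.append(parent)  # otherwise keep the parent
    population = next_population
    temperature *= 0.98                     # geometric cooling schedule

best = min(population, key=cost)
print("estimated parameters:", best)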
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within- and between-sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
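The maximum likelihood machinery itself is not given in the abstract; the Python sketch below only illustrates the underlying problem and uses a simple moment-based stand-in for the correction. Observed site means carry sampling error, which attenuates the naive Pearson correlation; subtracting the known sampling variance from the between-site variances recovers an approximately unbiased estimate. All numbers (site counts, sample sizes, variances) are invented placeholders.

import math
import random
import statistics as stats

random.seed(1)

N_SITES, N_SUBJECTS, TRUE_RHO = 50, 20, 0.8
WITHIN_SD = 2.0   # subject-level (within-site) standard deviation

# Each site has true means (mx, my) correlated across sites, but we observe
# them only through sample means carrying sampling variance WITHIN_SD**2 / n.
obs_x, obs_y = [], []
for _ in range(N_SITES):
    mx = random.gauss(0.0, 1.0)
    my = TRUE_RHO * mx + math.sqrt(1.0 - TRUE_RHO ** 2) * random.gauss(0.0, 1.0)
    obs_x.append(mx + random.gauss(0.0, WITHIN_SD / math.sqrt(N_SUBJECTS)))
    obs_y.append(my + random.gauss(0.0, WITHIN_SD / math.sqrt(N_SUBJECTS)))

# Naive Pearson correlation of the observed site means is biased towards zero.
naive_r = stats.correlation(obs_x, obs_y)

# Moment correction: remove the sampling variance of each mean from the
# observed between-site variances before forming the correlation.
sampling_var = WITHIN_SD ** 2 / N_SUBJECTS
corrected_r = stats.covariance(obs_x, obs_y) / math.sqrt(
    (stats.variance(obs_x) - sampling_var) * (stats.variance(obs_y) - sampling_var)
)

print(f"naive r = {naive_r:.3f}, corrected r = {corrected_r:.3f}")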
Abstract:
The use of cell numbers rather than mass to quantify the size of the biotic phase in animal cell cultures causes several problems. First, cell size varies with growth conditions, so yields expressed in terms of cell numbers cannot be used in the normal mass balance sense. Second, experience from microbial systems shows that cell number dynamics lag behind biomass dynamics. This work demonstrates that this lag phenomenon also occurs in animal cell culture. Both the lag phenomenon and the variation in cell size are explained using a simple model of the cell cycle. The basis for the model is that onset of DNA synthesis requires accumulation of G1 cyclins to a prescribed level. This requirement is translated into a requirement for a cell to reach a critical size before commencement of DNA synthesis. A slower-growing cell will spend more time in G1 before reaching the critical mass. In contrast, the period between onset of DNA synthesis and mitosis, τ_B, is fixed. The two parameters in the model, the critical size and τ_B, were determined from eight steady-state measurements of mean cell size in a continuous hybridoma culture. Using these parameters, it was possible to predict with reasonable accuracy the transient behavior in a separate shift-up culture, i.e., a culture where cells were transferred from a lean environment to a rich environment. The implications for analyzing experimental data for animal cell culture are discussed. (C) 1997 John Wiley & Sons, Inc.
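A minimal sketch of the two-parameter model described above, assuming exponential single-cell growth: a newborn cell stays in G1 until it reaches the critical size, and the remainder of the cycle takes the fixed time τ_B. The critical size, τ_B, birth size and growth rates below are invented placeholders, not the values fitted in the paper.

import math

CRITICAL_SIZE = 2.0   # size at which DNA synthesis may begin (arbitrary units)
TAU_B = 10.0          # hours from onset of DNA synthesis to mitosis
BIRTH_SIZE = 1.2      # size of a newborn cell (arbitrary units)

def cycle_time(specific_growth_rate):
    """Total cell cycle time under exponential single-cell growth.

    G1 lasts until the cell grows from its birth size to the critical size;
    the rest of the cycle (S + G2 + M) takes the fixed time TAU_B.
    """
    g1 = math.log(CRITICAL_SIZE / BIRTH_SIZE) / specific_growth_rate
    return g1 + TAU_B

# Slower-growing cells spend longer in G1, so the whole cycle lengthens,
# which is what produces the lag of cell numbers behind biomass.
for mu in (0.02, 0.04, 0.08):   # per-hour specific growth rates
    print(f"mu = {mu:.2f}/h -> cycle time = {cycle_time(mu):.1f} h")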
Abstract:
In order to separate the effects of experience from other characteristics of word frequency (e.g., orthographic distinctiveness), computer science and psychology students rated their experience with computer science technical items and nontechnical items from a wide range of word frequencies prior to being tested for recognition memory of the rated items. For nontechnical items, there was a curvilinear relationship between recognition accuracy and word frequency for both groups of students. The usual superiority of low-frequency words was demonstrated and high-frequency words were recognized least well. For technical items, a similar curvilinear relationship was evident for the psychology students, but for the computer science students, recognition accuracy was inversely related to word frequency. The ratings data showed that subjective experience rather than background word frequency was the better predictor of recognition accuracy.
Abstract:
Both hysterectomy and tubal sterilisation offer significant protection from ovarian cancer, and the risk of cardiovascular disease in women is lowered after hysterectomy. Since little is known about the accuracy of women's self-reports of these procedures, we assessed their reliability and validity using data obtained in a case-control study of ovarian cancer. There was 100 per cent repeatability for both positive and negative histories of hysterectomy and tubal sterilisation among a small sample of women on reinterview. Verification of surgery was sought against surgeons' or medical records, or if these were unavailable, from randomly selected current general practitioners for 51 cases and 155 controls reporting a hysterectomy and 73 cases and 137 controls reporting a tubal sterilisation. Validation rate for self-reported hysterectomy against medical reports (32 cases, 96 controls) was 96 per cent (95 per cent confidence interval (CI) 91 to 99) and for tubal sterilisation (32 cases, 77 controls) it was 88 per cent (CI 81 to 93), which is likely to be an underestimate. Although findings are based on small numbers of women for whom medical reports could be ascertained, they are consistent with other findings that suggest women have good recall of past histories of hysterectomy and tubal sterilisation; this allows long-term effects of these procedures to be studied with reasonable accuracy from self-reports.
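As an illustration of how a validation rate and its 95 per cent confidence interval can be computed from verification counts, the Python sketch below uses a Wilson score interval; the counts are hypothetical and the interval method is a generic choice, not necessarily the one used in the study.

import math

def wilson_interval(successes, total, z=1.96):
    """Wilson score confidence interval for a proportion."""
    p = successes / total
    denom = 1.0 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half_width = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half_width, centre + half_width

# Hypothetical counts: self-reports confirmed against medical records out of
# all reports for which records could be obtained.
confirmed, verified = 123, 128
low, high = wilson_interval(confirmed, verified)
print(f"validation rate = {100 * confirmed / verified:.0f}% "
      f"(95% CI {100 * low:.0f} to {100 * high:.0f})")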