945 results for "Estimator standard error and efficiency"
Abstract:
Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of the model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes of the glyoxalase pathway (of importance in post-translational modification of proteins in cataract) and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points across the substrate range is critical; it is not simply a matter of an even spread or multiple increases. At least 60% of the points must lie below the KM (or KM values, if there is more than one dissociation constant) and 40% above. This choice halves the variance obtained with a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting design features such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
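The effect of design-point placement on parameter variance can be illustrated with a small numerical sketch. The snippet below (an illustration under assumed values, not the authors' code) computes asymptotic standard errors of Michaelis-Menten parameters from the Fisher information matrix for an even substrate spread and for a design with roughly 60% of the points at or below KM.

```python
import numpy as np

def mm_jacobian(S, Vmax, KM):
    """Partial derivatives of v = Vmax*S/(KM + S) with respect to (Vmax, KM)."""
    dv_dVmax = S / (KM + S)
    dv_dKM = -Vmax * S / (KM + S) ** 2
    return np.column_stack([dv_dVmax, dv_dKM])

def standard_errors(S, Vmax=1.0, KM=1.0, sigma=0.05):
    """Asymptotic standard errors of (Vmax, KM) for design points S (assumed values)."""
    J = mm_jacobian(np.asarray(S, dtype=float), Vmax, KM)
    cov = sigma ** 2 * np.linalg.inv(J.T @ J)   # asymptotic covariance matrix
    return np.sqrt(np.diag(cov))

even_design = np.linspace(0.2, 5.0, 10)                     # even spread across the range
skewed_design = np.concatenate([np.linspace(0.2, 1.0, 6),   # ~60% of points at or below KM
                                np.linspace(1.5, 5.0, 4)])  # ~40% above KM

print("even spread  SE(Vmax), SE(KM):", standard_errors(even_design))
print("60/40 design SE(Vmax), SE(KM):", standard_errors(skewed_design))
```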
Abstract:
One of the enablers for new consumer electronics based products to be accepted into the market is the availability of inexpensive, flexible and multi-standard chipsets and services. DVB-T, the principal standard for terrestrial broadcast of digital video in Europe, has been so successful that governments are reconsidering their targets for analogue television broadcast switch-off. To enable one further small step in creating increasingly cost-effective chipsets, the OFDM deterministic equalizer has been presented before with its application to DVB-T. This paper discusses the test set-up of a DVB-T compliant baseband simulation that includes the deterministic equalizer and DVB-T standard propagation channels. This is followed by a presentation of the inner and outer Bit Error Rate (BER) results obtained with various modulation levels, coding rates and propagation channels, in order to ascertain the actual performance of the deterministic equalizer (1).
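As a rough indication of what an inner BER measurement in such a baseband simulation involves, the sketch below counts bit errors for Gray-mapped QPSK over an AWGN channel; the deterministic equalizer, channel coding and DVB-T propagation channels from the paper are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
ebn0_db = 6.0                        # assumed operating point

bits = rng.integers(0, 2, n_bits)
# Gray-mapped QPSK with unit symbol energy (2 bits per symbol)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(1 / (4 * ebn0))  # per real dimension, since Es = 1 and Eb = Es/2
rx = symbols + noise_std * (rng.standard_normal(symbols.size)
                            + 1j * rng.standard_normal(symbols.size))

bits_hat = np.empty(n_bits, dtype=int)
bits_hat[0::2] = (rx.real < 0).astype(int)
bits_hat[1::2] = (rx.imag < 0).astype(int)
print(f"uncoded BER at Eb/N0 = {ebn0_db} dB:", np.mean(bits != bits_hat))
```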
Abstract:
This paper reports the findings of a small-scale research project which investigated the levels of awareness and knowledge of written standard English among 10- and 11-year-old children in two English primary schools. The project involved repeating in 2010 a written questionnaire previously used with children in the same schools in three separate surveys in 1999, 2002 and 2005. Data from the latest survey are compared with those from the previous three. The analysis seeks to identify any changes over time in children's ability to recognise non-standard forms and supply standard English alternatives, as well as their ability to use technical terms related to language variation. Differences between the performance of boys and girls and that of the two schools are also analysed. The paper concludes that the socio-economic context of the schools may be a more important factor than gender in the variations over time identified in the data.
Abstract:
The relationship between valuations and the subsequent sale price continues to be a matter of both theoretical and practical interest. This paper reports the analysis of over 700 property sales made during the 1974-90 period. Initial results imply an average under-valuation of 7% and a standard error of 18% across the sample. A number of techniques are applied to the data set, using other variables such as the region, the type of property and the return from the market, to explain the difference between the valuation and the subsequent sale price. The analysis reduces the unexplained error; the bias is fully accounted for and the standard error is reduced to 15.3%. This model finds that about 6% of valuations over-estimated the sale price by more than 20% and about 9% under-estimated the sale price by more than 20%. The results suggest that valuations are marginally more accurate than might be expected, both from theoretical considerations and from comparison with equivalent valuations in equity markets.
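A hedged sketch of the kind of calculation behind the quoted figures follows: the bias and standard error of percentage valuation errors, and the residual standard error after regressing the error on explanatory variables. The data and the single covariate are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 700
region = rng.integers(0, 4, n)                       # stand-in categorical covariate
region_effect = np.array([0.02, -0.03, 0.05, -0.01])[region]
error = 0.07 + region_effect + rng.normal(0, 0.16, n)  # (price - valuation) / valuation

bias, se = error.mean(), error.std(ddof=1)
print(f"raw bias = {bias:.3f}, standard error = {se:.3f}")

# explain part of the error with covariates (here: region dummies via least squares)
Xd = np.column_stack([region == k for k in range(4)]).astype(float)
beta, *_ = np.linalg.lstsq(Xd, error, rcond=None)
resid = error - Xd @ beta
print(f"residual standard error after regression = {resid.std(ddof=1):.3f}")

over = np.mean(error < -0.20)   # valuation exceeded the sale price by more than 20%
under = np.mean(error > 0.20)   # valuation fell short of the sale price by more than 20%
print(f"over-valued by >20%: {100*over:.1f}%, under-valued by >20%: {100*under:.1f}%")
```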
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
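A minimal sketch of a stochastic EnKF analysis step (with an invented toy state and observation, not the paper's balanced model) helps show why balance in the analysis depends on the forecast ensemble: the forecast covariance entering the Kalman gain is built entirely from the ensemble anomalies.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X_f, y, H, R):
    """X_f: n x m forecast ensemble, y: p-vector of obs, H: p x n operator, R: p x p obs cov."""
    n, m = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_HT = A @ (H @ A).T / (m - 1)                     # P_f H^T estimated from the ensemble
    S = H @ P_HT + R                                   # innovation covariance
    K = P_HT @ np.linalg.inv(S)                        # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T  # perturbed obs
    return X_f + K @ (Y_pert - H @ X_f)                # analysis ensemble

# Toy example: 3-variable state (one slow + two fast components), 20 members, 1 observation.
X_f = rng.standard_normal((3, 20))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
X_a = enkf_analysis(X_f, np.array([0.5]), H, R)
print("analysis ensemble mean:", X_a.mean(axis=1))
```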
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of auditory system behaviour, though its interpretation is subjective because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time at which these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of the ABR. Methods: A total of 160 ABR data samples were collected, at four different stimulus intensities (80 dBHL, 60 dBHL, 40 dBHL and 20 dBHL), from 10 normal-hearing subjects (5 men and 5 women, aged 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error of the bias, which is the difference between examiners' annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners' annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners. Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
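The Bland-Altman quantities reported (the bias, its standard deviation and error, and the limits of agreement) can be computed as in the sketch below; the latency values are invented for illustration.

```python
import numpy as np

examiner_a = np.array([5.60, 5.72, 5.55, 5.80, 5.64, 5.70])  # wave latencies (ms), examiner A
examiner_b = np.array([5.58, 5.75, 5.50, 5.86, 5.66, 5.71])  # same waves annotated by examiner B

diff = examiner_a - examiner_b
bias = diff.mean()                          # Bland-Altman bias: mean paired difference
sd = diff.std(ddof=1)
se_bias = sd / np.sqrt(len(diff))           # standard error of the bias
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.3f} ms, SD = {sd:.3f} ms, SE = {se_bias:.3f} ms")
print(f"limits of agreement: [{loa[0]:.3f}, {loa[1]:.3f}] ms")
```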
Abstract:
The Code for Sustainable Homes (the Code) will require new homes in the United Kingdom to be ‘zero carbon’ from 2016. Drawing upon an evolutionary innovation perspective, this paper addresses a gap in the literature by investigating which low and zero carbon technologies are actually being used by house builders, rather than the prevailing emphasis on the potentiality of these technologies. Using the results from a questionnaire, three empirical contributions are made. First, house builders are selecting a narrow range of technologies. Second, these choices are made to minimise the disruption to their standard design and production templates (SDPTs). Finally, the coalescence around a small group of technologies is expected to intensify, with solar-based technologies predicted to become more important. This paper challenges the dominant technical rationality in the literature that technical efficiency and cost benefits are the primary drivers for technology selection. These drivers play an important role, but one that is mediated by the logic of maintaining the SDPTs of the house builders. This emphasises the need for construction diffusion of innovation theory to be problematized and developed within the context of business and market regimes constrained and reproduced by resilient technological trajectories.
Abstract:
Aerosols affect the Earth's energy budget directly by scattering and absorbing radiation and indirectly by acting as cloud condensation nuclei and, thereby, affecting cloud properties. However, large uncertainties exist in current estimates of aerosol forcing because of incomplete knowledge concerning the distribution and the physical and chemical properties of aerosols as well as aerosol-cloud interactions. In recent years, a great deal of effort has gone into improving measurements and datasets. It is thus feasible to shift the estimates of aerosol forcing from largely model-based to increasingly measurement-based. Our goal is to assess current observational capabilities and identify uncertainties in the aerosol direct forcing through comparisons of different methods with independent sources of uncertainties. Here we assess the aerosol optical depth (τ), the direct radiative effect (DRE) by natural and anthropogenic aerosols, and the direct climate forcing (DCF) by anthropogenic aerosols, focusing on satellite and ground-based measurements supplemented by global chemical transport model (CTM) simulations. The multi-spectral MODIS measures global distributions of aerosol optical depth (τ) on a daily scale, with a high accuracy of ±0.03 ± 0.05τ over ocean. The annual average τ is about 0.14 over the global ocean, of which about 21% ± 7% is contributed by human activities, as estimated by the MODIS fine-mode fraction. The multi-angle MISR derives an annual average AOD of 0.23 over global land with an uncertainty of ~20% or ±0.05. These high-accuracy aerosol products and broadband flux measurements from CERES make it feasible to obtain observational constraints for the aerosol direct effect, especially over the global ocean. A number of measurement-based approaches estimate the clear-sky DRE (on solar radiation) at the top-of-atmosphere (TOA) to be about -5.5 ± 0.2 W m-2 (median ± standard error from various methods) over the global ocean. Accounting for thin cirrus contamination of the satellite-derived aerosol field reduces the TOA DRE to -5.0 W m-2. Because of a lack of measurements of aerosol absorption and difficulty in characterizing land surface reflection, estimates of DRE over land and at the ocean surface are currently realized through a combination of satellite retrievals, surface measurements, and model simulations, and are less constrained. Over the oceans the surface DRE is estimated to be -8.8 ± 0.7 W m-2. Over land, an integration of satellite retrievals and model simulations derives a DRE of -4.9 ± 0.7 W m-2 and -11.8 ± 1.9 W m-2 at the TOA and surface, respectively. CTM simulations produce a wide range of DRE estimates that on average are smaller than the measurement-based DRE by about 30-40%, even after accounting for thin cirrus and cloud contamination. A number of issues remain. Current estimates of the aerosol direct effect over land are poorly constrained. Uncertainties of DRE estimates are also larger on regional scales than on the global scale, and large discrepancies exist between different approaches. The characterization of aerosol absorption and vertical distribution remains challenging. The aerosol direct effect in the thermal infrared range and in cloudy conditions remains relatively unexplored and quite uncertain, because of a lack of global, systematic measurements of aerosol vertical profiles. A coordinated research strategy needs to be developed for the integration and assimilation of satellite measurements into models to constrain model simulations. Enhanced measurement capabilities in the next few years and high-level scientific cooperation will further advance our knowledge.
Abstract:
Two different TAMSAT (Tropical Applications of Meteorological Satellites) methods of rainfall estimation were developed for northern and southern Africa, based on Meteosat images. These two methods were used to make rainfall estimates for the southern rainy season from October 1995 to April 1996. Estimates produced by both TAMSAT methods and estimates produced by the CPC (Climate Prediction Center) method were then compared with kriged data from over 800 raingauges in southern Africa. This shows that operational TAMSAT estimates are better over plateau regions, with 59% of estimates within one standard error (s.e.) of the kriged rainfall. Over mountainous regions the CPC approach is generally better, although all methods underestimate and give only 40% of estimates within 1 s.e. The two TAMSAT methods show little difference across a whole season, but examined in detail the northern method gives unsatisfactory calibrations. The CPC method does achieve significant overall improvements by incorporating real-time raingauge data, but only where sufficient raingauges are available.
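The validation metric quoted, the percentage of estimates falling within one standard error of the kriged rainfall, can be computed as in the sketch below; all rainfall values and error models are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 800
kriged_rain = rng.gamma(2.0, 30.0, size=n)                  # kriged gauge rainfall (mm)
kriged_se = 0.25 * kriged_rain + 5.0                        # kriging standard error (mm)
satellite_est = kriged_rain + rng.normal(0, 20.0, size=n)   # satellite rainfall estimate (mm)

within_1se = np.abs(satellite_est - kriged_rain) <= kriged_se
print(f"{100 * within_1se.mean():.1f}% of estimates within 1 s.e. of the kriged rainfall")
```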
Abstract:
The Lincoln–Petersen estimator is one of the most popular estimators used in capture–recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source and not identified by any of the two sources, respectively. However, f00 is unobserved so that the 2 × 2 table is incomplete and the Lincoln–Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation for which one source provides not only a binary identification outcome but also a count outcome of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln–Petersen’s, in terms of bias and efficiency. It is possible to test the homogeneity assumption that is not testable in the Lincoln–Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey.
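For orientation, the sketch below contrasts the classical Lincoln-Petersen estimate from the incomplete 2 × 2 table with a simple truncated-Poisson (Zelterman-type) population-size estimate built from counts of units identified once or twice; the frequencies are invented, and this is a generic illustration, not the estimator derived in the paper.

```python
import numpy as np

# Two-source 2 x 2 table (f00 unobserved)
f11, f10, f01 = 60, 140, 110
lp_f00 = f10 * f01 / f11                 # Lincoln-Petersen estimate of f00
lp_N = f11 + f10 + f01 + lp_f00
print(f"Lincoln-Petersen: N_hat = {lp_N:.0f}")

# Count source: units identified exactly once (f1) or exactly twice (f2)
f1, f2 = 220, 90
n_observed = f1 + f2
lam_hat = 2 * f2 / f1                    # Poisson rate estimated from the {1, 2}-truncated counts
p0_hat = np.exp(-lam_hat)                # implied probability of never being identified
tp_N = n_observed / (1 - p0_hat)         # Horvitz-Thompson-style population size estimate
print(f"Truncated Poisson: lambda_hat = {lam_hat:.2f}, N_hat = {tp_N:.0f}")
```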
Abstract:
Systematic review (SR) is a rigorous, protocol-driven approach designed to minimise error and bias when summarising the body of research evidence relevant to a specific scientific question. Taking as a comparator the use of SR in synthesising research in healthcare, we argue that SR methods could also pave the way for a “step change” in the transparency, objectivity and communication of chemical risk assessments (CRA) in Europe and elsewhere. We suggest that current controversies around the safety of certain chemicals are partly due to limitations in current CRA procedures which have contributed to ambiguity about the health risks posed by these substances. We present an overview of how SR methods can be applied to the assessment of risks from chemicals, and indicate how challenges in adapting SR methods from healthcare research to the CRA context might be overcome. Regarding the latter, we report the outcomes from a workshop exploring how to increase uptake of SR methods, attended by experts representing a wide range of fields related to chemical toxicology, risk analysis and SR. Priorities which were identified include: the conduct of CRA-focused prototype SRs; the development of a recognised standard of reporting and conduct for SRs in toxicology and CRA; and establishing a network to facilitate research, communication and training in SR methods. We see this paper as a milestone in the creation of a research climate that fosters communication between experts in CRA and SR and facilitates wider uptake of SR methods into CRA.
Abstract:
Background and Aims: Phosphate (Pi) is one of the most limiting nutrients for agricultural production in Brazilian soils due to low soil Pi concentrations and rapid fixation of fertilizer Pi by adsorption to oxidic minerals and/or precipitation by iron and aluminum ions. The objectives of this study were to quantify phosphorus (P) uptake and use efficiency in cultivars of the species Coffea arabica L. and Coffea canephora L., and group them in terms of efficiency and response to Pi availability. Methods: Plants of 21 cultivars of C. arabica and four cultivars of C. canephora were grown under contrasting soil Pi availabilities. Biomass accumulation, tissue P concentration and accumulation and efficiency indices for P use were measured. Key Results: Coffee plant growth was significantly reduced under low Pi availability, and P concentration was higher in cultivars of C. canephora. The young leaves accumulated more P than any other tissue. The cultivars of C. canephora had a higher root/shoot ratio and were significantly more efficient in P uptake, while the cultivars of C. arabica were more efficient in P utilization. Agronomic P use efficiency varied among coffee cultivars and E16 Shoa, E22 Sidamo, Iêmen and Acaiá cultivars were classified as the most efficient and responsive to Pi supply. A positive correlation between P uptake efficiency and root to shoot ratio was observed across all cultivars at low Pi supply. These data identify Coffea genotypes better adapted to low soil Pi availabilities, and the traits that contribute to improved P uptake and use efficiency. These data could be used to select current genotypes with improved P uptake or utilization efficiencies for use on soils with low Pi availability and also provide potential breeding material and targets for breeding new cultivars better adapted to the low Pi status of Brazilian soils. This could ultimately reduce the use of Pi fertilizers in tropical soils, and contribute to more sustainable coffee production.
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the model to a dimensional measurement method in which testicular volume is calculated with a caliper and calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular, when the skewness parameter is near zero, leaves the parameter of interest unchanged. Model fitting is implemented, and the choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
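The model-comparison step can be sketched as follows: given the maximised log-likelihoods of the usual calibration model and the skew-normal one, compute AIC, BIC and the Hannan-Quinn criterion. The log-likelihood values, parameter counts and sample size below are placeholders, not results from the paper.

```python
import numpy as np

def information_criteria(loglik, k, n):
    """Return (AIC, BIC, HQC) for a fitted model with k parameters and n observations."""
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * np.log(n)
    hqc = -2 * loglik + 2 * k * np.log(np.log(n))
    return aic, bic, hqc

n = 120  # assumed sample size
models = {
    "normal-error calibration":      {"loglik": -185.4, "k": 3},
    "skew-normal-error calibration": {"loglik": -181.0, "k": 4},
}
for name, m in models.items():
    aic, bic, hqc = information_criteria(m["loglik"], m["k"], n)
    print(f"{name}: AIC={aic:.1f}  BIC={bic:.1f}  HQC={hqc:.1f}")
```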
Abstract:
Kraft pulp is currently bleached largely by the elemental chlorine free (ECF) technology, with oxygen, chlorine dioxide, and hydrogen peroxide as active agents. This technology brought about significant environmental improvements relative to standard processes based on chlorine gas and hypochlorite, but there is still need for further improvement. This study presents a novel environmentally friendly bleaching stage, the so-called 'hydrogen peroxide in supercritical carbon dioxide' stage, P(SC-CO2), that can be adapted to current ECF bleaching processes, preferably in cases where hydrogen peroxide is already used. In this study, the P(SC-CO2) stage was evaluated as a replacement for the last peroxide stage of the D(EP)DP bleaching sequence and for the first peroxide stage of the same sequence, for an oxygen-delignified eucalypt kraft-O2 pulp. The P(SC-CO2) stage was run with 0.5% hydrogen peroxide, at 15% consistency, 70 °C, and 73 bar; the reaction time was 30 min. The performances of regular P stages and the new P(SC-CO2) stage were compared. Promising results were observed with the DEP(SC-CO2)DP sequence; the P(SC-CO2) stage decreased the kappa number from 2.7 to 2.1 and the hexenuronic acid groups from 17.0 to 12.4 mmol kg-1. The P(SC-CO2) stage showed poor performance when applied in the D(EP)DP(SC-CO2) sequence. It is concluded that the process has potential but requires further optimization to improve selectivity and efficiency.
Abstract:
This paper presents semiparametric estimators of changes in inequality measures of a dependent variable distribution, taking into account possible changes in the distributions of covariates. When we do not impose parametric assumptions on the conditional distribution of the dependent variable given the covariates, this problem becomes equivalent to estimating the distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first non-parametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. In the second step, the inverse probability weighting method is used to estimate parameters of the marginal distributions of the potential outcomes, and weighted sample versions of the inequality measures are computed. Root-N consistency, asymptotic normality and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the finite-sample behavior of the estimators derived in the paper. We also apply our method to the evaluation of a job training program.
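A compact sketch of the two-step procedure on synthetic data follows (not the paper's application, and with a plain logistic regression standing in for the nonparametric propensity-score step); the inequality measure used here is the Gini coefficient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_gini(y, w):
    """Gini coefficient of y under weights w (trapezoidal Lorenz-curve formula)."""
    order = np.argsort(y)
    y, w = np.asarray(y, float)[order], np.asarray(w, float)[order]
    w = w / w.sum()
    cum = np.cumsum(w * y)
    prev = np.concatenate([[0.0], cum[:-1]])
    return 1.0 - np.sum(w * (cum + prev)) / cum[-1]

rng = np.random.default_rng(3)
n = 5_000
X = rng.standard_normal((n, 2))                                     # observed covariates
p_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
D = rng.binomial(1, p_true)                                         # selection on observables
Y = np.exp(1.0 + 0.4 * X[:, 0] + 0.3 * D + rng.normal(0, 0.5, n))   # positive outcome

p_hat = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]       # step 1: propensity score
w1 = D / p_hat                                                      # step 2: IPW weights for Y(1) ...
w0 = (1 - D) / (1 - p_hat)                                          # ... and for Y(0)

ite_gini = weighted_gini(Y[D == 1], w1[D == 1]) - weighted_gini(Y[D == 0], w0[D == 0])
print(f"Inequality (Gini) treatment effect estimate: {ite_gini:.4f}")
```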