948 results for best estimate method


Relevance: 30.00%

Abstract:

To more precisely formulate feed and predict animal performance, it is important to base both the recommendations and feed formulations on digestible rather than total amino acid contents. Most published data on the digestibility of amino acids in feed ingredients for poultry are based on excreta digestibility. Ileal digestibility is an alternative and preferred approach to estimate amino acid availability in feed ingredients. Both methodologies are described and assessed. In addition, the differences between apparent and standardised (in which corrections are made for basal endogenous losses) digestible amino acid systems are discussed. The concept of a standardised digestibility system as a means of overcoming the limitations of apparent digestibility estimates is proposed. In this context, different methodologies for the determination of basal endogenous amino acid losses are discussed. Although each methodology suffers from some limitations and published data on endogenous losses at the ileal level in growing poultry are limited, averaged data from repeated experiments using the 'enzymatically hydrolysed casein' method are considered the best measure of basal losses. Standardised ileal amino acid digestibility values of 17 feed ingredients commonly used in broiler nutrition are presented, including grains (barley, corn, sorghum, triticale, wheat), grain by-products (wheat middlings, rice pollard), plant protein sources (soybean meal, canola meal, corn gluten meal, cottonseed meal, lupins, peas/beans, sunflower meal), and animal by-products (feather meal, fish meal, meat and bone meal). This comprehensive set of ileal amino acid digestibility values for feed ingredients in broiler nutrition may serve as a basis for the establishment of the system in broiler feeding and for further research.
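The apparent-to-standardised correction described above amounts to adding the basal endogenous loss, scaled by the dietary amino acid content, back onto the apparent digestibility. A minimal sketch in Python (the numbers are illustrative, not values from the paper):

```python
def standardised_digestibility(aid_pct, basal_loss_g_per_kg_dmi, aa_content_g_per_kg_dm):
    """Correct an apparent ileal digestibility (AID, %) for basal endogenous
    amino acid losses to give a standardised ileal digestibility (SID, %):

        SID = AID + (basal endogenous loss / dietary AA content) * 100
    """
    return aid_pct + (basal_loss_g_per_kg_dmi / aa_content_g_per_kg_dm) * 100.0

# Hypothetical numbers: lysine AID of 82% in a diet containing 10 g lysine/kg DM,
# with a basal endogenous loss of 0.4 g/kg of dry matter intake
sid = standardised_digestibility(82.0, 0.4, 10.0)
print(round(sid, 1))  # 86.0
```

Because the correction is additive, standardised values are, by construction, higher than the corresponding apparent values, and the difference is largest for amino acids present at low dietary concentrations.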

Relevance: 30.00%

Abstract:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054] J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias.
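The finite-sample bias in the estimated speed of mean reversion is easy to reproduce. The sketch below uses a discretised Vasicek model and a simple regression-based moment estimator rather than the paper's full GMM machinery, purely to illustrate the effect; all parameter values are hypothetical:

```python
import random

def simulate_vasicek(kappa, theta, sigma, r0, dt, n, rng):
    """Euler-discretised Vasicek short rate: dr = kappa*(theta - r)*dt + sigma*dW."""
    r = [r0]
    for _ in range(n):
        shock = rng.gauss(0.0, 1.0) * (dt ** 0.5)
        r.append(r[-1] + kappa * (theta - r[-1]) * dt + sigma * shock)
    return r

def estimate_kappa(r, dt):
    """OLS of r_{t+1} on r_t; mean-reversion speed kappa = (1 - slope)/dt."""
    x, y = r[:-1], r[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return (1.0 - sxy / sxx) / dt

rng = random.Random(0)
true_kappa, reps = 0.2, 300
# 10 years of monthly data, a sample size typical of the short-rate literature
ests = [estimate_kappa(simulate_vasicek(true_kappa, 0.05, 0.01, 0.05, 1 / 12, 120, rng), 1 / 12)
        for _ in range(reps)]
mean_est = sum(ests) / reps
print(round(mean_est, 2))  # well above the true value of 0.2
```

The average estimate substantially overstates the true speed of mean reversion, mirroring the paper's conclusion that strong claims about mean reversion should not rest on such estimates.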

Relevance: 30.00%

Abstract:

The measurement of lifetime prevalence of depression in cross-sectional surveys is biased by recall problems. We estimated it indirectly for two countries using modelling, and quantified the underestimation in the empirical estimate for one. A microsimulation model was used to generate population-based epidemiological measures of depression. We fitted the model to 1- and 12-month prevalence data from the Netherlands Mental Health Survey and Incidence Study (NEMESIS) and the Australian Adult Mental Health and Wellbeing Survey. The lowest proportion of cases ever having an episode in their life is 30% of men and 40% of women, for both countries. This corresponds to a lifetime prevalence of 20% and 30%, respectively, in a cross-sectional setting (ages 15-65). The NEMESIS data were 38% lower than these estimates. We conclude that modelling enabled us to estimate lifetime prevalence of depression indirectly. This method is useful in the absence of direct measurement, but also showed that direct estimates are biased downward by recall problems and by the cross-sectional setting.
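The gap between a cross-sectional "lifetime" prevalence and the proportion ever affected by the end of the age range can be illustrated with a deliberately simplified illness model (constant annual incidence, uniform age distribution, no mortality and no recall bias); the numbers below are illustrative, not the paper's estimates:

```python
def ever_depressed_by_age(age, annual_incidence, onset_age=15):
    """P(at least one episode by `age`) under a constant annual incidence
    among those never affected (a deliberately simplified illness model)."""
    years_at_risk = max(0, age - onset_age)
    return 1.0 - (1.0 - annual_incidence) ** years_at_risk

h = 0.01  # hypothetical 1% annual incidence of a first episode
# Proportion ever affected by the end of the age range (age 65)
endpoint = ever_depressed_by_age(65, h)
# What a cross-sectional survey of 15-65 year olds would report as "lifetime
# prevalence": the average over a uniform age distribution
cross_sectional = sum(ever_depressed_by_age(a, h) for a in range(15, 66)) / 51
print(round(endpoint, 3), round(cross_sectional, 3))  # 0.395 0.214
```

Even with perfect recall, the cross-sectional figure is well below the proportion ever affected, simply because younger respondents have had less time at risk; recall bias then lowers the empirical estimate further.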

Relevance: 30.00%

Abstract:

In the English literature, facial approximation methods have been commonly classified into three types: Russian, American, or Combination. These categorizations are based on the protocols used, for example, whether methods use average soft-tissue depths (American methods) or require face muscle construction (Russian methods). However, literature searches outside the usual realm of English publications reveal key papers that demonstrate that the Russian category above has been founded on distorted views. In reality, Russian methods are based on limited face muscle construction, with heavy reliance on modified average soft-tissue depths. A closer inspection of the American method also reveals inconsistencies with the recognized classification scheme. This investigation thus demonstrates that all major methods of facial approximation depend on both face anatomy and average soft-tissue depths, rendering common method classification schemes redundant. The best way forward appears to be for practitioners to describe the methods they use (including the weight each one gives to average soft-tissue depths and deep face tissue construction) without placing them in any categorical classificatory group or giving them an ambiguous name. The state of this situation may need to be reviewed in the future in light of new research results and paradigms.

Relevance: 30.00%

Abstract:

Although the aim of conservation planning is the persistence of biodiversity, current methods trade off ecological realism at a species level in favour of including multiple species and landscape features. For conservation planning to be relevant, the impact of landscape configuration on population processes and the viability of species needs to be considered. We present a novel method for selecting reserve systems that maximize persistence across multiple species, subject to a conservation budget. We use a spatially explicit metapopulation model to estimate extinction risk, a function of the ecology of the species and the amount, quality and configuration of habitat. We compare our new method with more traditional, area-based reserve selection methods, using a ten-species case study, and find that the expected loss of species is reduced 20-fold. Unlike previous methods, we avoid designating arbitrary weightings between reserve size and configuration; rather, our method is based on population processes and is grounded in ecological theory.
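A budget-constrained selection of this kind can be sketched as a greedy search that repeatedly adds the site giving the largest reduction in expected species loss per unit cost. This is a toy stand-in for the paper's metapopulation-based method, with made-up extinction risks and costs (and independence across sites assumed for brevity):

```python
def expected_species_loss(ext_risk, selected):
    """Expected number of species lost: ext_risk[s][i] is the extinction risk
    of species s if site i alone is protected; risks multiply across the
    selected sites (an independence assumption made purely for this sketch)."""
    total = 0.0
    for risks in ext_risk:
        p_extinct = 1.0
        for i in selected:
            p_extinct *= risks[i]
        total += p_extinct
    return total

def greedy_reserve(ext_risk, cost, budget):
    """Repeatedly add the affordable site giving the largest drop in expected
    loss per unit cost, until no site fits the remaining budget."""
    selected, spent = [], 0.0
    while True:
        current = expected_species_loss(ext_risk, selected)
        best, best_gain = None, 0.0
        for i in range(len(cost)):
            if i in selected or spent + cost[i] > budget:
                continue
            gain = (current - expected_species_loss(ext_risk, selected + [i])) / cost[i]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return selected
        selected.append(best)
        spent += cost[best]

# Hypothetical 3 species x 4 sites extinction risks and site costs
risks = [[0.2, 0.9, 0.9, 0.5],
         [0.9, 0.3, 0.9, 0.5],
         [0.9, 0.9, 0.2, 0.5]]
cost = [10.0, 10.0, 10.0, 12.0]
picked = greedy_reserve(risks, cost, budget=30.0)
print(picked)  # [3, 0]
```

The greedy rule is only a heuristic, but it captures the key shift the paper argues for: sites are scored by their contribution to persistence, not by area or feature counts.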

Relevance: 30.00%

Abstract:

Background: Early detection and treatment of mental disorders in adolescents and young adults can lead to better health outcomes. Mental health literacy is a key to early recognition and help seeking. Whilst a number of population health initiatives have attempted to improve mental health literacy, none to date have specifically targeted young people, nor have they applied the rigorous standards of population health models now accepted as best practice in other health areas. This paper describes the outcomes from the application of a health promotion model to the development, implementation and evaluation of a community awareness campaign designed to improve mental health literacy and early help seeking amongst young people. Method: The Compass Strategy was implemented in the western metropolitan Melbourne and Barwon regions of Victoria, Australia. The Precede-Proceed Model guided the population assessment, campaign strategy development and evaluation. The campaign included the use of multimedia, a website, and an information telephone service. Multiple levels of evaluation were conducted. These included a cross-sectional telephone survey of mental health literacy undertaken before and after 14 months of the campaign using a quasi-experimental design. Randomly selected independent samples of 600 young people aged 12-25 years from the experimental region and another 600 from a comparison region were interviewed at each time point. A series of binary logistic regression analyses was used to measure the association between a range of campaign outcome variables and the predictor variables of region and time.
Results: The program was judged to have had an impact on the following variables, as indicated by significant region-by-time interaction effects (p < 0.05): awareness of mental health campaigns, self-identified depression, help for depression sought in the previous year, correct estimate of prevalence of mental health problems, increased awareness of suicide risk, and a reduction in perceived barriers to help seeking. These effects may be underestimated because a media distribution error resulted in a small amount of print material leaking into the comparison region. Conclusion: We believe this is the first study to apply the rigorous standards of a health promotion model, including the use of a control region, to a mental health population intervention. The program achieved many of its aims despite the relatively short duration and moderate intensity of the campaign.

Relevance: 30.00%

Abstract:

Background: Injecting drug use (IDU) and associated mortality appear to be increasing in many parts of the world. IDU is an important factor in HIV transmission. In estimating AIDS mortality attributable to IDU, it is important to take account of premature mortality from other causes to ensure that AIDS-related mortality among injecting drug users (IDUs) is not overestimated. The current review provides estimates of the excess non-AIDS mortality among IDUs. Method: Searches were conducted with Medline, PsycINFO, and the Web of Science. The authors also searched reference lists of identified papers and an earlier literature review by English et al. (1995). Crude mortality rates (CMRs) were derived from data on the number of deaths, period of follow-up, and number of participants. In estimating all-cause mortality, two rates were calculated: one that included all cohort studies identified in the search, and one that included only studies that reported on AIDS deaths in their cohort. This provided lower and upper mortality rates, respectively. Results: The current paper derived weighted mortality rates based upon cohort studies that included 179,885 participants, 1,219,422 person-years of observation, and 16,593 deaths. The weighted crude AIDS mortality rate from studies that reported AIDS deaths was approximately 0.78% per annum. The median estimated non-AIDS mortality rate was 1.08% per annum. Conclusions: Illicit drug users have a greatly increased risk of premature death, and mortality due to AIDS forms a significant part of that increased risk; it is, however, only part of that risk. Future work needs to examine mortality rates among IDUs in developing countries, and to collect data on the relation between HIV and increased mortality due to all causes among this group.
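The crude mortality rate underlying these figures is simply deaths per 100 person-years. Applying it to the pooled totals quoted above gives an overall unweighted rate, which is a cruder summary than the weighted rates the review reports:

```python
def crude_mortality_rate(deaths, person_years):
    """Crude mortality rate, expressed as % per annum (deaths per 100 person-years)."""
    return 100.0 * deaths / person_years

# Pooled figures quoted in the review: 16,593 deaths over 1,219,422 person-years
cmr = crude_mortality_rate(16593, 1219422)
print(round(cmr, 2))  # 1.36
```

This simple pooled figure (about 1.36% per annum) sits above the review's median non-AIDS rate of 1.08%, which is expected: pooling over all cohorts weights large, high-mortality cohorts heavily, whereas the review's weighted and median summaries do not.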

Relevance: 30.00%

Abstract:

A number of systematic conservation planning tools are available to aid in making land use decisions. Given the increasing worldwide use and application of reserve design tools, including measures of site irreplaceability, it is essential that methodological differences and their potential effect on conservation planning outcomes are understood. We compared the irreplaceability of sites for protecting ecosystems within the Brigalow Belt Bioregion, Queensland, Australia, using two alternative reserve system design tools, Marxan and C-Plan. We set Marxan to generate multiple reserve systems that met targets with minimal area; the first scenario ignored spatial objectives, while the second selected compact groups of areas. Marxan calculates the irreplaceability of each site as the proportion of solutions in which it occurs for each of these set scenarios. In contrast, C-Plan uses a statistical estimate of irreplaceability as the likelihood that each site is needed in all combinations of sites that satisfy the targets. We found that sites containing rare ecosystems are almost always irreplaceable regardless of the method. Importantly, Marxan and C-Plan gave similar outcomes when spatial objectives were ignored. Marxan with a compactness objective defined twice as much area as irreplaceable, including many sites with relatively common ecosystems. However, targets for all ecosystems were met using a similar amount of area in C-Plan and Marxan, even with compactness. The importance of differences in the outcomes of using the two methods will depend on the question being addressed; in general, the use of two or more complementary tools is beneficial.
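Marxan-style irreplaceability, as described above, is just the selection frequency of each site across the set of alternative solutions. A minimal sketch with hypothetical solutions:

```python
def selection_frequency(solutions, n_sites):
    """Marxan-style irreplaceability: the fraction of alternative reserve
    solutions in which each site appears."""
    freq = [0.0] * n_sites
    for sol in solutions:
        for site in sol:
            freq[site] += 1
    return [f / len(solutions) for f in freq]

# Four hypothetical near-optimal reserve solutions over 5 sites; site 0
# (holding a rare ecosystem, say) appears in every one
solutions = [{0, 1, 3}, {0, 2, 3}, {0, 1, 4}, {0, 2, 4}]
freq = selection_frequency(solutions, 5)
print(freq)  # [1.0, 0.5, 0.5, 0.5, 0.5]
```

A site with frequency 1.0 behaves like a C-Plan "totally irreplaceable" site, while interchangeable sites holding common ecosystems share lower frequencies; this is why the two tools agree on rare-ecosystem sites but can diverge elsewhere, especially once spatial objectives such as compactness are added.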

Relevance: 30.00%

Abstract:

In this paper, we examine the problem of fitting a hypersphere to a set of noisy measurements of points on its surface. Our work generalises an estimator of Delogne (Proc. IMEKO-Symp. Microwave Measurements, 1972, 117-123), which he proposed for circles and which has been shown by Kasa (IEEE Trans. Instrum. Meas. 25, 1976, 8-14) to be convenient for its ease of analysis and computation. We also generalise Chan's 'circular functional relationship' to describe the distribution of points. We derive the Cramer-Rao lower bound (CRLB) under this model, and we derive approximations for the mean and variance for fixed sample sizes when the noise variance is small. We perform a statistical analysis of the estimate of the hypersphere's centre. We examine the existence of the mean and variance of the estimator for fixed sample sizes. We find that the mean exists when the number of sample points is greater than M + 1, where M is the dimension of the hypersphere. The variance exists when the number of sample points is greater than M + 2. We find that the bias approaches zero as the noise variance diminishes and that the variance approaches the CRLB. We provide simulation results to support our findings.
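The convenience of the Delogne/Kasa estimator comes from the fact that fitting a hypersphere becomes a linear least-squares problem: for points x on a sphere with centre c and radius R, |x|^2 = 2 c.x + (R^2 - |c|^2), which is linear in c and t = R^2 - |c|^2. A sketch (a noise-free 2-D example, not the paper's statistical analysis):

```python
def kasa_fit(points):
    """Delogne/Kasa-style hypersphere fit: |x|^2 = 2 c.x + t is linear in the
    centre c and t = R^2 - |c|^2, so solve it by linear least squares."""
    m = len(points[0])                      # dimension of the ambient space
    n_unk = m + 1
    # Build the normal equations A^T A z = A^T b for rows [2*x, 1], b = |x|^2
    ata = [[0.0] * n_unk for _ in range(n_unk)]
    atb = [0.0] * n_unk
    for p in points:
        row = [2.0 * v for v in p] + [1.0]
        b = sum(v * v for v in p)
        for i in range(n_unk):
            atb[i] += row[i] * b
            for j in range(n_unk):
                ata[i][j] += row[i] * row[j]
    z = _solve(ata, atb)
    c, t = z[:m], z[m]
    radius = (t + sum(v * v for v in c)) ** 0.5
    return c, radius

def _solve(a, b):
    """Gaussian elimination with partial pivoting (tiny dense systems only)."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= f * a[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][k] * x[k] for k in range(r + 1, n))) / a[r][r]
    return x

# Noise-free points on a circle of centre (1, 2) and radius 3: the fit is exact
pts = [(4.0, 2.0), (-2.0, 2.0), (1.0, 5.0), (1.0, -1.0)]
centre, r = kasa_fit(pts)
print([round(v, 6) for v in centre], round(r, 6))  # [1.0, 2.0] 3.0
```

With noisy points the same linear solve returns the (biased but cheap) estimate whose mean and variance the paper analyses; no iterative geometric fitting is required.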

Relevance: 30.00%

Abstract:

Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) which has similar antifungal activity. ITRA is a highly lipophilic drug which is available in two different oral formulations, a capsule and an oral solution. It is reported that the oral solution has a 60% higher relative bioavailability. The influence of altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not been previously evaluated. Objectives: 1) To estimate the population (pop) PK parameters for ITRA and its active metabolite OH-ITRA including relative bioavailability of the parent after administration of the parent by both capsule and solution and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and 3 days later the solution formulation. The design was constrained to have a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and provide an evaluation of the information matrix for a nonlinear mixed effects model given a particular design. In addition it can be used to optimize the design based on evaluation of the determinant of the information matrix. The model details for the design were based on prior information obtained from the literature, which suggested that ITRA may have either linear or non-linear elimination. The optimal sampling times were evaluated to provide information for both competing models for the parent and metabolite and for both capsule and solution simultaneously. 
Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations less than the LOD (limit of detection). These methods (corresponding to methods 5, 6 & 4 from Beal[3], respectively) were (a) where all values less than the LOD were assigned half the LOD, (b) where the closest missing value less than the LOD was assigned half the LOD and all previous (if during absorption) or subsequent (if during elimination) missing samples were deleted, and (c) where the contribution of the expectation of each missing concentration to the likelihood is estimated. The LOD was 0.04 mg/L. The final model evaluation was performed via bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below the LOD. The entire profile after administration of the capsule for five patients was below the LOD, and therefore the data from this occasion were omitted from estimation. A 2-compartment model with 1st order absorption and elimination best described ITRA PK, with 1st order metabolism of the parent to OH-ITRA.
For ITRA the clearance (ClItra/F) was 31.5 L/h; apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for the capsule (kacap) and solution (kasol) were 0.0315 h-1 and 0.125 h-1, respectively. Comparative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. The parameter estimates from the final model were comparable between the different methods for accounting for missing data (M4, 5, 6).[3] The prospective application of an optimal design was found to be successful. Owing to the sampling windows, most of the samples could be collected within the daily hospital routine, but still at times that were near optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and the empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would provide a target Cmin,ss (0.5-2 mg/L) for only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this dosing regimen was 87% for the solution, with an NNT of 4 compared to capsules. This means that for every 4 patients treated with the solution, one additional patient will achieve target success compared to the capsule, but at an additional cost of AUD $220 per day. The therapeutic target, however, is still doubtful, and the potential risks of these dosing schedules need to be assessed on an individual basis.
Conclusion: A model was developed which described the popPK of ITRA and its main active metabolite OH-ITRA in adults with CF after administration of both the capsule and the solution. The relative bioavailability of ITRA from the capsule was 82% of that of the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (half the LOD for all samples below the LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimation of the model parameters and provided a good fit to the data.
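Beal's method 5, which the study found adequate, is the simplest of the three: every below-LOD observation is imputed as LOD/2. A sketch with a hypothetical concentration profile:

```python
def apply_beal_m5(concs, lod):
    """Beal's method 5 for below-LOD data: every observation reported as
    below the LOD is replaced by LOD/2 (None marks a below-LOD sample)."""
    return [lod / 2.0 if c is None or c < lod else c for c in concs]

lod = 0.04  # mg/L, the assay limit of detection quoted above
profile = [None, 0.03, 0.21, 0.55, 0.32, 0.07, None]  # hypothetical ITRA profile
cleaned = apply_beal_m5(profile, lod)
print(cleaned)  # [0.02, 0.02, 0.21, 0.55, 0.32, 0.07, 0.02]
```

Method 4, by contrast, keeps the censoring in the likelihood itself rather than imputing a value, which is why it is theoretically preferable but harder to implement.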

Relevance: 30.00%

Abstract:

Background Medicines reconciliation (identifying and maintaining an accurate list of a patient's current medications) should be undertaken at all transitions of care and available to all patients. Objective A self-completion web survey was conducted for chief pharmacists (or equivalent) to evaluate medicines reconciliation levels in secondary care mental health organisations. Setting The survey was sent to secondary care mental health organisations in England, Scotland, Northern Ireland and Wales. Method The survey was launched via Bristol Online Surveys. Quantitative data were analysed using descriptive statistics and qualitative data were collected through respondents' free-text answers to specific questions. Main outcome measures To investigate how medicines reconciliation is delivered, to incorporate a clear description of the role of pharmacy staff, and to identify areas of concern. Results Forty-two surveys (52% response rate) were completed. Thirty-seven (88.1%) organisations have a formal policy for medicines reconciliation with defined steps. Results show that the pharmacy team (pharmacists and pharmacy technicians) are the main professionals involved in medicines reconciliation, with a high rate of doctors also involved. Training procedures frequently include an induction by pharmacy for doctors, whilst the pharmacy team are generally trained by another member of pharmacy. Mental health organisations estimate that nearly 80% of medicines reconciliation is carried out within 24 h of admission. A full medicines reconciliation is not carried out on patient transfer between mental health wards; instead, quicker and less exhaustive variations are implemented. 71.4% of organisations estimate that pharmacy staff conduct daily medicines reconciliation for acute admission wards (Monday to Friday). However, only 38% of organisations report that pharmacy reconciles patients' medication for other teams that admit from primary care.
Conclusion Most mental health organisations appear to be complying with NICE guidance on medicines reconciliation for their acute admission wards. However, medicines reconciliation is conducted less frequently on other units that admit from primary care, and is rarely completed on transfer when the medication list significantly differs from that on admission. Formal training and competency assessments on medicines reconciliation should be considered, as current training varies and adherence to best practice is questionable.

Relevance: 30.00%

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers.
In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.

Relevance: 30.00%

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
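The cost that motivates the sparse approach is visible in even the smallest kriging example: the predictive distribution at an unsampled location requires solving with the full n x n covariance matrix, which is O(n^3) in general. A minimal Gaussian random field (kriging) sketch with a squared-exponential covariance and illustrative data:

```python
import math

def sqexp(x, y, length=1.0, var=1.0):
    """Squared-exponential covariance for a 1-D Gaussian random field."""
    return var * math.exp(-0.5 * ((x - y) / length) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting (tiny dense systems only)."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= f * a[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][k] * x[k] for k in range(r + 1, n))) / a[r][r]
    return x

def krige(xs, ys, x_star, noise=1e-6):
    """Posterior (kriging) mean and variance at x_star; note the full n x n
    covariance solve, which is what limits this to small data sets."""
    n = len(xs)
    K = [[sqexp(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [sqexp(x, x_star) for x in xs]
    alpha = solve(K, ys)                    # K^{-1} y
    v = solve(K, k_star)                    # K^{-1} k_*
    mean = sum(a * b for a, b in zip(k_star, alpha))
    var = sqexp(x_star, x_star) - sum(a * b for a, b in zip(k_star, v))
    return mean, var

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
mean, var = krige(xs, ys, 1.5)
print(round(mean, 3), round(var, 6))
```

The sparse sequential method described above replaces the full set of samples with a retained subset of "basis vectors", so that this solve stays small even as the data set grows.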

Relevance: 30.00%

Abstract:

The literature relating to sieve plate liquid extraction columns and relevant hydrodynamic phenomena has been surveyed. Mass transfer characteristics during drop formation, rise and coalescence, and related models, were also reviewed. Important design parameters, i.e. flooding, dispersed phase hold-up, drop size distribution, mean drop size, coalescence/flocculation zone height beneath a plate and jetting phenomena, were investigated under non-mass transfer and mass transfer conditions in a 0.45 m diameter, 2.3 m high sieve plate column. This column had provision for four different plate designs, and variable plate spacing and downcomer heights; the system used was Clairsol `350' (dispersed) - acetone - deionised water (continuous), with either direction of mass transfer. Drop size distributions were best described by the functions proposed by Gal-or, and then Mugele-Evans. Using data from this study and the literature, correlations were developed for dispersed phase hold-up, mean drop size in the preferred jetting regime and in the non-jetting regime, and coalescence zone height. A method to calculate the theoretical overall mass transfer coefficient, allowing for the range of drop sizes encountered in the column, gave the best fit to experimental data. This applied the drop size distribution diagram to estimate the volume percentage of stagnant, circulating and oscillating drops in the drop population. The overall coefficient Kcal was then calculated as the fractional sum of the predicted individual single drop coefficients and their proportion in the drop population. In a comparison between the experimental and calculated overall mass transfer coefficients for cases in which all the drops were in the oscillating regime (i.e. the 6.35 mm hole size plate), and for transfer from the dispersed (d) to continuous (c) phase, the film coefficient kd predicted from the Rose-Kintner correlation, together with kc from that of Garner-Tayeban, gave the best representation.
Droplets from the 3.175 mm hole size plate were of a size to be mainly circulating and oscillating; a combination of kd from the Kronig-Brink (circulating) and Rose-Kintner (oscillating) correlations with the respective kc gave the best agreement. The optimum operating conditions for the sieve plate column were identified and a procedure proposed for design from basic single drop data.
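The fractional-sum construction of the overall coefficient described above is a volume-weighted average over the drop regimes. A sketch with hypothetical volume fractions and single-drop coefficients:

```python
def overall_coefficient(fractions, coefficients):
    """Overall mass transfer coefficient as the fractional sum of single-drop
    coefficients for the stagnant, circulating and oscillating drop regimes."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * k for f, k in zip(fractions, coefficients))

# Hypothetical volume fractions from a drop size distribution, and illustrative
# single-drop coefficients (m/s) for each regime; not values from the thesis
fractions = [0.10, 0.35, 0.55]           # stagnant, circulating, oscillating
k_single = [2.0e-6, 1.5e-5, 6.0e-5]
k_overall = overall_coefficient(fractions, k_single)
print(round(k_overall, 9))  # 3.845e-05
```

In the thesis, each regime's coefficient comes from its own single-drop correlation (e.g. Kronig-Brink for circulating drops, Rose-Kintner for oscillating drops), and the fractions come from the measured drop size distribution.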

Relevance: 30.00%

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the `classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful since there is very little other data available from which to produce an estimate. 
A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially-developed LPA Prolog programs. By re-defining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
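Re-defining a metric count for Prolog can be as simple as deciding what a "line of code" is in that language. A toy version of such a count, excluding blank lines and '%' comment lines (an illustration only, not the thesis's actual counting rules):

```python
def prolog_loc(source):
    """A toy 'lines of code' count for Prolog: non-blank lines that are not
    pure '%' comment lines (block comments are ignored for simplicity)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith('%'):
            count += 1
    return count

program = """\
% naive list length
len([], 0).
len([_|T], N) :-
    len(T, M),
    N is M + 1.
"""
loc = prolog_loc(program)
print(loc)  # 4
```

The thesis's other re-defined counts follow the same pattern: each classic metric (Halstead's operators/operands, McCabe's cyclomatic complexity, fan-in/out) needs its primitive counting rules restated in terms of Prolog clauses, goals and terms before it can be collected.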