43 results for Entropy of a sampling design
at University of Queensland eSpace - Australia
Abstract:
Objective: To describe and analyse the study design and manuscript deficiencies in original research articles submitted to Emergency Medicine. Methods: This was a retrospective, analytical study. Articles were enrolled if the reports of the Section Editor and two reviewers were available. Data were extracted from these reports only. Outcome measures were the mean number and nature of the deficiencies and the mean reviewers’ assessment score. Results: Fifty-seven articles were evaluated (28 accepted for publication, 19 rejected, 10 pending revision). The mean (± SD) number of deficiencies was 18.1 ± 6.9, 16.4 ± 6.5 and 18.4 ± 6.7 for all articles, articles accepted for publication and articles rejected, respectively (P = 0.31 between accepted and rejected articles). The mean assessment scores (0–10) were 5.5 ± 1.5, 5.9 ± 1.5 and 4.7 ± 1.4 for all articles, articles accepted for publication and articles rejected, respectively. Accepted articles had a significantly higher assessment score than rejected articles (P = 0.006). For each group, there was a negative correlation between the number of deficiencies and the mean assessment score (P > 0.05). Significantly more rejected articles ‘… did not further our knowledge’ (P = 0.0014) and ‘… did not describe background information adequately’ (P = 0.049). Many rejected articles had ‘… findings that were not clinically or socially significant’ (P = 0.07). Common deficiencies among all articles included ambiguity of the methods (77%) and results (68%), conclusions not warranted by the data (72%), poor referencing (56%), inadequate study design description (51%), unclear tables (49%), an overly long discussion (49%), limitations of the study not described (51%), inadequate definition of terms (49%) and subject selection bias (40%). Conclusions: Researchers should undertake studies that are likely to further our knowledge and be clinically or socially significant. Deficiencies in manuscript preparation are more frequent than mistakes in study design and execution. Specific training or assistance in manuscript preparation is indicated.
Abstract:
Radar target identification based on complex natural resonances is sometimes achieved by convolving a linear time-domain filter with a received target signature. The filter is constructed from measured or pre-calculated target resonances. The performance of the target identification procedure is degraded if the difference between the sampling rates of the target signature and the filter is ignored. The problem is investigated for the natural extinction pulse technique (E-pulse) for the case of identifying stick models of aircraft.
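As a hedged illustration of the sampling-rate issue raised in this abstract, the sketch below resamples an E-pulse filter to the signature's sampling rate before convolution and returns a late-time energy discriminant. The waveforms, rates and the `late_start` index are invented placeholders, not the paper's stick-model data.

```python
# Minimal sketch of resonance-based identification with an E-pulse,
# showing the sampling-rate correction discussed in the abstract.
import numpy as np
from scipy.signal import resample

def epulse_discriminant(epulse, fs_epulse, signature, fs_signature, late_start):
    """Convolve an E-pulse with a target signature, resampling the E-pulse
    to the signature's rate first, and return the late-time energy.
    A small value suggests the signature matches the target the E-pulse
    was constructed for."""
    if fs_epulse != fs_signature:
        # Resample the filter so both sequences share one sampling rate;
        # skipping this step is the degradation mode the abstract analyses.
        n_new = int(round(len(epulse) * fs_signature / fs_epulse))
        epulse = resample(epulse, n_new)
    response = np.convolve(epulse, signature)
    late = response[late_start:]        # samples after the early-time window
    return float(np.sum(late ** 2))     # late-time energy (discriminant)
```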
Abstract:
Socioeconomic considerations should have an important place in reserve design. Systematic reserve-selection tools allow simultaneous optimization for ecological objectives while minimizing costs, but are seldom used to incorporate socioeconomic costs in the reserve-design process. The sensitivity of this process to biodiversity data resolution has been studied widely, but the issue of socioeconomic data resolution has not previously been considered. We therefore designed marine reserves for biodiversity conservation with the constraint of minimizing commercial fishing revenue losses and investigated how economic data resolution affected the results. Incorporating coarse-resolution economic data from official statistics generated reserves that were only marginally less costly to the fishery than those designed with no attempt to minimize economic impacts. An intensive survey yielded fine-resolution data that, when incorporated in the design process, substantially reduced predicted fishery losses. Such an approach could help minimize fisher displacement because the least profitable grounds are selected for the reserve. Other work has shown that low-resolution biodiversity data can lead to underestimation of the conservation value of some sites and a risk of overlooking the most valuable areas; we have similarly shown that low-resolution economic data can cause underestimation of the profitability of some sites and a risk of inadvertently including these in the reserve. Detailed socioeconomic data are therefore an essential input for the design of cost-effective reserve networks.
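The selection logic this abstract relies on can be sketched as a toy greedy heuristic in the spirit of systematic reserve-selection tools; this is an illustrative assumption, not the authors' actual algorithm, and the `sites`/`targets` structures are hypothetical.

```python
# Toy greedy reserve selection: meet all biodiversity targets while
# preferring sites that forgo the least fishing revenue per feature.
def select_reserve(sites, targets):
    """sites: {site_id: (fishing_revenue, set_of_features)}
    targets: set of biodiversity features that must all be represented."""
    reserve = set()
    unmet = set(targets)
    candidates = dict(sites)  # work on a copy
    while unmet and candidates:
        # Pick the site covering the most unmet features per unit of
        # forgone revenue -- the role fine-resolution economic data
        # plays in the design process described above.
        def score(item):
            _, (revenue, features) = item
            return len(features & unmet) / (revenue + 1e-9)
        site_id, (revenue, features) = max(candidates.items(), key=score)
        if not features & unmet:
            break  # remaining targets cannot be covered by any site
        reserve.add(site_id)
        unmet -= features
        del candidates[site_id]
    return reserve

# Hypothetical usage: three sites, three features to represent.
sites = {
    "A": (120.0, {"reef", "seagrass"}),
    "B": (15.0, {"reef"}),
    "C": (40.0, {"seagrass", "mangrove"}),
}
print(select_reserve(sites, {"reef", "seagrass", "mangrove"}))  # {'B', 'C'}
```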
Abstract:
The goal of this manuscript is to introduce a framework for consideration of designs for population pharmacokinetic or pharmacokinetic-pharmacodynamic studies. A standard one-compartment pharmacokinetic model with first-order input and elimination is considered. A series of theoretical designs is considered, exploring the influence of optimizing the allocation of sampling times, the allocation of patients to elementary designs, sparse sampling and unbalanced designs, and single- versus multiple-dose designs. It was found that what appears to be relatively sparse sampling (fewer blood samples per patient than the number of fixed-effects parameters to estimate) can also be highly informative. Overall, it is evident that exploring the population design space can yield many parsimonious designs that are efficient for parameter estimation and that might not otherwise have been considered without the aid of optimal design theory.
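As a hedged sketch of how candidate sparse designs can be ranked for the one-compartment model with first-order input and elimination, the code below builds a local Fisher information matrix from numerical sensitivities and compares a D-criterion across two hypothetical three-sample schedules. Parameter values, the error model and the schedules are invented; a real evaluation would use dedicated population-design software such as PFIM or PopED.

```python
# D-criterion comparison of candidate sampling schedules for a
# one-compartment model with first-order input and elimination.
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """Concentration after a single dose with first-order input."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim_determinant(times, theta=(1.0, 0.1, 20.0), h=1e-5, sigma=0.1):
    """D-criterion (det of Fisher information) for fixed effects (ka, ke, V),
    using central-difference sensitivities and additive error sigma."""
    times = np.asarray(times, dtype=float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[j] += h
        dn[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    fim = J.T @ J / sigma**2
    return np.linalg.det(fim)

# Three samples per subject can be highly informative if placed well:
print(fim_determinant([0.25, 2.0, 12.0]))  # sparse, spread over the profile
print(fim_determinant([0.5, 1.0, 1.5]))    # sparse, clustered early
```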
Abstract:
A variable that appears to affect preference development is the exposure to a variety of options. Providing opportunities for systematically sampling different options is one procedure that can facilitate the development of preference, which is indicated by the consistency of selections. The purpose of this study was to evaluate the effects of providing sampling opportunities on the preference development for two adults with severe disabilities. Opportunities for sampling a variety of drink items were presented, followed by choice opportunities for selections at the site where sampling occurred and at a non-sampling site (a grocery store). Results show that the participants developed a definite response consistency in selections at both sites. Implications for sampling practices are discussed.
Abstract:
Cases of high-sided vehicles striking low bridges are a significant problem in many countries, especially the UK. This paper describes an experiment to evaluate a new design of markings for low bridges. A full-size bridge was constructed that was capable of having its overhead clearance adjusted. Subjects sat in a truck cab as it drove towards the bridge and were asked to judge whether the vehicle could pass safely under the bridge. The main objective of the research was to determine whether marking the bridge with a newly devised experimental marking would result in more cautious decisions from subjects regarding whether or not the experimental bridge structure could be passed under safely, compared with the current UK bridge marking standard. The results show that the type of bridge marking influenced the level of caution associated with decisions regarding bridge navigation, with the new marking design producing the most cautious decisions for the two different bridge heights used, at all distances from the bridge structure. Additionally, the distance before the bridge at which decisions were given had an effect on the level of caution (the closer to the bridge, the more cautious the decisions became, irrespective of the marking design). The implications of these results for reducing the number of bridge strikes are discussed.
Abstract:
This study used faecal pellets to investigate the broadscale distribution and diet of koalas in the mulgalands biogeographic region of south-west Queensland. Koala distribution was determined by conducting faecal pellet searches within a 30-cm radius of the base of eucalypts on 149 belt transects, located using a multi-scaled stratified sampling design. Cuticular analysis of pellets collected from 22 of these sites was conducted to identify the dietary composition of koalas within the region. Our data suggest that koala distribution is concentrated in the northern and more easterly regions of the study area, and appears to be strongly linked with annual rainfall. Over 50% of our koala records were obtained from non-riverine communities, indicating that koalas in the study area are not primarily restricted to riverine communities, as has frequently been suggested. Cuticular analysis indicates that more than 90% of koala diet within the region consists of five eucalypt species. Our data highlight the importance of residual Tertiary landforms to koala conservation in the region.
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a ≥ 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
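The abstract does not state the formulas, but the standard definitions can be reconstructed, with hedging, from the anisotropy-based robust control literature; the symbols m, f, h and the norm notation below are our assumptions, not fixed by the abstract.

```latex
% Anisotropy of an m-dimensional finite-power random vector w with
% density f: the minimal Kullback--Leibler divergence from the Gaussian
% densities p_{m,\lambda} with zero mean and scalar covariance \lambda I_m.
\[
  \mathbf{A}(w)
  \;=\; \min_{\lambda>0} D\!\left(f \,\big\|\, p_{m,\lambda}\right)
  \;=\; \frac{m}{2}\,
        \ln\!\left(\frac{2\pi e}{m}\,\mathbf{E}\,|w|^{2}\right) - h(w),
\]
% where h(w) = -\mathbf{E}\ln f(w) is the differential entropy of w.
% The a-anisotropic norm of a matrix F is then its worst-case root mean
% square energy gain over inputs of anisotropy at most a:
\[
  \|F\|_{a}
  \;=\; \sup\left\{
    \frac{\sqrt{\mathbf{E}\,|Fw|^{2}}}{\sqrt{\mathbf{E}\,|w|^{2}}}
    \;:\; \mathbf{A}(w) \le a
  \right\}.
\]
```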
Abstract:
Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified, randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified for body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision and was similar to the model developed from the full dataset. This study suggests optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviated from the nominal design.
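A minimal sketch of the posterior predictive comparison described here: models fitted to each design's subset predict the full data set, and their precision is compared. The function and arrays are hypothetical stand-ins; the study itself used NONMEM population fits, not this simplified computation.

```python
# Compare predictive precision of two design-derived models against
# observations from the full data set.
import numpy as np

def predictive_precision(observed, predicted):
    """Return (RMSE, mean error): imprecision and bias of predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - observed
    return float(np.sqrt(np.mean(err**2))), float(np.mean(err))

# Hypothetical usage with predictions from the two design-derived models:
# rmse_opt, bias_opt = predictive_precision(full_data_conc, pred_optimal)
# rmse_emp, bias_emp = predictive_precision(full_data_conc, pred_empirical)
```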
Abstract:
We introduce a novel way of measuring the entropy of a set of values undergoing changes. Such a measure becomes useful when analyzing the temporal development of an algorithm designed to numerically update a collection of values, such as artificial neural network weights undergoing adjustment during learning. We measure the entropy as a function of the phase space of the values, i.e. their magnitude and velocity of change, using a method based on the abstract measure of entropy introduced by the philosopher Rudolf Carnap. By constructing a time-dynamic two-dimensional Voronoi diagram whose cell generators have coordinates of value and value-velocity (rate of change of magnitude), the entropy becomes a function of the cell areas. We term this measure teleonomic entropy, since it can be used to describe changes in any end-directed (teleonomic) system. The usefulness of the method is illustrated by comparing the different approaches of two search algorithms: a learning artificial neural network and a population of discovering agents.
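A hedged sketch of the phase-space construction: each tracked value contributes a (magnitude, velocity) generator, a Voronoi diagram is built over the generators, and an entropy is computed from the cell areas. Using a Shannon-style sum over normalised finite-cell areas is our assumption, since the abstract does not give Carnap's exact functional.

```python
# Entropy of a set of changing values from the areas of their
# (value, velocity) Voronoi cells.
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(pts):
    """Shoelace formula for a 2-D polygon given as an (n, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def phase_space_entropy(values, velocities):
    """Entropy of the (value, velocity) phase space from Voronoi cell areas.
    Unbounded hull cells are skipped, a simplification; a bounded
    implementation would clip cells to a window around the data."""
    vor = Voronoi(np.column_stack([values, velocities]))
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if region and -1 not in region:  # keep finite cells only
            areas.append(polygon_area(vor.vertices[region]))
    p = np.array(areas) / np.sum(areas)  # normalised areas as probabilities
    return float(-np.sum(p * np.log(p)))

# Example: weights of a toy network and their per-step changes.
rng = np.random.default_rng(0)
w = rng.normal(size=50)
dw = rng.normal(scale=0.1, size=50)
print(phase_space_entropy(w, dw))
```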
Abstract:
The published requirements for accurate measurement of heat transfer at the interface between two bodies have been reviewed. A strategy for reliable measurement has been established, based on the depth of the temperature sensors in the medium, on the inverse method parameters, and on the time response of the sensors. Sources of both deterministic and stochastic errors have been investigated and a method to evaluate them has been proposed, with the help of a normalisation technique. The key normalisation variables are the duration of the heat input and the maximum heat flux density. An example of the application of this technique in the field of high-pressure die casting is demonstrated. The normalisation study, coupled with previous determination of the heat input duration, makes it possible to determine the optimum location for the sensors, along with an acceptable sampling rate and the thermocouples' critical response time (as well as any filter characteristics). Results from the gauge are used to assess the suitability of the initial design choices. In particular, the unavoidable response time of the thermocouples is estimated by comparison with the normalised simulation.
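As a hedged illustration of the response-time effect estimated in this work, the sketch below passes a normalised temperature pulse through a first-order thermocouple model; the pulse shape and time constant are invented, and the paper's estimate comes from comparison with a normalised die-casting simulation rather than this toy calculation.

```python
# First-order sensor lag: dT_m/dt = (T - T_m) / tau, stepped with
# explicit Euler over a normalised heat-input pulse.
import numpy as np

def first_order_response(t, T_true, tau):
    """Measured temperature from a thermocouple with time constant tau."""
    T_m = np.empty_like(T_true)
    T_m[0] = T_true[0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T_m[i] = T_m[i - 1] + dt * (T_true[i - 1] - T_m[i - 1]) / tau
    return T_m

# Normalised test case: time scaled by the heat-input duration (t in 0..1).
t = np.linspace(0.0, 1.0, 500)
T_true = np.exp(-((t - 0.3) / 0.1) ** 2)  # normalised temperature pulse
lagged = first_order_response(t, T_true, tau=0.05)
print(T_true.max(), lagged.max())  # peak attenuation caused by sensor lag
```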