795 results for Reliability allocation
Abstract:
The human connectome has recently become a popular research topic in neuroscience, and many new algorithms have been applied to analyze brain networks. In particular, network topology measures from graph theory have been adapted to analyze network efficiency and 'small-world' properties. While there has been a surge in the number of papers examining connectivity through graph theory, questions remain about its test-retest reliability (TRT). In particular, the reproducibility of structural connectivity measures has not been assessed. We examined the TRT of global connectivity measures generated from graph theory analyses of 17 young adults who underwent two high angular resolution diffusion imaging (HARDI) scans approximately 3 months apart. Of the measures assessed, modularity had the highest TRT, and it was stable across a range of sparsities (a thresholding parameter used to define which network edges are retained). These reliability results underline the need to develop network descriptors that are robust to acquisition parameters.
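As a rough illustration of the sparsity-thresholding step described above (the abstract does not specify the authors' actual pipeline), the sketch below thresholds a synthetic connectivity matrix at several sparsities and scores the modularity of each resulting graph; the matrix, the sparsity grid and the use of networkx's greedy modularity heuristic are all assumptions made for exposition.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)

def modularity_at_sparsity(W, sparsity):
    """Keep only the strongest `sparsity` fraction of possible edges,
    then score the modularity of the resulting binary graph."""
    iu = np.triu_indices_from(W, k=1)
    w = W[iu]
    k = max(1, int(round(sparsity * w.size)))
    cutoff = np.sort(w)[-k]              # weight of the k-th strongest edge
    A = np.zeros_like(W)
    A[iu] = W[iu] >= cutoff
    G = nx.from_numpy_array(A + A.T)
    return modularity(G, greedy_modularity_communities(G))

# Synthetic stand-in for a structural connectivity matrix (90 regions).
W = rng.random((90, 90))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
for s in (0.10, 0.20, 0.30, 0.40):
    print(f"sparsity {s:.2f}: Q = {modularity_at_sparsity(W, s):.3f}")
```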
Abstract:
The development of Electric Energy Storage (EES) integrated with Renewable Energy Resources (RER) has increased the use of optimum scheduling strategies in distribution systems. Optimum scheduling of EES can reduce the cost of energy purchased by retailers while improving the reliability of supply to customers in the distribution system. This paper proposes an optimum scheduling strategy for EES and evaluates its impact on the reliability of the distribution system. A case study shows the impact of the proposed strategy on the reliability indices of a distribution system.
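The abstract does not give the scheduling formulation, but a minimal linear-programming sketch of the general idea (buy and store energy when it is cheap, discharge when it is expensive) might look as follows. All prices, loads, renewable outputs and storage parameters are hypothetical, and efficiency losses, degradation and reliability constraints are deliberately ignored.

```python
import numpy as np
from scipy.optimize import linprog

# Hour-by-hour data (all hypothetical): retail price, load and renewable output.
price = np.array([0.10, 0.08, 0.25, 0.30, 0.22, 0.12])  # $/kWh
load  = np.array([2.0, 1.5, 3.0, 3.5, 3.0, 2.0])        # kWh
pv    = np.array([0.0, 0.5, 1.5, 1.0, 0.5, 0.0])        # kWh
T = len(price)
smax, s0, rate = 4.0, 1.0, 2.0   # storage capacity, initial charge, power limit

# Variables x = [g | c | d | s]: grid purchase, charge, discharge, state of charge.
n = 4 * T
cost = np.concatenate([price, np.zeros(3 * T)])  # only grid purchases cost money

A_eq, b_eq = [], []
for t in range(T):
    row = np.zeros(n)          # power balance: g_t - c_t + d_t = load_t - pv_t
    row[t], row[T + t], row[2 * T + t] = 1.0, -1.0, 1.0
    A_eq.append(row)
    b_eq.append(load[t] - pv[t])
    row = np.zeros(n)          # storage balance: s_t - s_{t-1} - c_t + d_t = 0
    row[3 * T + t], row[T + t], row[2 * T + t] = 1.0, -1.0, 1.0
    if t > 0:
        row[3 * T + t - 1] = -1.0
    A_eq.append(row)
    b_eq.append(s0 if t == 0 else 0.0)

bounds = [(0, None)] * T + [(0, rate)] * (2 * T) + [(0, smax)] * T
res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds, method="highs")
print(f"minimum purchase cost: ${res.fun:.2f}")
```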
Abstract:
This research investigates how to obtain accurate and reliable positioning results with global navigation satellite systems (GNSS). The work provides a theoretical framework for reliability control in GNSS carrier-phase ambiguity resolution, the key technique for precise GNSS positioning at the centimetre level. The proposed approach includes procedures for identifying and excluding unreliable solutions, together with hypothesis tests, allowing the reliability of solutions to be controlled in terms of the mathematical models, integer estimation and ambiguity acceptance tests. Extensive experiments with both simulated and observed data sets demonstrate the reliability performance achieved with the proposed theoretical framework and procedures.
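As one concrete example of an ambiguity acceptance test of the kind mentioned above, the toy sketch below applies the widely used fixed-threshold ratio test: fix the integer ambiguities only if the second-best integer candidate fits markedly worse than the best. The brute-force candidate search, the example numbers and the threshold c = 3 are illustrative assumptions; practical resolvers use decorrelating searches such as LAMBDA.

```python
import numpy as np
from itertools import product

def ratio_test(a_float, Q, c=3.0):
    """Accept the best integer ambiguity vector only if the second-best
    candidate's quadratic form is at least c times larger."""
    Qi = np.linalg.inv(Q)
    base = np.floor(a_float).astype(int)
    cands = [base + np.array(d) for d in product((0, 1), repeat=len(a_float))]
    dists = sorted(((float((a_float - z) @ Qi @ (a_float - z)), z) for z in cands),
                   key=lambda t: t[0])
    (d1, z1), (d2, _) = dists[0], dists[1]
    ratio = d2 / d1
    return (z1 if ratio >= c else None), ratio

a_float = np.array([3.2, -1.9])                 # float ambiguity estimates
Q = np.array([[0.04, 0.01], [0.01, 0.09]])      # their covariance
fixed, ratio = ratio_test(a_float, Q)
print("fixed:", fixed, "ratio: %.2f" % ratio)
```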
Abstract:
Objective: Evaluation of scapular posture is an integral component of the clinical assessment of painful neck disorders. The aim of this study was to evaluate agreement between therapists' judgements of scapular posture in multiple biomechanical planes in individuals with neck pain. Design: Inter-therapist reliability study. Setting: Research laboratory. Participants: Fifteen participants with chronic neck pain. Main outcome measures: Four physiotherapists recorded ratings of scapular orientation (relative to the thorax) in five different scapular postural planes (plane of the scapula, sagittal plane, transverse plane, horizontal plane, and vertical plane) under four test conditions (at rest, and during three isometric shoulder conditions) in all participants. Inter-therapist reliability was expressed using both generalized and paired kappa coefficients. Results: Following adjustment for expected agreement and the high prevalence of neutral ratings (81%), both the generalized kappa (0.37) and Cohen's kappa for the two therapist pairs (0.45 and 0.42) demonstrated only slight to moderate inter-therapist reliability. Conclusions: The findings suggest that ratings of scapular posture by visual inspection in individuals with neck pain have only slight to moderate reliability and should be used only in conjunction with other clinical tests when judging scapular function in these patients.
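For reference, Cohen's kappa, the paired agreement statistic reported above, corrects observed agreement for the agreement expected by chance given the raters' marginal rating frequencies, which is why the high prevalence of neutral ratings matters. A minimal sketch with made-up ratings:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e the chance agreement implied by the two raters' marginal frequencies."""
    cats = sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    M = np.zeros((len(cats), len(cats)))
    for a, b in zip(r1, r2):
        M[idx[a], idx[b]] += 1
    n = M.sum()
    po = np.trace(M) / n
    pe = (M.sum(axis=1) @ M.sum(axis=0)) / n**2
    return (po - pe) / (1 - pe)

# Mostly "neutral" ratings: raw agreement is 13/15, yet kappa is modest.
r1 = ["neutral"] * 12 + ["downward", "neutral", "upward"]
r2 = ["neutral"] * 11 + ["downward", "downward", "neutral", "neutral"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")   # ~0.44
```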
Abstract:
Background: The Simple Shoulder Test (SST) is a widely used outcome measure. Objective: The purpose of this study was to develop and validate a Spanish version of the SST (SST-Sp). Methods: A two-stage observational study was conducted. The SST was first cross-culturally adapted to Spanish through double forward and backward translation, and then validated for its psychometric characteristics. Participants (n = 66) with several shoulder disorders completed the SST-Sp, DASH, VAS and SF-12. The full sample was used to determine factor structure, internal consistency and concurrent criterion validity. Reliability was determined at 24–48 h in a subsample of 21 patients. Results: The SST-Sp showed three factors that explained 56.1% of the variance; internal consistency for the three factors was α = 0.738, 0.723 and 0.667, and reliability was ICC = 0.687–0.944. The three-dimensional factor structure supported construct validity. Criterion validity, determined from the relationship between the SST-Sp and DASH, was strong (r = −0.73; p < 0.001) and fair for VAS (r = −0.537; p < 0.001). Relationships between the SST-Sp and SF-12 were weak for both the physical (r = −0.47; p < 0.001) and mental (r = −0.43; p < 0.001) dimensions. Conclusions: The SST-Sp is a valid shoulder outcome measure with psychometric properties similar to those of the original English version.
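As a reminder of how the internal-consistency figures above are computed, Cronbach's alpha compares the sum of the item variances with the variance of the total score. The sketch below uses made-up item responses driven by a single latent trait; the sample size matches the study but the items are synthetic.

```python
import numpy as np

def cronbach_alpha(X):
    """X: (n_subjects, n_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(size=66)                            # one latent trait
items = ability[:, None] + rng.normal(size=(66, 4))      # 4 noisy items
print(f"alpha = {cronbach_alpha(items):.3f}")
```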
Abstract:
Background: The Palliative Care Problem Severity Score is a clinician-rated tool to assess problem severity in four palliative care domains (pain, other symptoms, psychological/spiritual, family/carer problems) using a 4-point categorical scale (absent, mild, moderate, severe). Aim: To test the reliability and acceptability of the Palliative Care Problem Severity Score. Design: Multi-centre, cross-sectional study involving pairs of clinicians independently rating problem severity using the tool. Setting/participants: Clinicians from 10 Australian palliative care services: 9 inpatient units and 1 mixed inpatient/community-based service. Results: A total of 102 clinicians participated, with almost 600 paired assessments completed for each domain, involving 420 patients. Overall, 91% of paired assessments were undertaken within 2 h. Strength of agreement for three of the four domains was moderate: pain (Kappa = 0.42, 95% confidence interval = 0.36 to 0.49); psychological/spiritual (Kappa = 0.48, 95% confidence interval = 0.42 to 0.54); family/carer (Kappa = 0.45, 95% confidence interval = 0.40 to 0.52). Strength of agreement for the remaining domain (other symptoms) was fair (Kappa = 0.38, 95% confidence interval = 0.32 to 0.45). Conclusion: The Palliative Care Problem Severity Score is an acceptable measure, with moderate reliability across three domains. Variability in inter-rater reliability across sites and participant feedback indicate that ongoing education is required to ensure that clinicians understand the purpose of the tool and each of its domains. Raters familiar with the patient they were assessing found it easier to assign problem severity, but this did not improve inter-rater reliability.
Abstract:
Objectives: Funding for early career researchers in Australia's largest medical research funding scheme is determined by a competitive peer-review process using a panel of four reviewers. The purpose of this experiment was to appraise the reliability of funding decisions by duplicating applications across separate grant review panels. Study Design and Methods: Sixty duplicate applications were considered by two independent grant review panels awarding funding for Australia's National Health and Medical Research Council. Panel members were blinded to which applications were included in the experiment and to whether an application was the original or the duplicate. Scores were compared across panels using Bland–Altman plots to determine measures of agreement, including whether disagreement would have affected actual funding. Results: Both panels funded 23% of the applications and both declined 60%, giving an overall agreement of 83% [95% confidence interval (CI): 73%, 92%]. The chance-adjusted agreement was 0.75 (95% CI: 0.58, 0.92). Conclusion: Agreement was comparatively high relative to other types of funding schemes. Further experimental research could determine whether this higher agreement is due to the nature of the applications, the composition of the assessment panels, or the characteristics of the applicants.
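The Bland–Altman analysis mentioned above reduces, for each pair of panels, to the mean score difference (bias) and its 95% limits of agreement. The sketch below shows the computation on synthetic paired scores, since the study's raw scores are not given in the abstract.

```python
import numpy as np

def bland_altman(scores_a, scores_b):
    """Bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(4)
panel_a = rng.normal(5.0, 1.0, size=60)            # 60 duplicated applications
panel_b = panel_a + rng.normal(0.1, 0.5, size=60)  # second panel's scores
bias, (lo, hi) = bland_altman(panel_a, panel_b)
print(f"bias = {bias:.2f}, limits of agreement = ({lo:.2f}, {hi:.2f})")
```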
Abstract:
The cum $\sqrt[3]{f}$ rule [Singh (1975)] has been suggested in the literature for finding approximately optimum strata boundaries for proportional allocation, when the stratification is done on the study variable. This paper shows that for the class of density functions arising from the Wang and Aggarwal (1984) representation of the Lorenz curve (or DBV curves in the case of inventory theory), the cum $\sqrt[3]{f}$ rule, instead of giving approximately optimum strata boundaries, yields exactly optimum boundaries. It is also shown that the conjecture of Mahalanobis (1952) that "an optimum or nearly optimum solution will be obtained when the expected contribution of each stratum to the total aggregate value of Y is made equal for all strata" yields exactly optimum strata boundaries for the case considered in the paper.
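A minimal numerical sketch of the cumulative-root rule, reading the rule above as cum $f^{1/3}$ for proportional allocation (this cube-root reading is an assumption recovered from the garbled source; root = 2 would give the classical Dalenius–Hodges cum $\sqrt{f}$ rule for Neyman allocation). The histogram approximation, the sample and the number of strata are illustrative.

```python
import numpy as np

def cum_root_boundaries(x, n_strata, root=3):
    """Strata boundaries that cut the cumulative root-transformed
    frequency, cum f**(1/root), into n_strata equal parts."""
    f, edges = np.histogram(x, bins=50)
    g = np.cumsum(f ** (1.0 / root))
    targets = g[-1] * np.arange(1, n_strata) / n_strata
    return edges[np.searchsorted(g, targets) + 1]

rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # a skewed study variable
print("boundaries:", np.round(cum_root_boundaries(x, 4), 3))
```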
Abstract:
Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (such as evaporation losses) exactly as the standard linear programming formulation does. The second extends the first to enable designing for less than maximum reliability, even when the allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, more flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
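For context, the standard sequent-peak procedure that the two algorithms improve upon can be stated in a few lines: track the running storage deficit and take its maximum as the required capacity. The sketch below deliberately omits exactly what the paper adds (storage-dependent losses and sub-maximal reliability); the inflow and demand series are made up.

```python
def sequent_peak(inflow, demand):
    """Required reservoir capacity: K_t = max(0, K_{t-1} + demand_t - inflow_t),
    capacity = max_t K_t."""
    k = cap = 0.0
    for q, d in zip(inflow, demand):
        k = max(0.0, k + d - q)
        cap = max(cap, k)
    return cap

inflow = [5, 7, 8, 4, 3, 2, 1, 3, 6, 8, 9, 7]   # e.g. monthly inflows
demand = [5] * 12                                # constant monthly draft
print("required capacity:", sequent_peak(inflow, demand))   # -> 12.0
```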
Abstract:
For a multiarmed bandit problem with exponential discounting, the optimal allocation rule is defined by a dynamic allocation index computed for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting both the uncertainty about the prospects of obtaining rewards from the arm and the possibility of resolving that uncertainty by selecting the arm. The learning component of the index is thus defined as the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the Bernoulli and normal cases, and similar results are obtained in other cases.
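In the standard formulation (the notation here is mine, not the paper's), the dynamic allocation index of arm $i$ in state $x$, for discount factor $\beta \in (0,1)$, rewards $R_i(t)$ and stopping times $\tau \ge 1$, together with the learning component described above, is

$$
\nu_i(x) \;=\; \sup_{\tau \ge 1}\,
\frac{\mathbb{E}\!\left[\sum_{t=0}^{\tau-1} \beta^{t} R_i(t) \,\middle|\, x\right]}
     {\mathbb{E}\!\left[\sum_{t=0}^{\tau-1} \beta^{t} \,\middle|\, x\right]},
\qquad
L_i(x) \;=\; \nu_i(x) - \mathbb{E}\!\left[R_i(0) \mid x\right].
$$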
Abstract:
Suppose two treatments with binary responses are available for patients with some disease, and each patient will receive one of the two treatments. In this paper we consider the interests of patients both within and outside a trial using a Bayesian bandit approach, and conclude that equal allocation is not appropriate for either group of patients. It is suggested that Gittins indices should be used if the disease is rare (via an approach called dynamic discounting, in which the discount rate is chosen according to the number of future patients in the trial), and the least-failures rule if the disease is common. Some analytical and simulation results are provided.
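A sketch of how a Gittins index can actually be computed for a binary-response (Bernoulli) arm with a Beta posterior: calibrate against a "safe" arm of known rate λ; the index is the λ at which one is indifferent. The discount factor, horizon truncation and tolerance are assumptions. Note how the uniform Beta(1,1) arm gets a higher index than the Beta(10,10) arm with the same mean, as the learning-component argument above predicts.

```python
import numpy as np

def value(a0, b0, lam, beta, horizon):
    """Value of optimally playing an unknown Bernoulli arm (Beta(a0, b0)
    posterior) against the option of retiring to a safe arm paying lam."""
    retire = lam / (1.0 - beta)
    V = np.full(horizon + 1, retire)          # crude continuation value at horizon
    for d in range(horizon - 1, -1, -1):      # backward induction over pull counts
        s = np.arange(d + 1)                  # successes so far at depth d
        p = (a0 + s) / (a0 + b0 + d)          # posterior mean success probability
        cont = p * (1.0 + beta * V[1:d + 2]) + (1.0 - p) * beta * V[0:d + 1]
        V = np.maximum(retire, cont)
    return float(V[0])

def gittins_bernoulli(a, b, beta=0.9, horizon=150, tol=1e-4):
    """Binary-search the indifference rate lam: that rate is the Gittins index."""
    lo, hi = a / (a + b), 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value(a, b, lam, beta, horizon) > lam / (1.0 - beta) + 1e-9:
            lo = lam                          # experimenting still pays: index > lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

# Same posterior mean (0.5), different uncertainty:
print("Beta(1,1):  ", round(gittins_bernoulli(1, 1), 3))
print("Beta(10,10):", round(gittins_bernoulli(10, 10), 3))
```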
Abstract:
We explore the use of Gittins indices to search for near-optimality in sequential clinical trials. Adaptive allocation rules are proposed to achieve two objectives as far as possible: (i) to reduce the expected number of successes lost, and (ii) to minimize the error probability at the end of the trial. Simulation results indicate the merits of the rules based on Gittins indices for small trial sizes. The rules are generalized to the case where neither of the response densities is known. Asymptotic optimality is derived for the constrained rules. A simple allocation rule is recommended for one-stage models; simulation results indicate that it outperforms both equal allocation and Bather's randomized allocation. We conclude with a discussion of possible further developments.
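A toy simulation of objective (i), the expected successes lost: it compares equal allocation with an adaptive rule over repeated two-armed Bernoulli trials. Thompson sampling is used here purely as an easy stand-in for an index policy (it is not the paper's rule), and the success rates, trial size and replication count are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = (0.6, 0.4)     # hypothetical success rates of the two treatments
N, reps = 100, 1000     # patients per trial, Monte Carlo replications

def run_trial(adaptive):
    succ, fail, total = np.zeros(2), np.zeros(2), 0
    for t in range(N):
        if adaptive:                      # Thompson sampling as an index stand-in
            arm = int(np.argmax(rng.beta(1 + succ, 1 + fail)))
        else:                             # equal allocation: alternate arms
            arm = t % 2
        r = int(rng.random() < p_true[arm])
        succ[arm] += r
        fail[arm] += 1 - r
        total += r
    return total

for name, flag in (("equal", False), ("adaptive", True)):
    mean_succ = np.mean([run_trial(flag) for _ in range(reps)])
    lost = p_true[0] * N - mean_succ      # vs always giving the better treatment
    print(f"{name:8s}: {mean_succ:5.1f} successes on average ({lost:4.1f} lost)")
```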
Abstract:
This thesis examines an alternative approach, referred to as the unitary taxation approach to the allocation of profit, which arises from the notion that since a multinational group exists as a single economic entity, it should be taxed as one taxable unit. The plausibility of a unitary taxation regime achieving international acceptance and agreement is highly contestable, owing to implementation issues and questions of economic and political feasibility. Using a case-study approach focusing on Freeport-McMoRan's and Rio Tinto's mining operations in Indonesia, the thesis compares both tax regimes against the criteria for a good tax system: equity, efficiency, neutrality and simplicity. It then evaluates key issues that arise when implementing a unitary taxation approach with formulary apportionment in the context of multinational mining firms in Indonesia.
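For readers unfamiliar with the mechanics, a commonly cited three-factor version of formulary apportionment (an illustration only; the thesis is not committed to particular factors or weights) allocates group profit $\Pi$ to jurisdiction $i$ by its weighted shares of group sales $S$, payroll $P$ and assets $A$:

$$
\pi_i \;=\; \Pi\left( w_S\,\frac{S_i}{S} + w_P\,\frac{P_i}{P} + w_A\,\frac{A_i}{A} \right),
\qquad w_S + w_P + w_A = 1 .
$$

Equal weights $w_S = w_P = w_A = \tfrac{1}{3}$ give the classic three-factor "Massachusetts" formula.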
Abstract:
In recent years many sorghum producers in the more marginal (<600 mm annual rainfall) cropping areas of Qld and northern NSW have utilised skip row configurations in an attempt to improve yield reliability and reduce sorghum production risk. But will this work in the long run? What are the trade-offs between productivity and risk of crop failure? This paper describes a modelling and simulation approach to study the long-term effects of skip row configurations. Detailed measurements of light interception and water extraction from sorghum crops grown in solid, single and double skip row configurations were collected from three on-farm participatory research trials established in southern Qld and northern NSW. These measurements resulted in changes to the model that accounted for the elliptical water uptake pattern below the crop row and reduced total light interception associated with the leaf area reduction of the skip configuration. Following validation of the model, long-term simulation runs using historical weather data were used to determine the value of skip row sorghum production as a means of maintaining yield reliability in the dryland cropping regions of southern Qld and northern NSW.
Abstract:
Reliability of feed grain supply has become a high-priority issue for industry in the northern region. Expansion by major intensive livestock and industrial users of grain, combined with high inter-annual variability in seasonal conditions, has generated concern in the industry about reliability of supply. This paper reports on a modelling study undertaken to analyse the reliability of feed grain supply in the northern region. Feed grain demand was calculated for the major industries (cattle feedlots, pigs, poultry, dairy) based on their current size and rate of grain usage. Current demand was estimated at 2.8 Mt. Allowing for the development of new industrial users (ethanol) and projecting the current growth rate of the various intensive livestock industries, demand was estimated to grow to 3.6 Mt in three years' time. Feed grain supply was estimated using shire-scale yield prediction models for wheat and sorghum that had been calibrated against recent ABS production data. Other crops that contribute to a lesser extent to the total feed grain pool (barley, maize) were included by considering their production relative to the major winter and summer grains, with estimates based on available production records. This modelling approach allowed simulation of a 101-year yield time series showing the extent of the impact of inter-annual climate variability on yield levels. Production estimates were developed from this yield time series by including planted crop area, with area data obtained from ABS and ABARE records. Total production was adjusted to exclude exports and end uses other than feed grain (flour, malt, etc.). The median feed grain supply for an average planted area was about 3.1 Mt, but this varied greatly from year to year depending on seasonal conditions and area planted. These estimates indicate that supply would not meet current demand in about 30% of years if a median crop area were planted. Two thirds of the years with a supply shortfall were El Niño years. The proportion of shortfall years halved (to about 15%) if the area planted increased to that associated with the best 10% of years. Should demand grow as projected in this study, there would be few years in which it could be met if a median crop area were planted; even with plantings similar to the best 10% of years, there would still be a shortfall in nearly 50% of all years (and 80% of El Niño years). The implications of these results for supply/demand risk management and for investment in research and development are briefly discussed.
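The supply/demand comparison reduces to a simple frequency calculation once the production series exists. The sketch below substitutes a synthetic lognormal series for the shire-scale model output; the 2.8 Mt and 3.6 Mt demand figures and the 3.1 Mt median supply come from the text above, while the variability parameter and everything else are made up, so the printed percentages will not reproduce the study's.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 101
# Synthetic stand-in for the simulated 101-year production series (Mt),
# scaled so its median matches the reported 3.1 Mt for a median planted area.
supply = rng.lognormal(mean=0.0, sigma=0.35, size=years)
supply *= 3.1 / np.median(supply)
for demand in (2.8, 3.6):   # current and projected demand (Mt)
    print(f"demand {demand} Mt: shortfall in {np.mean(supply < demand):.0%} of years")
```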