981 results for Sequential error ratio


Relevance: 20.00%

Abstract:

New classification criteria for axial spondyloarthritis have been developed with the goal of increasing the sensitivity of criteria for early inflammatory spondyloarthritis. However, these criteria substantially increase the heterogeneity of the resulting disease group, reducing their value in both research and clinical settings. Further research is required to establish criteria based on better knowledge of the natural history of non-radiographic axial spondyloarthritis, its aetiopathogenesis and its response to treatment. In the meantime, the modified New York criteria for ankylosing spondylitis remain a very useful classification criteria set, defining a relatively homogeneous group of cases for clinical use and research studies.

Relevance: 20.00%

Abstract:

Methacrylate-based hydrogels, such as homo- and copolymers of 2-hydroxyethyl methacrylate (HEMA), have demonstrated significant potential for use in biomedical applications. However, many of these hydrogels tend to resist cell attachment and growth at their surfaces, which can be detrimental for certain applications. In this article, glycidyl methacrylate (GMA) was copolymerized with HEMA to generate gels functionalized with epoxide groups. The epoxides were then functionalized by two sequential click reactions: nucleophilic ring opening of the epoxides with sodium azide, followed by coupling of small molecules and peptides via Huisgen's copper-catalyzed 1,3-dipolar cycloaddition of azides with alkynes. Using this strategy, it was possible to control the degree of functionalization by controlling the feed ratio of the monomers during polymerization. In vitro culture of the human retinal pigment epithelial cell line ARPE-19 on the hydrogels showed improved cell adhesion, growth and proliferation for hydrogels functionalized with a peptide containing the RGD sequence. In addition, cell attachment progressively decreased with increasing densities of the RGD-containing peptide. In summary, a facile methodology has been presented that gives rise to hydrogels with controlled degrees of functionality, such that the cell response is directly related to the levels and nature of that functionality.

Relevance: 20.00%

Abstract:

Background: Haemodialysis nurses form long-term relationships with patients in a technologically complex work environment. Previous studies have highlighted that haemodialysis nurses face stressors related to the nature of their work and their work environments, leading to reported high levels of burnout. Using Kanter's (1997) Structural Empowerment Theory as a guiding framework, the aim of this study was to explore the factors contributing to satisfaction with the work environment, job satisfaction, job stress and burnout in haemodialysis nurses. Methods: Using a sequential mixed-methods design, the first phase involved an online survey comprising demographic and work characteristics, the Brisbane Practice Environment Measure (B-PEM), the Index of Work Satisfaction (IWS), the Nursing Stress Scale (NSS) and the Maslach Burnout Inventory (MBI). The second phase involved eight semi-structured interviews, with the data analyzed thematically. Results: Of the 417 nurses surveyed, the majority were female (90.9%) and aged over 41 years (74.3%), and 47.4% had worked in haemodialysis for more than 10 years. Overall, the work environment was perceived positively and there was a moderate level of job satisfaction. However, levels of stress and emotional exhaustion (burnout) were high. Two themes, ability to care and feeling successful as a nurse, provided clarity to the level of job satisfaction found in the first phase. Two further themes, patients as quasi-family and intense working teams, explained why working as a haemodialysis nurse is both satisfying and stressful. Conclusions: Nurse managers can use these results to identify issues being experienced by haemodialysis nurses working in the units they supervise.

Relevance: 20.00%

Abstract:

Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) at light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness of these cells observed during ageing in rodent models.

Relevance: 20.00%

Abstract:

We consider estimating the total pollutant load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset is from the Burdekin River and consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully dataset, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
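
As a rough illustration of the rating-curve step described above, the following sketch fits a log-log regression of concentration on flow with a hydrograph-phase indicator, then integrates the predictions over the full flow record to estimate load. The model form, simulated data and variable names are assumptions made only for this example (the discounted-flow and first-flush predictors are omitted); this is not the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: a full flow record, with concentration observed only at
# a sparse subset of time steps (the typical monitoring situation).
rng = np.random.default_rng(0)
flow = np.exp(rng.normal(3.0, 1.0, size=1000))        # flow at every step
sampled = rng.choice(1000, size=80, replace=False)    # sparse sampling times
rising = np.r_[True, np.diff(flow) > 0]               # hydrograph phase flag

# Simulated "true" relationship, used here only to generate example data:
# log C = 0.5 + 0.4 log Q + 0.3 * rising + noise
conc = np.exp(0.5 + 0.4 * np.log(flow) + 0.3 * rising +
              rng.normal(0, 0.2, 1000))

# Fit the rating curve on the sampled points only.
X = sm.add_constant(np.column_stack([np.log(flow[sampled]), rising[sampled]]))
fit = sm.OLS(np.log(conc[sampled]), X).fit()

# Predict concentration over the whole record; the exp() back-transform uses
# the normal-theory bias correction exp(sigma^2 / 2).
Xall = sm.add_constant(np.column_stack([np.log(flow), rising]))
pred_conc = np.exp(fit.predict(Xall)) * np.exp(fit.mse_resid / 2)

# Load = sum of concentration * flow over the record (unit conversion omitted).
load = np.sum(pred_conc * flow)
print(fit.params, load)
```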

Relevance: 20.00%

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability for the treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences among the designs.
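
For context, the frequentist operating characteristics that such designs aim to control can be computed exactly for a Simon-type two-stage design. The sketch below is the standard frequentist calculation (not the article's Bayesian variant), and the design parameters and response rates are chosen only for illustration.

```python
from scipy.stats import binom

def two_stage_errors(r1, n1, r, n, p0, p1):
    """Type I/II errors of a two-stage design: stop and reject the drug if
    stage-1 responses <= r1 out of n1; otherwise enroll up to n patients in
    total and declare the drug promising if total responses exceed r."""
    def accept_prob(p):
        # P(declare promising | true response rate p): sum over stage-1
        # outcomes that continue, times the stage-2 tail probability.
        return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                   for x1 in range(r1 + 1, n1 + 1))
    return accept_prob(p0), 1.0 - accept_prob(p1)

# Illustrative parameters (p0 = uninteresting response rate, p1 = target rate).
alpha, beta = two_stage_errors(r1=1, n1=10, r=5, n=29, p0=0.10, p1=0.30)
print(f"type I error = {alpha:.3f}, type II error = {beta:.3f}")
```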

Relevance: 20.00%

Abstract:

Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible, for given error rates. The number of patients tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived via Markov decision processes, to minimize the number of patients required. The minimization is subject to constraints on the two types of error probabilities (false positive and false negative), with the Lagrangian multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
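
The Lagrangian idea can be sketched with a small backward induction: fold the two error costs into a single Bayes-risk objective and solve for accept/reject/continue decisions over the (patients, successes) state space. Everything below (the 50/50 prior, the state cap, the cost values) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def optimal_sequential(nmax, p0, p1, lam_fp, lam_fn):
    """Backward induction for sequentially testing one agent that is either
    inactive (success rate p0) or active (rate p1), with early acceptance and
    rejection allowed and a cap of nmax patients.

    Minimizes E[patients] + lam_fp * P(accept | inactive)
                          + lam_fn * P(reject | active)
    under a 50/50 prior over the two hypotheses.  Action codes:
    0 = reject agent, 1 = accept agent, 2 = continue testing."""
    V, act = {}, {}
    for n in range(nmax, -1, -1):
        for s in range(n + 1):
            # posterior probability the agent is active, given s of n successes
            w1 = 0.5 * p1 ** s * (1 - p1) ** (n - s)
            w0 = 0.5 * p0 ** s * (1 - p0) ** (n - s)
            post1 = w1 / (w0 + w1)
            choices = [lam_fn * post1,          # reject: false-negative risk
                       lam_fp * (1 - post1)]    # accept: false-positive risk
            if n < nmax:                        # continue: pay one patient
                ps = post1 * p1 + (1 - post1) * p0
                choices.append(1.0 + ps * V[(n + 1, s + 1)]
                                   + (1 - ps) * V[(n + 1, s)])
            act[(n, s)] = int(np.argmin(choices))
            V[(n, s)] = min(choices)
    return V, act

V, act = optimal_sequential(nmax=30, p0=0.1, p1=0.3, lam_fp=200.0, lam_fn=200.0)
print("minimal expected cost from the start:", round(V[(0, 0)], 2))
```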

Relevance: 20.00%

Abstract:

Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a target function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to the one-dimensional Burgers' equation, where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation-diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via numerical examples. Comparisons of some of the results with those from the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
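
The two-step structure described above can be summarized in textbook reproducing-kernel notation; the equations below are a generic sketch of that structure, not the paper's exact formulation.

```latex
% Step 1: NURBS approximation u_N of the target u.  Step 2: reproduce the
% nodal errors e_i = u(x_i) - u_N(x_i) with a corrective basis \psi_i built
% to satisfy the m-th order polynomial reproduction condition
\[
  \sum_{i} \psi_i(x)\, x_i^{k} \;=\; x^{k}, \qquad k = 0, 1, \ldots, m,
\]
% giving the corrected approximation
\[
  u^{h}(x) \;=\; u_N(x) \;+\; \sum_{i} \psi_i(x)\, e_i .
\]
```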

Relevance: 20.00%

Abstract:

Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gain from the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain, or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain instead, and the resulting optimal one-stage design becomes twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
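
One way to make the rate-of-gain argument concrete is the renewal-reward view sketched below; the symbols and the time decomposition are illustrative assumptions, not Stallard's notation.

```latex
% If each phase II trial yields expected gain E[G], occupies expected time
% t_2, and forwards its treatment to a phase III study of duration t_3 with
% probability q, the renewal-reward theorem gives the long-run rate of gain
\[
  R \;=\; \frac{\mathbb{E}[G]}{\,t_2 + q\, t_3\,}.
\]
% A design that forwards many (mostly ineffective) treatments inflates q and
% hence the denominator, so it can have a lower rate R even while maximizing
% the per-trial gain E[G].
```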

Relevance: 20.00%

Abstract:

Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process that continues until a promising treatment is identified. This formulation is considered more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. It also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate this approach using a study in soft tissue sarcoma.

Relevance: 20.00%

Abstract:

Suppose two treatments with binary responses are available for patients with some disease, and each patient will receive one of the two treatments. In this paper, we consider the interests of patients both within and outside a trial using a Bayesian bandit approach, and conclude that equal allocation is not appropriate for either group of patients. It is suggested that Gittins indices should be used (via an approach called dynamic discounting, in which the discount rate is chosen based on the number of future patients in the trial) if the disease is rare, and the least-failures rule if the disease is common. Some analytical and simulation results are provided.
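
As a concrete reading of the least-failures rule (our interpretation: assign each new patient to the treatment with fewer observed failures so far, breaking ties at random), the following sketch simulates a two-arm trial; the success probabilities and trial size are arbitrary.

```python
import random

def least_failures_trial(p=(0.6, 0.4), n_patients=200, seed=1):
    """Simulate the least-failures allocation rule for two binary treatments.

    Each new patient receives the treatment with fewer failures so far
    (ties broken at random); returns allocation and success counts per arm."""
    rng = random.Random(seed)
    failures, allocated, successes = [0, 0], [0, 0], [0, 0]
    for _ in range(n_patients):
        if failures[0] == failures[1]:
            arm = rng.randrange(2)                       # tie: randomize
        else:
            arm = 0 if failures[0] < failures[1] else 1  # fewer failures wins
        allocated[arm] += 1
        if rng.random() < p[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return allocated, successes

print(least_failures_trial())
```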

Relevance: 20.00%

Abstract:

An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method for training a pattern classifier using unlabeled samples, is described. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision-making and estimation, and uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated recursively, using the gradient. The induced possibility of occurrence of each class is useful for estimation and is computed using (1) the membership of the new sample in that class and (2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure, defined in terms of the induced possibility distribution, measures the extent of learning. The method is illustrated with relevant examples.
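
The update cycle described above might look roughly like the sketch below: an exponential membership per class, a gradient-style recursive update of the class parameters, and a running blend for the induced possibility. The specific parameterization, learning rate and mixing weight are illustrative assumptions, not the paper's equations.

```python
import numpy as np

class FuzzyAdaptiveClassifier:
    """Illustrative unsupervised learner with exponential memberships.

    Membership of sample x in class j is exp(-||x - m_j||^2 / (2 s^2));
    class means are nudged toward each new sample in proportion to its
    membership (a gradient-descent-style step on a fuzzy clustering loss),
    and a running 'possibility of occurrence' per class is updated
    recursively from the new membership and its previous average."""

    def __init__(self, means, sigma=1.0, lr=0.1, mix=0.05):
        self.m = np.asarray(means, dtype=float)   # one row per class
        self.sigma, self.lr, self.mix = sigma, lr, mix
        self.possibility = np.full(len(self.m), 1.0 / len(self.m))

    def membership(self, x):
        d2 = np.sum((self.m - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def update(self, x):
        u = self.membership(x)
        # recursive parameter estimate: move each mean toward x, weighted by u
        self.m += self.lr * u[:, None] * (x - self.m)
        # induced possibility: blend new membership with the running average
        self.possibility = (1 - self.mix) * self.possibility + self.mix * u
        return u

rng = np.random.default_rng(0)
clf = FuzzyAdaptiveClassifier(means=[[0.0, 0.0], [3.0, 3.0]])
for x in rng.normal(loc=[3, 3], scale=0.5, size=(100, 2)):
    clf.update(x)
print(clf.m, clf.possibility)
```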

Relevance: 20.00%

Abstract:

We explore the use of Gittins indices to search for near-optimality in sequential clinical trials. Some adaptive allocation rules are proposed to achieve, as far as possible, the following two objectives: (i) to reduce the expected number of successes lost, and (ii) to minimize the error probability at the end of the trial. Simulation results indicate the merits of the rules based on Gittins indices for small trial sizes. The rules are generalized to the case in which neither of the response densities is known. Asymptotic optimality is derived for the constrained rules. A simple allocation rule is recommended for one-stage models; the simulation results indicate that it works better than both equal allocation and Bather's randomized allocation. We conclude with a discussion of possible further developments.
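
For readers unfamiliar with the index, the Gittins index of a Bernoulli arm with a Beta posterior can be computed by Gittins' retirement-option calibration: bisect on a retirement reward until playing the arm and retiring are exactly indifferent. The discount factor, truncation depth and prior below are arbitrary choices for illustration; they are not the paper's choices (its dynamic discounting would instead tie the discount rate to the number of future patients).

```python
from functools import lru_cache

GAMMA = 0.95   # discount factor (illustrative)
DEPTH = 60     # truncation depth for the backward recursion

def retire(lam):
    # value of retiring now for a perpetuity paying lam per step
    return lam / (1.0 - GAMMA)

@lru_cache(maxsize=None)
def option_value(a, b, lam, depth=DEPTH):
    """Value of a Bernoulli arm with Beta(a, b) posterior when one may
    retire at any time for lam per step (recursion truncated at `depth`)."""
    if depth == 0:
        return retire(lam)
    p = a / (a + b)                      # posterior mean success probability
    play = p * (1.0 + GAMMA * option_value(a + 1, b, lam, depth - 1)) \
         + (1.0 - p) * GAMMA * option_value(a, b + 1, lam, depth - 1)
    return max(retire(lam), play)

def gittins_index(a, b, tol=1e-4):
    """Bisect on lam until playing the arm is exactly marginal."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        if option_value(a, b, lam) > retire(lam) + 1e-12:
            lo = lam                     # arm still worth playing
        else:
            hi = lam
    return (lo + hi) / 2.0

print(gittins_index(1, 1))   # index under a uniform Beta(1, 1) prior
```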

Relevance: 20.00%

Abstract:

Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of switching off antennas. However, square CODs for 2^a antennas in a + 1 complex variables with no zero entries were known only for a <= 3 and for a + 1 = 2^k, k >= 4. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called complex partial-orthogonal designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. Then, starting from an NZE CPOD so constructed for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations; for the NZE CPODs presented in the paper, conditions on the signal sets that guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior under a peak power constraint.
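
For background, the PAPR that motivates the no-zero-entry requirement has the standard definition below; the notation is ours, not the paper's.

```latex
% Peak-to-average power ratio of the signal s(t) sent from one antenna:
\[
  \mathrm{PAPR} \;=\; \frac{\max_{t}\, |s(t)|^{2}}{\mathbb{E}\!\left[\,|s(t)|^{2}\,\right]} .
\]
% A zero entry in the design forces an antenna to stay silent in some time
% slots, so the remaining slots must carry more power to keep the same
% average, raising the peak power and hence the PAPR.
```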

Relevance: 20.00%

Abstract:

Purpose: Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Methods: Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Results: Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Conclusions: Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding, as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying children with CI and reduced visual information processing given the potential effect of these conditions on school performance.