36 results for decision analytic model
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
BACKGROUND: Despite vaccines and improved medical intensive care, clinicians must remain vigilant for possible Meningococcal Disease in children. The objective was to establish whether the procalcitonin test is a cost-effective adjunct for detecting prodromal Meningococcal Disease in children presenting at the emergency department with fever without source.
METHODS AND FINDINGS: Data to evaluate the procalcitonin, C-reactive protein and white cell count tests as indicators of Meningococcal Disease were collected from six independent studies identified through a systematic literature search applying PRISMA guidelines. The data comprised 881 children with fever without source in developed countries. The optimal cut-off value for each of the procalcitonin, C-reactive protein and white cell count tests as an indicator of Meningococcal Disease was determined. Summary receiver operating characteristic curve analysis determined the overall diagnostic performance of each test, with 95% confidence intervals. A decision analytic model was designed to reflect realistic clinical pathways for a child presenting with fever without source, comparing two diagnostic strategies: standard testing using combined C-reactive protein and white cell count tests versus standard testing plus the procalcitonin test. The costs of each of the four diagnosis groups (true positive, false negative, true negative and false positive) were assessed from a National Health Service payer perspective. The procalcitonin test was more accurate (sensitivity = 0.89, 95% CI = 0.76–0.96; specificity = 0.74, 95% CI = 0.4–0.92) for early Meningococcal Disease than standard testing alone (sensitivity = 0.47, 95% CI = 0.32–0.62; specificity = 0.8, 95% CI = 0.64–0.9). Decision analytic model outcomes indicated an incremental cost-effectiveness ratio for the base case of −£8,137.25 (−US$13,371.94) per correctly treated patient.
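As a rough illustration of the decision-analytic comparison above, the Python sketch below computes the expected cost and proportion of correctly treated patients for each strategy, then the resulting ICER. The sensitivities and specificities are those quoted in the abstract; the prevalence and per-group costs are hypothetical placeholders, not the study's inputs, so only the sign convention of the ICER is meaningful here.

```python
# Hedged sketch of the decision-analytic comparison described above.
# Sensitivities/specificities are from the abstract; prevalence and
# per-group costs are HYPOTHETICAL placeholders, not the study's inputs.

def expected_outcome(sens, spec, prev, costs):
    """Expected cost per patient and proportion correctly treated."""
    tp = prev * sens              # true positives
    fn = prev * (1 - sens)        # false negatives (missed disease)
    tn = (1 - prev) * spec        # true negatives
    fp = (1 - prev) * (1 - spec)  # false positives
    cost = (tp * costs["TP"] + fn * costs["FN"]
            + tn * costs["TN"] + fp * costs["FP"])
    return cost, tp + tn          # "correctly treated" = TP + TN

prev = 0.15                                                  # hypothetical
costs = {"TP": 5_000, "FN": 30_000, "TN": 100, "FP": 2_000}  # hypothetical GBP

std = expected_outcome(0.47, 0.80, prev, costs)  # CRP + white cell count
pct = expected_outcome(0.89, 0.74, prev, costs)  # plus procalcitonin

# ICER = incremental cost / incremental effect. A negative ICER with a
# positive incremental effect means the new strategy dominates (cheaper
# and more effective), matching the sign of the base case quoted above.
icer = (pct[0] - std[0]) / (pct[1] - std[1])
print(f"ICER = {icer:,.0f} GBP per additional correctly treated patient")
```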
CONCLUSIONS: Procalcitonin plus the standard recommended tests improved the discriminatory ability for fatal Meningococcal Disease and was more cost-effective; procalcitonin was also a superior biomarker in infants. Further research is recommended on point-of-care procalcitonin testing and on Markov modelling to incorporate cost per QALY in a lifetime model.
Abstract:
DESIGN: We will address our research objectives by searching the published and unpublished literature and conducting an evidence synthesis of i) studies of the effectiveness of psychosocial interventions provided for children and adolescents who have suffered maltreatment, ii) economic evaluations of these interventions, and iii) studies of their acceptability to children, adolescents and their carers.
SEARCH STRATEGY: Evidence will be identified via electronic databases for health and allied health literature, social sciences and social welfare, education and other evidence-based repositories, and economic databases. We will identify material generated by user-led, voluntary-sector enquiry by searching the internet and browsing the websites of relevant UK government departments and charities. Additionally, studies will be identified via the bibliographies of retrieved articles/reviews, targeted author searches, and forward citation searching. We will also use our extensive professional networks, our planned consultations with key stakeholders, and our study steering committee. Databases will be searched from inception to the time of search.
REVIEW STRATEGY: Inclusion criteria: 1) Infants, children or adolescents who have experienced maltreatment between the ages of 0 and 17 years. 2) All psychosocial interventions available for maltreated children and adolescents, by any provider and in any setting, aiming to address the sequelae of any form of maltreatment, including fabricated illness. 3) For the synthesis of evidence of effectiveness: all controlled studies in which psychosocial interventions are compared with no-treatment, treatment-as-usual, waitlist or other-treated controls. For the synthesis of evidence of acceptability we will include any design that asks participants for their views or provides data on non-participation. For decision-analytic modelling we may include uncontrolled studies. Primary and secondary outcomes will be confirmed in consultation with stakeholders. Provisional primary outcomes are i) psychological distress/mental health (particularly PTSD, depression and anxiety, self-harm); ii) behaviour; iii) social functioning; iv) cognitive/academic attainment; v) quality of life; and vi) costs. After studies that meet the inclusion criteria have been identified (independently by two reviewers), data will be extracted and risk of bias (RoB) assessed (independently by two reviewers) using the Cochrane Collaboration RoB Tool (effectiveness), quality hierarchies of data sources for economic analyses (cost-effectiveness), and the CASP tool for qualitative research (acceptability). Where interventions are similar and appropriate data are available (or can be obtained), evidence synthesis will be performed to pool the results. Where possible, we will explore the extent to which age, maltreatment history (including whether intra- or extra-familial), time since maltreatment, care setting (family/out-of-home care, including foster care/residential), care history, and characteristics of the intervention (type, setting, provider, duration) moderate the effects of psychosocial interventions. A synthesis of acceptability data will be undertaken using a narrative approach. A decision-analytic model will be constructed to compare the expected cost-effectiveness of the different types of intervention identified in the systematic review. We will also conduct a value of information analysis if the data permit.
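Since the protocol mentions a possible value of information analysis, here is a minimal, hypothetical sketch of a per-person expected value of perfect information (EVPI) computation over simulated net-benefit draws; the number of interventions, means, and spreads are invented for illustration, not review data.

```python
# Minimal sketch of a per-person EVPI calculation of the kind that could
# follow the decision-analytic model described above. The net-benefit
# draws are SIMULATED placeholders, not review data.
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_interventions = 10_000, 3

# nb[i, d]: net monetary benefit of intervention d under parameter draw i
# (in practice these come from probabilistic sensitivity analysis).
nb = rng.normal(loc=[1000.0, 1200.0, 1150.0],
                scale=[400.0, 600.0, 500.0],
                size=(n_draws, n_interventions))

ev_current = nb.mean(axis=0).max()  # pick the best option on average
ev_perfect = nb.max(axis=1).mean()  # pick the best option per draw
evpi = ev_perfect - ev_current      # value of resolving all uncertainty
print(f"Per-person EVPI: {evpi:.1f} (same monetary units as net benefit)")
```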
EXPECTED OUTPUTS: A synthesis of the effectiveness and cost-effectiveness of psychosocial interventions for maltreated children (taking into account age, maltreatment profile and setting) and of their acceptability to key stakeholders.
Abstract:
We present the results of the year-long observational campaign of the type II plateau SN 2005cs, which exploded in the nearby spiral galaxy M51 (the Whirlpool galaxy). This extensive data set makes SN 2005cs the best-observed low-luminosity, Ni-56-poor type II plateau event so far and one of the best-observed core-collapse supernovae ever. The optical and near-infrared spectra show narrow P-Cygni lines characteristic of this SN family, indicative of a very low expansion velocity (about 1000 km s⁻¹) of the ejected material. The optical light curves cover both the plateau phase and the late-time radioactive tail, until about 380 d after core collapse. Numerous unfiltered observations obtained by amateur astronomers give us the rare opportunity to monitor the fast rise to maximum light, lasting about 2 d. In addition to optical observations, we also present near-infrared light curves that (together with already published ultraviolet observations) allow us to construct for the first time a reliable bolometric light curve for an object of this class. Finally, comparing the observed data with those derived from a semi-analytic model, we infer for SN 2005cs a Ni-56 mass of about 3 × 10⁻³ M⊙, a total ejected mass of 8–13 M⊙, and an explosion energy of about 3 × 10⁵⁰ erg.
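For readers unfamiliar with how a radioactive-tail luminosity translates into a Ni-56 mass, the following hedged sketch applies the widely used Nadyozhin (1994) energy-deposition rates under the assumption of full gamma-ray trapping (a reasonable approximation for the massive hydrogen envelope of a type II-P event). The tail luminosity in the example is hypothetical, chosen merely to land near the mass scale quoted above.

```python
# Rough sketch of the standard Ni-56 mass estimate from the radioactive
# tail, assuming full gamma-ray trapping. Uses the Nadyozhin (1994)
# deposition rates; the tail luminosity below is a HYPOTHETICAL input,
# not a measurement from this paper.
import math

def ni56_mass(L_tail_erg_s: float, t_days: float) -> float:
    """Ni-56 mass (solar masses) powering luminosity L at time t after explosion."""
    # erg/s per solar mass of Ni-56 from the Ni-56 -> Co-56 -> Fe-56 chain
    q = (6.45e43 * math.exp(-t_days / 8.8)
         + 1.45e43 * math.exp(-t_days / 111.3))
    return L_tail_erg_s / q

# Example: a tail luminosity of 3e39 erg/s at 300 d after explosion
print(ni56_mass(3e39, 300.0))  # ~3e-3 Msun, the regime quoted above
```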
Abstract:
We present a sample of normal Type Ia supernovae (SNe Ia) from the Nearby Supernova Factory data set with spectrophotometry at sufficiently late phases to estimate the ejected mass using the bolometric light curve. We measure Ni-56 masses from the peak bolometric luminosity, then compare the luminosity in the Co-56-decay tail to the expected rate of radioactive energy release from ejecta of a given mass. We infer the ejected mass in a Bayesian context using a semi-analytic model of the ejecta, incorporating constraints from contemporary numerical models as priors on the density structure and distribution of Ni-56 throughout the ejecta. We find a strong correlation between ejected mass and light-curve decline rate, and consequently Ni-56 mass, with ejected masses in our data ranging from 0.9 to 1.4 M⊙. Most fast-declining (SALT2 x₁ < −1) normal SNe Ia have significantly sub-Chandrasekhar ejected masses in our fiducial analysis.
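The tail-flux comparison rests on gamma-ray trapping declining in homologous expansion. A minimal sketch, assuming the usual (t0/t)² optical-depth scaling (cf. Jeffery 1999) and a hypothetical fiducial timescale t0, is given below; it illustrates the mechanism only, not the paper's full Bayesian model.

```python
# Hedged sketch of the gamma-ray trapping argument behind ejected-mass
# estimates of this kind. The fiducial timescale t0 below is HYPOTHETICAL.
import numpy as np

def gamma_trapping(t_days, t0_days):
    """Fraction of Co-56 gamma-ray energy thermalized at time t.

    t0 is the fiducial gamma-ray transparency timescale; the optical
    depth falls as (t0/t)^2 in homologous expansion.
    """
    return 1.0 - np.exp(-(t0_days / t_days) ** 2)

# For a fixed density profile and kinetic energy E_k, t0 scales roughly as
#   t0 ∝ M_ej * sqrt(kappa_gamma / E_k),
# so fitting t0 to the tail constrains M_ej once E_k is assumed or priored.
for t in (40.0, 60.0, 100.0):
    print(t, gamma_trapping(t, t0_days=35.0))  # t0 = 35 d is hypothetical
```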
Abstract:
Three issues are usually associated with threat-prevention intelligent surveillance systems: first, the fusion and interpretation of large-scale incomplete heterogeneous information; second, the need to effectively predict suspects' intentions and rank the potential threat posed by each suspect; third, strategies for allocating limited security resources (e.g., the dispatch of a security team) to prevent a suspect's further actions against critical assets. In the literature, however, these three issues are seldom considered together in a sensor-network-based intelligent surveillance framework. To address this problem, in this paper we propose a multi-level decision support framework for in-time reaction in intelligent surveillance. More specifically, based on a multi-criteria event modeling framework, we design a method to predict the most plausible intention of a suspect. Following this, a decision support model is proposed to rank suspects by threat severity and to determine resource allocation strategies. Finally, formal properties are discussed to justify our framework.
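Purely to fix ideas, the sketch below shows one shape the two decision-support steps could take: a weighted multi-criteria severity score for ranking suspects, followed by a greedy dispatch of limited teams. The criteria, weights, and the Suspect class are hypothetical illustrations, not the paper's formalism.

```python
# Illustrative sketch of (1) ranking suspects by a multi-criteria threat
# severity score and (2) allocating limited security teams to the
# top-ranked suspects. All criteria, weights and scores are HYPOTHETICAL.
from dataclasses import dataclass

@dataclass
class Suspect:
    name: str
    intention_confidence: float  # plausibility of inferred hostile intent, [0, 1]
    asset_criticality: float     # value of the asset being approached, [0, 1]
    proximity: float             # closeness to the asset, [0, 1]

WEIGHTS = (0.5, 0.3, 0.2)  # hypothetical criterion weights, summing to 1

def severity(s: Suspect) -> float:
    """Weighted-sum multi-criteria threat severity score."""
    w1, w2, w3 = WEIGHTS
    return w1 * s.intention_confidence + w2 * s.asset_criticality + w3 * s.proximity

def dispatch(suspects, n_teams):
    """Greedy allocation: send the available teams to the worst threats."""
    ranked = sorted(suspects, key=severity, reverse=True)
    return ranked[:n_teams]

suspects = [Suspect("A", 0.9, 0.8, 0.4), Suspect("B", 0.3, 0.9, 0.9),
            Suspect("C", 0.7, 0.5, 0.8)]
print([s.name for s in dispatch(suspects, n_teams=2)])  # -> ['A', 'C']
```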
Abstract:
While the repeated nature of Discrete Choice Experiments is advantageous from a sampling-efficiency perspective, patterns of choice may differ across tasks due, in part, to learning and fatigue. Using probabilistic decision process models, we find in a field study that learning and fatigue behaviour may be exhibited by only a small subset of respondents. Most respondents in our sample show the preference and variance stability consistent with rational, pre-existing and well-formed preferences. Nearly all of the remainder exhibit both learning and fatigue effects. An important aspect of our approach is that it enables learning and fatigue effects to be explored even though they were not envisaged during survey design or data collection.
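One common way to operationalize learning and fatigue in choice models is a task-specific scale parameter multiplying utility in a logit model: a scale rising over tasks suggests learning (falling error variance), a declining scale suggests fatigue. The sketch below, with hypothetical utilities V and scales lam, illustrates the mechanism only; it is not the paper's exact specification.

```python
# Sketch of scale heterogeneity across choice tasks in a logit model.
# Utilities and scale values are HYPOTHETICAL illustrations.
import numpy as np

def choice_probs(V, lam):
    """Multinomial-logit probabilities with scale lam for one choice task."""
    z = lam * np.asarray(V, dtype=float)
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

V = [0.2, 0.5, -0.1]             # deterministic utilities (hypothetical)
for task, lam in enumerate([0.5, 1.0, 2.0], start=1):
    print(f"task {task}: lam={lam}  probs={choice_probs(V, lam).round(3)}")
# As lam grows, choices become more deterministic (lower error variance).
```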
Abstract:
OBJECTIVES: Regular use of nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with a reduced risk of esophageal adenocarcinoma. Epidemiological studies examining the association between NSAID use and the risk of the precursor lesion, Barrett’s esophagus, have been inconclusive.
METHODS: We analyzed pooled individual-level participant data from six case-control studies of Barrett’s esophagus in the Barrett’s and Esophageal Adenocarcinoma Consortium (BEACON). We compared medication use from 1474 patients with Barrett’s esophagus separately with two control groups: 2256 population-based controls and 2018 gastroesophageal reflux disease (GERD) controls. Study-specific odds ratios (OR) and 95% confidence intervals (CI) were estimated using multivariable logistic regression models and were combined using a random effects meta-analytic model.
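For concreteness, a minimal sketch of a DerSimonian-Laird random-effects combination of study-specific log odds ratios, the kind of pooling described above, follows. DerSimonian-Laird is one common estimator; the abstract does not state which was used, and the log-ORs and standard errors below are hypothetical, not the BEACON estimates.

```python
# Minimal sketch of random-effects pooling of study-specific log odds
# ratios (DerSimonian-Laird). Inputs are HYPOTHETICAL placeholders.
import numpy as np

def dersimonian_laird(log_or, se):
    """Pooled log-OR and its SE under a DerSimonian-Laird random-effects model."""
    log_or, se = np.asarray(log_or), np.asarray(se)
    w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - mu_fe) ** 2)    # Cochran's Q statistic
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    mu = np.sum(w_re * log_or) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re))

mu, se_mu = dersimonian_laird([0.1, -0.2, 0.05, -0.1], [0.15, 0.2, 0.18, 0.25])
print(f"pooled OR = {np.exp(mu):.2f}, 95% CI = "
      f"({np.exp(mu - 1.96*se_mu):.2f}, {np.exp(mu + 1.96*se_mu):.2f})")
```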
RESULTS: Regular (at least once-weekly) use of any NSAIDs was not associated with the risk of Barrett’s esophagus (vs. population-based controls, adjusted OR = 1.00, 95% CI = 0.76–1.32, I² = 61%; vs. GERD controls, adjusted OR = 0.99, 95% CI = 0.82–1.19, I² = 19%). Similar null findings were observed among individuals who took aspirin or non-aspirin NSAIDs. We also found no association with the highest levels of frequency (at least daily use) and duration (≥5 years) of NSAID use. There was evidence of moderate between-study heterogeneity; however, associations with NSAID use remained non-significant in “leave-one-out” sensitivity analyses.
CONCLUSIONS: Use of NSAIDs was not associated with the risk of Barrett’s esophagus. The previously reported inverse association between NSAID use and esophageal adenocarcinoma may be through reducing the risk of neoplastic progression in patients with Barrett’s esophagus.
Abstract:
We describe a simple theoretical model to investigate the anomalous effects of opacity on spectral line ratios, as previously studied in ions such as Fe XV and Fe XVII. The model developed is general: it is not specific to a particular atomic system, giving it applicability to a number of coronal and chromospheric plasmas; furthermore, it may be applied to a variety of astrophysically relevant geometries. The analysis is underpinned by geometrical arguments, and we outline a technique by which it may be used as a tool for the explicit diagnosis of plasma geometry in distant astrophysical objects.
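As a point of reference, a generic escape-probability treatment, often used as a first pass at opacity effects on line ratios, multiplies each line's optically thin intensity by the mean escape probability p(τ) = (1 − e^(−τ))/τ of a homogeneous slab. The sketch below uses hypothetical optical depths and is not the geometry-dependent model of this paper.

```python
# Illustrative sketch of how opacity can distort a spectral line ratio,
# using the homogeneous-slab mean escape probability. This is a standard
# textbook treatment shown only to fix ideas; the paper's model is richer.
import math

def escape_prob(tau: float) -> float:
    """Mean photon escape probability for line-center optical depth tau."""
    return 1.0 if tau == 0 else (1.0 - math.exp(-tau)) / tau

def observed_ratio(intrinsic_ratio: float, tau1: float, tau2: float) -> float:
    """Optically thin ratio I1/I2, modified by differential photon escape."""
    return intrinsic_ratio * escape_prob(tau1) / escape_prob(tau2)

# A resonance line (tau = 5, hypothetical) against a nearly thin line:
print(observed_ratio(2.0, tau1=5.0, tau2=0.01))  # suppressed below 2.0
```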
Abstract:
Heavy-particle collisions, in particular low-energy ion-atom collisions, are amenable to semiclassical JWKB phase-integral analysis in the complex plane of the internuclear separation. Analytic continuation in this plane requires due attention to the Stokes phenomenon, which parametrizes the physical mechanisms of curve crossing, non-crossing, the hybrid Nikitin model, rotational coupling and predissociation. Complex transition points represent adiabatic degeneracies. In the case of two or more such points, the Stokes constants may only be completely determined by resort to the so-called comparison-equation method involving, in particular, parabolic cylinder functions or Whittaker functions and their strong-coupling asymptotics. In particular, the Nikitin model is a two-transition-point, one-double-pole problem in each half-plane, corresponding to either ingoing or outgoing waves. When the four transition points are closely clustered, new techniques are required to determine the Stokes constants. However, such investigations remain incomplete. A model problem is therefore solved exactly for scattering along a one-dimensional z-axis. The energy eigenvalue is b² − a² and the potential comprises −z²/2 (parabolic) and −(a² + b²)²/2z² (centrifugal/centripetal) components. The square of the wavenumber has, in the complex z-plane, four zeros, each a transition point, at z = ±a ± ib, and a double pole at z = 0. In cases (a) and (b), a and b are real and unitarity obtains. In case (a) the reflection and transmission coefficients are parametrized by exponentials when a² + b² > 1/2. In case (b) they are parametrized by trigonometric functions when a² + b² < 1/2, and total reflection is achievable. In case (c), a and b are complex and in general unitarity is not achieved, owing to loss of flux to a continuum (O'Rourke and Crothers 1992 Proc. R. Soc. A 438 1). Nevertheless, the case (c) coefficients reduce to those of (a) or (b) under appropriate limiting conditions. Setting z = ht, with h a real constant, an attempt is made to model a two-state collision problem described by a pair of coupled first-order impact-parameter equations and an appropriate T̃–τ relation, where T̃ is the Stueckelberg variable and τ is the reduced or scaled time. The attempt fails because T̃ is an odd function of τ, which is unphysical in a real collision problem. However, it is pointed out that by applying the Kummer exponential model to each half-plane (O'Rourke and Crothers 1994 J. Phys. B: At. Mol. Opt. Phys. 27 2497) the current model is in effect extended to a collision problem with four transition points and a double pole in each half-plane. Moreover, the attempt is not a complete failure, since it is shown that the result is a perfect diabatic inelastic collision for a traceless Hamiltonian matrix, or at least when both diagonal elements are odd and the off-diagonal elements are equal and even.
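The reading of the garbled potential adopted above can be checked directly: assuming energy E = b² − a² and V(z) = −z²/2 − (a² + b²)²/(2z²), the squared wavenumber k² = 2(E − V) factorizes so that its zeros sit exactly at z = ±a ± ib, with a double pole at z = 0, as the abstract states.

```latex
% Verification of the reconstructed potential (a reading, not the paper's
% own notation): with E = b^2 - a^2 and
%   V(z) = -\tfrac{z^2}{2} - \tfrac{(a^2+b^2)^2}{2 z^2},
% the local wavenumber k^2(z) = 2\,(E - V(z)) satisfies
\[
  z^2 k^2(z) = z^4 + 2\,(b^2 - a^2)\,z^2 + (a^2 + b^2)^2
             = \bigl(z^2 - (a + \mathrm{i}b)^2\bigr)\bigl(z^2 - (a - \mathrm{i}b)^2\bigr),
\]
% so k^2 has four transition points at z = \pm a \pm \mathrm{i}b and a
% double pole at z = 0, consistent with the abstract.
```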