973 results for "Error in substance"


Relevance:

100.00%

Abstract:

Background: Available studies vary in their estimated prevalence of attention deficit/hyperactivity disorder (ADHD) in substance use disorder (SUD) patients, ranging from 2 to 83%. A better understanding of the possible reasons for this variability and the effect of the change from DSM-IV to DSM-5 is needed. Methods: A two-stage international multi-center, cross-sectional study in 10 countries, among patients from inpatient and outpatient addiction treatment centers for alcohol and/or drug use disorders. A total of 3558 treatment-seeking SUD patients were screened for adult ADHD. A subsample of 1276 subjects, both screen-positive and screen-negative patients, participated in a structured diagnostic interview. Results: The prevalence of adult ADHD varied for DSM-IV from 5.4% (95% CI: 2.4–8.3) in Hungary to 31.3% (95% CI: 25.2–37.5) in Norway, and for DSM-5 from 7.6% (95% CI: 4.1–11.1) in Hungary to 32.6% (95% CI: 26.4–38.8) in Norway. Using the same assessment procedures in all countries and centers resulted in a substantial reduction of the variability in the prevalence of adult ADHD reported in previous studies among SUD patients (2–83% → 5.4–31.3%). The remaining variability was partly explained by primary substance of abuse and by country (Nordic versus non-Nordic countries). Prevalence estimates for DSM-5 were slightly higher than for DSM-IV. Conclusions: Given the generally high prevalence of adult ADHD, all treatment-seeking SUD patients should be screened and, after a confirmed diagnosis, treated for ADHD, since the literature indicates a poor prognosis of SUD in treatment-seeking SUD patients with ADHD.
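
As a concrete illustration of the interval estimates quoted above, a Wald 95% confidence interval for a prevalence proportion can be computed as follows. The sample size n = 225 is back-calculated here purely for illustration and is an assumption, not a figure from the study.

```python
import math

# Hedged sketch: Wald 95% CI for a prevalence estimate p from a sample of n.
def wald_ci(p, n, z=1.96):
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p - z * se, p + z * se

# p = 5.4% with an assumed n = 225 reproduces roughly (0.024, 0.084),
# cf. the 2.4-8.3% interval reported for Hungary under DSM-IV.
print(wald_ci(0.054, 225))
```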

Relevance:

100.00%

Abstract:

Aims To determine comorbidity patterns in treatment-seeking substance use disorder (SUD) patients with and without adult attention deficit hyperactivity disorder (ADHD), with an emphasis on subgroups defined by ADHD subtype, taking into account differences related to gender and primary substance of abuse. Design Data were obtained from the cross-sectional International ADHD in Substance use disorder Prevalence (IASP) study. Setting Forty-seven centres of SUD treatment in 10 countries. Participants A total of 1205 treatment-seeking SUD patients. Measurements Structured diagnostic assessments were used for all disorders: presence of ADHD was assessed with the Conners' Adult ADHD Diagnostic Interview for DSM-IV (CAADID), the presence of antisocial personality disorder (ASPD), major depression (MD) and (hypo)manic episode (HME) was assessed with the Mini International Neuropsychiatric Interview-Plus (MINI Plus), and the presence of borderline personality disorder (BPD) was assessed with the Structured Clinical Interview for DSM-IV Axis II (SCID II). Findings The prevalence of DSM-IV adult ADHD in this SUD sample was 13.9%. ASPD [odds ratio (OR) = 2.8, 95% confidence interval (CI) = 1.8–4.2], BPD (OR = 7.0, 95% CI = 3.1–15.6 for alcohol; OR = 3.4, 95% CI = 1.8–6.4 for drugs), MD in patients with alcohol as primary substance of abuse (OR = 4.1, 95% CI = 2.1–7.8) and HME (OR = 4.3, 95% CI = 2.1–8.7) were all more prevalent in ADHD+ compared with ADHD− patients (P < 0.001). These results also indicate increased levels of BPD and MD for alcohol compared with drugs as primary substance of abuse. Comorbidity patterns differed between ADHD subtypes with increased MD in the inattentive and combined subtype (P < 0.01), increased HME and ASPD in the hyperactive/impulsive (P < 0.01) and combined subtypes (P < 0.001) and increased BPD in all subtypes (P < 0.001) compared with SUD patients without ADHD. Seventy-five per cent of ADHD patients had at least one additional comorbid disorder compared with 37% of SUD patients without ADHD. Conclusions Treatment-seeking substance use disorder patients with attention deficit hyperactivity disorder are at a very high risk for additional externalizing disorders.
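
For readers unfamiliar with the statistics quoted above, here is a hedged sketch of how an odds ratio and its 95% confidence interval are derived from a 2x2 table. The cell counts below are invented for illustration; only the resulting OR is comparable to the study's ASPD estimate.

```python
import math

# Odds ratio with a Wald 95% CI on the log scale, from a 2x2 table.
def odds_ratio_ci(a, b, c, d, z=1.96):
    # a: exposed cases, b: exposed non-cases, c: unexposed cases, d: unexposed non-cases
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts giving OR ~ 2.8, cf. the ASPD estimate (OR = 2.8, CI 1.8-4.2).
print(odds_ratio_ci(60, 108, 120, 600))
```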

Relevance:

100.00%

Abstract:

Background. Although over 39.9% of the adult population aged forty or older in the United States has refractive error, little is known about the etiology of this condition, its associated risk factors, and the mechanisms involved, owing to the paucity of data on changes in refractive error in the adult population over time. Aim. To evaluate risk factors over a long-term, 5-year period in refractive error changes among persons aged 43 or older, by testing the hypothesis that age, gender, systemic diseases, nuclear sclerosis and baseline refractive error are all significantly associated with refractive error changes in patients at a Dallas, Texas private optometric office. Methods. A retrospective chart review of subjective refraction, eye health, and self-reported health history was done on patients at a private optometric office who were 43 or older in 2000 and who had eye examinations in both 2000 and 2005. Aphakic and pseudophakic eyes were excluded, as were eyes with best-corrected Snellen visual acuity of 20/40 or worse. After exclusions, refraction was obtained for 114 right eyes and 114 left eyes. Spherical equivalent (sphere + ½ cylinder) was used as the measure of refractive error. Results. Similar changes in refractive error were observed for the two eyes. The 5-year change in spherical power was in a hyperopic direction for younger age groups and in a myopic direction for older subjects, P < 0.0001. The gender-adjusted mean change in refractive error in right eyes of persons aged 43 to 54, 55 to 64, 65 to 74, and 75 or older at baseline was +0.43 D, +0.46 D, -0.09 D, and -0.23 D, respectively. Refractive change was strongly related to baseline nuclear cataract severity; grades 4 to 5 were associated with a myopic shift (-0.38 D, P < 0.0001). The mean age-adjusted change in refraction was +0.27 D for hyperopic eyes, +0.56 D for emmetropic eyes, and +0.26 D for myopic eyes. Conclusions. This report has documented refractive error changes in an older population and confirmed reported trends of a hyperopic shift before age 65 and a myopic shift thereafter, associated with the development of nuclear cataract.
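
The spherical equivalent measure defined above is a one-line formula; a minimal sketch with invented example values:

```python
# Spherical equivalent (SE) as used above: SE = sphere + cylinder / 2, in diopters.
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    return sphere_d + cylinder_d / 2.0

se_2000 = spherical_equivalent(+0.50, -0.50)  # SE = +0.25 D at baseline
se_2005 = spherical_equivalent(+1.00, -0.50)  # SE = +0.75 D five years later
print(se_2005 - se_2000)                      # +0.50 D: a hyperopic shift
```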

Relevance:

100.00%

Abstract:

In regression analysis, covariate measurement error occurs in many applications, and the error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We present an approach that applies the Monte Carlo method in a Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator applies the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study is presented showing that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
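
A minimal simulation sketch of the problem and of a correction based on the conditional expectation E[x | w]: this uses regression calibration, a simple frequentist analogue of the conditional-expectation idea, not the authors' Bayesian Monte Carlo estimator, and all values are assumptions.

```python
import numpy as np

# Multiple regression with one error-prone covariate: w = x + u is observed
# instead of the latent x. Naive OLS on w attenuates the slope toward zero.
rng = np.random.default_rng(42)
n = 5_000
x = rng.normal(0.0, 1.0, n)        # latent covariate (never observed)
z = rng.normal(0.0, 1.0, n)        # error-free covariate
w = x + rng.normal(0.0, 1.0, n)    # surrogate with large error variance
y = 1.0 + 2.0 * x + 0.5 * z + rng.normal(0.0, 1.0, n)

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

naive = ols(np.column_stack([w, z]), y)   # slope on w biased toward zero

# Regression calibration: replace w by E[x | w] = lam * w, with reliability
# lam = var(x) / (var(x) + var(u)), taken as known (0.5) purely for illustration.
lam = 1.0 / (1.0 + 1.0)
corrected = ols(np.column_stack([lam * w, z]), y)
print(naive[1], corrected[1])             # ~1.0 (attenuated) vs ~2.0 (recovered)
```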

Relevance:

100.00%

Abstract:

In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about error behavior, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev collocation method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the truncation error estimate. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method permits decoupling the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
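
A toy reconstruction of the idea behind τ-estimation on a Chebyshev collocation discretization (my own sketch under stated assumptions, not the thesis implementation): the residual obtained by inserting a higher-order solution into a lower-order discretization approximates the low-order truncation error.

```python
import numpy as np

# Model problem: u'(x) + u(x) = f(x) on [-1, 1], u(-1) given, with a known
# exact solution so the true truncation error is available for comparison.

def cheb(N):
    """Chebyshev differentiation matrix and nodes (Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

u_exact = lambda x: np.exp(np.sin(2 * x))
f = lambda x: 2 * np.cos(2 * x) * np.exp(np.sin(2 * x)) + np.exp(np.sin(2 * x))

def solve(N):
    D, x = cheb(N)
    A, b = D + np.eye(N + 1), f(x)
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = u_exact(-1.0)  # BC at x = -1
    return np.linalg.solve(A, b), x

u_hi, x_hi = solve(32)            # high-order reference solution
P = 8                             # low order whose truncation error we estimate
D_P, x_P = cheb(P)

# Interpolate the high-order solution onto the coarse Chebyshev nodes.
coef = np.polynomial.chebyshev.chebfit(x_hi, u_hi, len(x_hi) - 1)
u_hi_on_P = np.polynomial.chebyshev.chebval(x_P, coef)

# tau estimate: residual of the interpolated fine solution in the coarse operator.
tau_est = D_P @ u_hi_on_P + u_hi_on_P - f(x_P)
# "Exact" truncation error: residual of the true solution in the coarse operator.
tau_ref = D_P @ u_exact(x_P) + u_exact(x_P) - f(x_P)
print(np.max(np.abs(tau_est - tau_ref)))   # tiny: the estimate is spectrally accurate
```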

Relevance:

100.00%

Abstract:

Prevention scientists have called for more research on the factors affecting the implementation of substance use prevention programs. Given the lack of literature in this area, coupled with evidence that children engage in substance use as early as elementary school, the purpose of this study was to identify the factors that influence the implementation of substance use prevention programs in elementary schools. This study used a mixed methods approach comprising a survey and in-person interviews. Sixty-five guidance counselors and teachers completed the survey, and 9 guidance counselors who completed the survey were interviewed individually. Correlation analyses and hierarchical multiple regression were conducted. Quantitative findings revealed that ease of implementation most frequently influenced program implementation, followed by beliefs about the program's effectiveness. Qualitative findings showed curriculum modification as an important theme, as well as difficulty of program implementation. The in-person interviews also shed light on three interrelated themes influencing program implementation: The Wheel, time, and scheduling. Results indicate that the majority of program providers modified the curriculum in some way. Implications for research, policy, and practice are discussed, and areas for future research are suggested.

Relevance:

100.00%

Abstract:

HIV-positive individuals engage in substance use at higher rates than the general population and are more likely to also suffer from concurrent psychiatric disorders and substance use disorders. Despite this, little is known about the unique clinical concerns of HIV-positive individuals entering substance use treatment. This study examined the clinical characteristics of clients (N=1712) entering residential substance use treatment as a function of self-reported HIV status (8.65% HIV-positive). Results showed higher levels of concurrent substance use and psychiatric disorders for HIV-positive individuals, who were also significantly more likely to meet criteria for bipolar disorder and borderline personality disorder. Past diagnoses of depression, posttraumatic stress disorder, and social phobia were also significantly more common. Study findings indicate a need to provide more intensive care for HIV-positive individuals, including resources targeted at concurrent psychiatric problems, to ensure positive treatment outcomes following residential substance use treatment discharge.

Relevance:

90.00%

Abstract:

Purpose – The purpose of this paper is to examine the use of bid information, including both price and non-price factors, in predicting the bidder's performance. Design/methodology/approach – The practice of the industry was first reviewed. Data on bid evaluation and performance records of the successful bids were then obtained from the Hong Kong Housing Department, the largest housing provider in Hong Kong. This was followed by the development of a radial basis function (RBF) neural network based performance prediction model. Findings – It is found that public clients are more conscientious and include non-price factors in their bid evaluation equations. The input variables used are information available at the time of the bid, and the output variable is the project performance score achieved by the successful bidder, recorded while work is in progress. Past project performance score was found to be the most sensitive input variable in predicting future performance. Research limitations/implications – The paper shows the inadequacy of using price alone as the bid award criterion. The need for systematic performance evaluation is also highlighted, as this information is highly instrumental for subsequent bid evaluations. The caveat for this study is that the prediction model was developed from data obtained from a single source. Originality/value – The value of the paper lies in the use of an RBF neural network as the prediction tool, because it can model non-linear functions. This capability avoids tedious "trial and error" in deciding the number of hidden layers to be used in the network model. Keywords: Hong Kong, construction industry, neural nets, modelling, bid offer spreads. Paper type: Research paper.
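
A minimal sketch of an RBF regression network of the kind described: a generic toy with Gaussian basis functions, a linear output layer solved in closed form, and invented features. It is not the paper's model, data, or chosen architecture.

```python
import numpy as np

# Toy radial basis function (RBF) regression network: Gaussian hidden units
# with fixed centers drawn from the data, linear output weights by least squares.
def rbf_design(X, centers, width):
    # phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))  # invented bid features (price ratio, past score, ...)
y = 60 + 25 * X[:, 1] - 10 * X[:, 0] + rng.normal(0.0, 2.0, 200)  # toy performance score

centers = X[rng.choice(len(X), size=20, replace=False)]
Phi = np.hstack([rbf_design(X, centers, width=0.3), np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer: no trial-and-error
                                             # search over hidden-layer counts

x_new = np.array([[0.9, 0.8, 0.5]])          # a hypothetical new bid
phi_new = np.hstack([rbf_design(x_new, centers, 0.3), np.ones((1, 1))])
print(phi_new @ w)                           # predicted performance score
```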

Relevance:

90.00%

Abstract:

Introduction: Some types of antimicrobial-coated central venous catheters (A-CVC) have been shown to be cost-effective in preventing catheter-related bloodstream infection (CR-BSI). However, not all types have been evaluated, and there are concerns over the quality and usefulness of these earlier studies. There is uncertainty amongst clinicians over which, if any, antimicrobial-coated central venous catheters to use. We re-evaluated the cost-effectiveness of all commercially available antimicrobial-coated central venous catheters for prevention of catheter-related bloodstream infection in adult intensive care unit (ICU) patients. Methods: We used a Markov decision model to compare the cost-effectiveness of antimicrobial-coated central venous catheters relative to uncoated catheters. Four catheter types were evaluated: minocycline and rifampicin (MR)-coated catheters; silver, platinum and carbon (SPC)-impregnated catheters; and two chlorhexidine and silver sulfadiazine-coated catheters, one coated on the external surface (CH/SSD (ext)) and the other coated on both surfaces (CH/SSD (int/ext)). The incremental cost per quality-adjusted life-year gained and the expected net monetary benefits were estimated for each. Uncertainty arising from data estimates, data quality and heterogeneity was explored in sensitivity analyses. Results: The baseline analysis, with no consideration of uncertainty, indicated all four types of antimicrobial-coated central venous catheters were cost-saving relative to uncoated catheters. Minocycline and rifampicin-coated catheters prevented 15 infections per 1,000 catheters and generated the greatest health benefits, 1.6 quality-adjusted life-years, and cost-savings, AUD $130,289. After considering uncertainty in the current evidence, the minocycline and rifampicin-coated catheters returned the highest incremental monetary net benefits of $948 per catheter, but there was a 62% probability of error in this conclusion. Although the minocycline and rifampicin-coated catheters had the highest monetary net benefits across multiple scenarios, the decision was always associated with high uncertainty. Conclusions: Current evidence suggests that the cost-effectiveness of using antimicrobial-coated central venous catheters within the ICU is highly uncertain. Policies to prevent catheter-related bloodstream infection amongst ICU patients should consider the cost-effectiveness of competing interventions in the light of this uncertainty. Decision makers would do well to consider the current gaps in knowledge and the complexity of producing good-quality evidence in this area.
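
A minimal probabilistic sketch of the net-monetary-benefit logic described above. All distributions and spreads are assumptions for illustration (the means echo the abstract); the study itself used a full Markov decision model, not this shortcut.

```python
import numpy as np

rng = np.random.default_rng(1)
wtp = 50_000          # assumed willingness-to-pay per QALY (AUD)
n_draws = 10_000      # Monte Carlo draws for the probabilistic analysis

# Assumed uncertain incremental outcomes of coated vs uncoated catheters,
# per 1,000 catheters (spreads invented for illustration):
d_qalys = rng.normal(1.6, 1.0, n_draws)           # QALYs gained
d_cost = rng.normal(-130_289, 150_000, n_draws)   # cost difference (negative = saving)

inmb = wtp * d_qalys - d_cost                     # incremental net monetary benefit
print(f"mean INMB per 1,000 catheters: {inmb.mean():,.0f} AUD")
# The 'probability of error' reported in such analyses is 1 - P(INMB > 0):
print(f"probability of error: {1 - np.mean(inmb > 0):.2f}")
```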

Relevance:

90.00%

Abstract:

Objective: To examine the sources of coding discrepancy for injury morbidity data and explore the implications of these sources for injury surveillance. Method: An on-site medical record review and recoding study was conducted for 4373 injury-related hospital admissions across Australia. Codes from the original dataset were compared with the recoded data to explore the reliability of coded data and the sources of discrepancy. Results: The most common source of coding difference overall was assignment of the case to a different external cause category, with 8.5% of cases assigned to a different category. Differences in the specificity of codes assigned within a category accounted for 7.8% of coder differences, and differences in intent assignment for 3.7%. Conclusions: Where 8% of cases are misclassified by major category, setting injury targets on the basis of extent of burden is a somewhat blunt instrument. Monitoring the effect of prevention programs aimed at reducing risk factors is not possible in datasets with this level of misclassification error in injury cause subcategories. Future research is needed to build the evidence base around the quality and utility of the ICD classification system and its application to injury surveillance in the hospital environment.

Relevance:

90.00%

Abstract:

In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
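
A small sketch of why the homogeneity-in-proportions assumption fails, using two hypothetical SPFs of the common form mu = exp(b0) * AADT_maj^b1 * AADT_min^b2. All coefficients are invented for illustration and reproduce only the qualitative shift, not the paper's fitted values.

```python
import numpy as np

# Two crash-type SPFs with different AADT elasticities (invented coefficients).
def spf(aadt_maj, aadt_min, b0, b1, b2):
    return np.exp(b0) * aadt_maj ** b1 * aadt_min ** b2

aadt_maj = np.array([2_000, 82_000])   # the comparison made in the text
aadt_min = 400
angle = spf(aadt_maj, aadt_min, -9.0, 0.55, 0.45)   # intersecting-direction crashes
rear = spf(aadt_maj, aadt_min, -12.0, 1.05, 0.20)   # same-direction crashes

share_angle = angle / (angle + rear)
# The intersecting-direction share falls as major-road AADT grows, so no fixed
# proportion applied to total crashes can be right at both volumes.
print(share_angle)
```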

Relevance:

90.00%

Abstract:

Gay community media functions as a system with three nodes, in which the flows of information and capital theoretically benefit all parties: the gay community gains a sense of cohesion and citizenship through media; the gay media outlets profit from advertisers’ capital; and advertisers recoup their investments in lucrative ‘pink dollar’ revenue. But if a necessary corollary of all communication systems is error or noise, where—and what—are the errors in this system? In this paper we argue that the ‘errorin the gay media system is Queerness, and that the gay media system ejects (in a process of Kristevan abjection) these Queer identities in order to function successfully. We examine the ways in which Queer identities are excluded from representation in such media through a discourse and content analysis of The Sydney Star Observer (Australia’s largest gay and lesbian paper). First, we analyse the way Queer bodies are excluded from the discourses that construct and reinforce both the ideal gay male body and the notions of homosexual essence required for that body to be meaningful. We then argue that abject Queerness returns in the SSO’s discourses of public health through the conspicuous absence of the AIDS-inflicted body (which we read as the epitome of the abject Queer), since this absence paradoxically conjures up a trace of that which the system tries to expel. We conclude by arguing that because the ‘Queer error’ is integral to the SSO, gay community media should practise a politics of Queer inclusion rather than exclusion.

Relevance:

90.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
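
A hedged formalization of the bound described above, in standard model-selection notation; the symbols and the exact statement are assumed here, not quoted from the paper.

```latex
% \hat{R}: empirical risk, R: true risk, (\mathcal{F}_k): models, pen(k): penalty.
\hat{f} \;=\; \operatorname*{arg\,min}_{k,\ f \in \mathcal{F}_k}
  \Big[ \hat{R}(f) + \operatorname{pen}(k) \Big].
% If the penalty dominates the maximal deviation on each class,
\operatorname{pen}(k) \;\ge\; \sup_{f \in \mathcal{F}_k} \big| R(f) - \hat{R}(f) \big|,
% then the two-step argument
%   R(\hat{f}) \le \hat{R}(\hat{f}) + \operatorname{pen}(\hat{k})
%             \le \hat{R}(f) + \operatorname{pen}(k) \le R(f) + 2\operatorname{pen}(k)
% for every k and f in \mathcal{F}_k gives the bound quoted in the text:
R(\hat{f}) \;\le\; \inf_{k} \Big[ \inf_{f \in \mathcal{F}_k} R(f)
  + 2\,\operatorname{pen}(k) \Big].
```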

Relevance:

90.00%

Abstract:

The aim of this study was to develop a reliable technique for measuring the area of a curved surface from an axial computed tomography (CT) scan and to apply this clinically in the measurement of articular cartilage surface area in acetabular fractures. The method used was a triangulation algorithm. In order to determine the accuracy of the technique, areas of hemispheres of known size were measured to give the percentage error in area measurement. Seven such hemispheres were machined into a Perspex block and their areas measured geometrically, and also from CT scans by means of the triangulation algorithm. Scans of 1, 2 and 4 mm slice thickness and separation were used. The error varied with slice thickness and hemisphere diameter. It was shown that the 2 mm slice thickness provides the most accurate area measurement, while 1 mm cuts overestimate and 4 mm cuts underestimate the area. For a hemisphere diameter of 5 cm, which is of similar size to the acetabulum, the error was -11.2% for 4 mm cuts, +4.2% for 2 mm cuts and +5.1% for 1 mm cuts. As expected, area measurement was more accurate for larger hemispheres. This method can be applied clinically to quantify acetabular fractures by measuring the percentage area of intact articular cartilage. In the case of both-column fractures, the percentage area of secondary congruence can be determined. This technique of quantifying acetabular fractures has a potential clinical application as a prognostic factor and an indication for surgery in the long term.
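
An illustrative reconstruction of the general contour-triangulation idea (an assumed sketch, not the paper's algorithm): stack the circular cross-sections of a hemisphere at the slice spacing, join adjacent contours with triangle strips, and sum the triangle areas against the analytic value 2*pi*r^2.

```python
import numpy as np

# Adjacent circular "CT contours" of a hemisphere joined by triangle strips.
def ring(z, r_sphere, n):
    r = np.sqrt(max(r_sphere**2 - z**2, 0.0))
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([r * np.cos(t), r * np.sin(t), np.full(n, z)])

def tri_area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def hemisphere_area(r_sphere, slice_mm, n=360):
    zs = np.arange(0.0, r_sphere + 1e-9, slice_mm)  # axial slice positions
    area = 0.0
    for z0, z1 in zip(zs[:-1], zs[1:]):
        lo, hi = ring(z0, r_sphere, n), ring(z1, r_sphere, n)
        for i in range(n):
            j = (i + 1) % n
            # two triangles per quad between adjacent contours
            area += tri_area(lo[i], lo[j], hi[i]) + tri_area(hi[i], hi[j], lo[j])
    return area

r = 25.0  # mm: a 5 cm diameter hemisphere, as in the study
exact = 2.0 * np.pi * r**2
for dz in (1.0, 2.0, 4.0):
    est = hemisphere_area(r, dz)
    print(f"{dz} mm slices: {100.0 * (est - exact) / exact:+.1f}% error")
# This noise-free inscribed mesh always under-estimates (coarser spacing also
# misses more of the polar cap); the over-estimation the study observed at
# 1-2 mm arises from real CT contour noise, which this toy does not model.
```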