965 results for "Error in substance"
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method. The quasi-a priori approach estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about error behavior, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization, and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the truncation-error estimate. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
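The core identity behind τ-estimation can be sketched numerically: the truncation error of a coarse discretization is estimated by injecting a more accurate solution into the coarse discrete operator. The sketch below uses a low-order finite-difference model problem purely for illustration (the thesis works with spectral discretizations and polynomial orders rather than grid spacings); all names and parameter values are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def apply_A(u, h):
    """Second-order centered difference for -u'' at the interior nodes."""
    return -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2

# model problem: -u'' = f on [0, 1], u(0) = u(1) = 0, exact solution sin(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

n_f, n_c = 161, 41                       # coarse nodes are every 4th fine node
x_f = np.linspace(0.0, 1.0, n_f)
x_c = np.linspace(0.0, 1.0, n_c)
h_f, h_c = x_f[1] - x_f[0], x_c[1] - x_c[0]

# fine-grid solve of A_h u = f (dense tridiagonal matrix, for simplicity)
m = n_f - 2
A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h_f**2
u_f = np.zeros(n_f)
u_f[1:-1] = np.linalg.solve(A, f(x_f[1:-1]))

# tau-estimate: restrict the fine solution, apply the coarse operator
#   tau_H ~ A_H(I_H u_h) - f_H
stride = (n_f - 1) // (n_c - 1)
tau_est = apply_A(u_f[::stride], h_c) - f(x_c[1:-1])

# exact coarse-grid truncation error, for comparison
tau_ref = apply_A(u_exact(x_c), h_c) - f(x_c[1:-1])
rel_err = np.max(np.abs(tau_est - tau_ref)) / np.max(np.abs(tau_ref))
print(rel_err)  # small: the injected fine solution recovers the coarse tau
```

The estimate is accurate because the fine-grid solution error is much smaller than the coarse-grid truncation error being measured, which is the same separation of scales the quasi-a priori approach relies on.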
Abstract:
Prevention scientists have called for more research on the factors affecting the implementation of substance use prevention programs. Given the lack of literature in this area, coupled with evidence that children as early as elementary school engage in substance use, the purpose of this study was to identify the factors that influence the implementation of substance use prevention programs in elementary schools. This study involved a mixed methods approach comprising a survey and in-person interviews. Sixty-five guidance counselors and teachers completed the survey, and 9 guidance counselors who completed the survey were interviewed individually. Correlation analyses and hierarchical multiple regression were conducted. Quantitative findings revealed that ease of implementation most frequently influenced program implementation, followed by beliefs about the program's effectiveness. Qualitative findings showed curriculum modification as an important theme, as well as difficulty of program implementation. The in-person interviews also shed light on three interrelated themes influencing program implementation: The Wheel, time, and scheduling. Results indicate that the majority of program providers modified the curriculum in some way. Implications for research, policy, and practice are discussed, and areas for future research are suggested.
Abstract:
HIV-positive individuals engage in substance use at higher rates than the general population and are more likely to also suffer from concurrent psychiatric disorders and substance use disorders. Despite this, little is known about the unique clinical concerns of HIV-positive individuals entering substance use treatment. This study examined the clinical characteristics of clients (N=1712) entering residential substance use treatment as a function of self-reported HIV status (8.65% HIV-positive). Results showed higher levels of concurrent substance use and psychiatric disorders for HIV-positive individuals, who were also significantly more likely to meet criteria for bipolar disorder and borderline personality disorder. Past diagnoses of depression, posttraumatic stress disorder, and social phobia were also significantly more common. Study findings indicate a need to provide more intensive care for HIV-positive individuals, including resources targeted at concurrent psychiatric problems, to ensure positive treatment outcomes following residential substance use treatment discharge.
Abstract:
Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An Artificial Neural Network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN made a classification of the digital scores comparable to the optical ones and correctly classified the majority of the follicles, reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity reached its maximum value of 92%. These results demonstrate that digital image analysis combined with an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of error in diagnosis and classification to be measured, making comparisons between databases feasible.
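For reference, the sensitivity and specificity reported above are computed from confusion-matrix counts of true/false positives and negatives. The snippet below is a generic illustration with hypothetical counts, not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical classification counts, chosen only to illustrate the formulas
sens, spec = sensitivity_specificity(tp=89, fn=11, tn=96, fp=4)
print(sens, spec)  # 0.89 0.96
```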
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdős-Rényi or Barabási-Albert type. First, we investigate situations in which one of the agents has a different (higher or lower) probability of observation error. It is shown that the influence of this special agent on the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. When the degree of this agent is taken as a respective fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide an analytical solution for the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error degrades the quality of the estimates of the agents in the other communities.
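A minimal simulation of the update rule described above (each agent blends its own noisy observation with the average of its neighbors' models) can be sketched as follows. The ring network, blending weights, and noise levels are illustrative assumptions; the study uses Erdős-Rényi and Barabási-Albert networks and its own update dynamics.

```python
import random

random.seed(1)
TRUTH = 1.0                      # scalar ground truth the agents try to learn
n_agents, n_steps = 50, 200
noise = [0.5] * n_agents
noise[0] = 0.05                  # one special agent with a lower error rate

# simple ring network for illustration (not the paper's topologies)
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}

models = [0.0] * n_agents
for _ in range(n_steps):
    new = []
    for i in range(n_agents):
        obs = TRUTH + random.gauss(0.0, noise[i])            # noisy observation
        avg = sum(models[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(0.5 * obs + 0.5 * avg)                    # blend obs + neighbors
    models = new

err = sum(abs(m - TRUTH) for m in models) / n_agents
print(err)  # neighbor averaging damps each agent's individual observation noise
```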
Abstract:
Background: There are several studies in the literature on measurement error in gene expression data, and several others on regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory network models and show how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement error estimation procedures for microarrays are also described. Simulation results show that both corrected methods perform better than the standard ones (i.e., those ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error seriously affects the identification of regulatory network models; it must therefore be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
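The bias described above is the classical attenuation effect of errors-in-variables regression, and the textbook correction divides the naive slope by the reliability ratio λ = σ_x²/(σ_x² + σ_u²). The sketch below illustrates this generic correction on simulated data with a known error variance; it is not the paper's exact estimator, and the error variance is assumed known here (the paper estimates it from the microarray data).

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20000, 2.0
x = rng.normal(0.0, 1.0, n)            # true covariate
x_obs = x + rng.normal(0.0, 0.5, n)    # covariate observed with measurement error
y = beta * x + rng.normal(0.0, 0.1, n)

# naive OLS slope (no intercept): biased toward zero by attenuation
b_naive = (x_obs @ y) / (x_obs @ x_obs)

# corrected estimator: divide by the reliability ratio
# lambda = var(x) / (var(x) + var(measurement error)), assumed known
lam = 1.0 / (1.0 + 0.5**2)
b_corr = b_naive / lam
print(b_naive, b_corr)  # naive near 1.6, corrected near the true slope 2.0
```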
Abstract:
Artesian confined aquifers do not need pumping energy, and water from the aquifer flows naturally at the wellhead. This study proposes a correction to the method for analyzing flowing well tests presented by Jacob and Lohman (1952) that accounts for the head losses due to friction in the well casing. The application of the proposed correction allowed the determination of a transmissivity (T = 411 m²/d) and storage coefficient (S = 3 × 10⁻⁴) which appear to be representative of the confined Guarani Aquifer in the study area. If the correction for head losses in the well casing is ignored, the error in the transmissivity evaluation is about 18%; for the storage coefficient the error is of 5 orders of magnitude, resulting in a physically unacceptable value. The effect of the proposed correction on the calculated radius of the cone of depression and the corresponding well interference is also discussed.
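A friction correction of this kind can be illustrated with the Darcy-Weisbach formula, h_f = f (L/D) v²/(2g), subtracted from the observed flowing head before type-curve fitting. All numbers below (discharge, casing length and diameter, friction factor) are hypothetical illustrations, not the study's field values or its exact procedure.

```python
import math

def friction_head_loss(Q, L, D, f=0.02):
    """Darcy-Weisbach head loss (m) for discharge Q (m^3/s) through a casing
    of length L (m) and diameter D (m), with an assumed friction factor f."""
    area = math.pi * D**2 / 4.0
    v = Q / area                 # mean flow velocity in the casing
    g = 9.81
    return f * (L / D) * v**2 / (2.0 * g)

s_observed = 85.0                               # hypothetical flowing head (m)
h_f = friction_head_loss(Q=0.05, L=1200.0, D=0.15)
s_corrected = s_observed - h_f                  # head effective at the aquifer
print(h_f, s_corrected)
```

For a deep well with a narrow casing, as in this hypothetical case, the friction loss can be a large fraction of the observed head, which is why ignoring it distorts the fitted aquifer parameters.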
Abstract:
Recent progress in the production, purification, and experimental and theoretical investigation of carbon nanotubes for hydrogen storage is reviewed. From the industrial point of view, the chemical vapor deposition process has shown advantages over laser ablation and electric-arc-discharge methods. The ultimate goal in nanotube synthesis should be to gain control over geometrical aspects of nanotubes, such as location and orientation, and over their atomic structure, including helicity and diameter. There is currently no effective and simple purification procedure that fulfills all requirements for processing carbon nanotubes. Purification is still the bottleneck for technical applications, especially where large amounts of material are required. Although alkali-metal-doped carbon nanotubes showed high H₂ weight uptake, further investigations indicated that some of this uptake was due to water rather than hydrogen. This discovery indicates a potential source of error in the evaluation of the storage capacity of doped carbon nanotubes. Nevertheless, currently available single-wall nanotubes yield a hydrogen uptake value near 4 wt% under moderate pressure and room temperature. A further 50% increase is needed to meet U.S. Department of Energy targets for commercial exploitation. Meeting this target will require combining experimental and theoretical efforts to achieve a full understanding of the adsorption process, so that the uptake can be rationally optimized to commercially attractive levels. Large-scale production and purification of carbon nanotubes and a marked improvement in their H₂ storage capacity represent significant technological and theoretical challenges in the years to come.
Abstract:
An approach based on a linear rate of increase in harvest index (HI) with time after anthesis has been used as a simple means to predict grain growth and yield in many crop simulation models. When applied to diverse situations, however, this approach has been found to introduce significant error into grain yield predictions. Accordingly, this study was undertaken to examine the stability of the HI approach for yield prediction in sorghum [Sorghum bicolor (L.) Moench]. Four field experiments were conducted under nonlimiting water and N conditions. The experiments were sown at times that ensured a broad range of temperature and radiation conditions. Treatments consisted of two population densities and three genotypes varying in maturity. Frequent sequential harvests were used to monitor crop growth, yield, and the dynamics of HI. Experiments varied greatly in yield and final HI. There was also a tendency for lower HI with later maturity. Harvest index dynamics also varied among experiments and, to a lesser extent, among treatments within experiments. The variation was associated mostly with the linear rate of increase in HI and the timing of the cessation of that increase. The average rate of HI increase was 0.0198 d⁻¹, but this was reduced considerably (to 0.0147 d⁻¹) in one experiment that matured in cool conditions. The variations found in HI dynamics could be largely explained by differences in assimilation during grain filling and remobilization of preanthesis assimilate. We concluded that this level of variation in HI dynamics limits the general applicability of the HI approach to yield prediction, and we suggest a potential alternative for testing.
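The HI approach itself can be sketched in a few lines: grain yield is predicted as biomass times a harvest index that rises linearly after anthesis until it is capped. The slope below uses the 0.0198 d⁻¹ average reported above; the lag, cap, and biomass values are illustrative assumptions, not the study's data.

```python
def harvest_index(days_after_anthesis, rate=0.0198, hi_max=0.50, lag=5.0):
    """Linear increase in HI with time after anthesis, capped at hi_max.
    A short lag before grain growth begins is assumed for illustration."""
    t = max(0.0, days_after_anthesis - lag)
    return min(rate * t, hi_max)

biomass = 1200.0  # hypothetical above-ground biomass, g/m^2
for d in (0, 10, 20, 30, 40):
    hi = harvest_index(d)
    # day, harvest index, predicted grain yield (g/m^2)
    print(d, round(hi, 3), round(biomass * hi, 1))
```

The study's point is that the slope and the timing of the cap both vary with environment, so a single fixed `rate` like the one assumed here is what limits the method's generality.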
Abstract:
Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent technologies that can gain greatly from dense sensor/actuator deployments. An aircraft with a smart sensing skin would fulfill the vision of affordability and environmental friendliness by reducing fuel consumption. Achieving these properties is possible by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity is one of the most significant remaining challenges for dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from a very large sensor/actuator network located in a small area (a dense network). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it renders a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error of the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
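The idea of a transformation matrix that reconstructs a dense field from a few sampled sensors can be illustrated with plain linear interpolation. The matrix below is a hand-built sketch of the concept, not the LIA implementation or its learning phase; sizes and sensor positions are arbitrary.

```python
import numpy as np

n_dense, sampled = 9, [0, 4, 8]      # read only 3 of 9 sensor positions

# build T so that T @ readings linearly interpolates the unread positions
T = np.zeros((n_dense, len(sampled)))
for i in range(n_dense):
    if i <= 4:                       # between sampled sensors 0 and 4
        w = i / 4.0
        T[i, 0], T[i, 1] = 1.0 - w, w
    else:                            # between sampled sensors 4 and 8
        w = (i - 4) / 4.0
        T[i, 1], T[i, 2] = 1.0 - w, w

field = np.linspace(10.0, 18.0, n_dense)   # "true" smooth field over the skin
estimate = T @ field[sampled]              # dense estimate from 3 readings
print(np.max(np.abs(estimate - field)))    # 0.0: exact for a linear field
```

A learning phase in this framing would amount to adjusting the entries of `T` as discrepancies between predicted and measured values are observed.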
Abstract:
Undesirable void formation during the injection phase of the liquid composite molding process can be understood as a consequence of the non-uniform progression of the flow front, caused by the dual porosity of the fiber preform. Therefore, the physics of void formation is best examined by a mesolevel analysis, where the characteristic dimension is given by the fiber tow diameter. In mesolevel analysis, liquid impregnation on two different scales, inside the fiber tows and in the spaces between them, must be considered, and the coupling between these flow regimes must be addressed. In such a case, it is extremely important to correctly account for surface tension effects, which can be modeled as a capillary pressure applied at the flow front. When the continuous Galerkin method is used, with elements having velocity components and pressure as nodal variables, strong numerical imposition of such boundary conditions leads to an ill-posed problem, in terms of both the classical weak and the stabilized formulation. As a consequence, an error in mass conservation accumulates, especially along the free flow front. This article presents a numerical procedure, formulated and implemented in the existing Free Boundary Program, that significantly reduces this error.
Abstract:
OBJECTIVE: To assess the incidence of diagnostic errors in the initial evaluation of children with cardiac murmurs. METHODS: We evaluated our 7 years of experience in a public pediatric cardiology outpatient clinic. Of 3692 patients who were sent to the hospital, 2603 presented with a heart murmur and were investigated. Patients for whom the initial and final diagnoses disagreed were divided into the following 2 groups: G1 (n=17), with an initial diagnosis of an innocent murmur and a final diagnosis of cardiopathy, and G2 (n=161), with an initial diagnosis of cardiopathy and a final diagnosis of a normal heart. RESULTS: In G1, the great majority of patients had cardiac defects with mild hemodynamic repercussions, such as small ventricular septal defects and mild pulmonary stenosis. In G2, the great majority of initially suspected structural defects were interventricular communication, atrial septal defect, and pulmonary valve stenosis. CONCLUSION: A global analysis demonstrated that diagnostic error in the initial evaluation of children with cardiac murmurs is real, occurring in approximately 6% of cases. The majority of these misdiagnoses were in patients with an initial diagnosis of cardiopathy that was not confirmed by later complementary examinations. Clinical cardiovascular examination is an excellent resource in the evaluation of children suspected of having cardiopathy. Immediate outpatient discharge of children with an initial diagnosis of an innocent heart murmur seems to be a suitable approach.
Abstract:
Background: Hereditary haemochromatosis is a heritable disorder caused by an inborn error in the metabolism of iron. It results in over-absorption of iron by the body, which can manifest clinically as fatigue, arthritis, diabetes and cardiovascular problems. The highest prevalence of the genetic mutations that cause hereditary haemochromatosis is found in the Irish population. Individuals with diabetes may also have haemochromatosis (and vice versa), due to the bi-directional relationship between iron metabolism and glucose metabolism. Objectives: To determine the incidence of the three haemochromatosis mutations C282Y, H63D & S65C in a population from the North West of Ireland, and to investigate whether there is an increased frequency of these three mutations in a diabetic population from the same region. Method: DNA was extracted from 500 whole blood samples (250 diabetic samples and 250 'control' samples) using a Wizard™ kit. PCR was conducted using specific primers for each mutation and in accordance with a set protocol. Following amplification, the PCR product was subjected to restriction endonuclease digestion, in which different restriction enzymes (Rsa I, Nde II & Hinf I) were employed to determine the HFE genotype status of the samples. Results: The incidences of C282Y homozygosity (1/83) and C282Y heterozygosity (1/6) in the 'control' group were similar to those reported for the general Irish population (1/83 and 1/5, respectively). The incidences of H63D homozygotes and H63D heterozygotes or 'carriers' in the diabetic population were greater than those in the 'control' population. A significant finding of this study was an incidence of 1/32 S65C carriers in the control population; this is, to our knowledge, the highest incidence of this genotype reported to date in the general Irish population. Statistical analysis showed that there were no significant differences between the HFE genotype frequencies in the diabetic and control populations.
Conclusion: The results of the study are consistent with the published literature in terms of C282Y homozygosity and C282Y heterozygosity in the general Irish population. An increased frequency of the H63D mutation in diabetic individuals was also found, but it was not statistically significant. The biochemical effect of the H63D mutation is still unknown. The significance of such a high incidence of S65C carriers in the 'control' population warrants further investigation.
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
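The replicate-based error estimate described above reduces to counting genotype mismatches between replicate pairs of the same sample, under the expectation of identical genotypes. A minimal sketch with made-up genotype calls (the study applies this per locus, allele, and SNP within Stacks):

```python
def genotype_error_rate(rep_a, rep_b):
    """Fraction of loci at which two replicates of the same sample disagree;
    loci missing in either replicate (None) are excluded from the count."""
    called = [(a, b) for a, b in zip(rep_a, rep_b)
              if a is not None and b is not None]
    mismatches = sum(1 for a, b in called if a != b)
    return mismatches / len(called)

# hypothetical genotype calls for one sample sequenced twice
rep1 = ["AA", "AG", "GG", None, "CT", "TT"]
rep2 = ["AA", "AA", "GG", "CC", "CT", "TT"]
print(genotype_error_rate(rep1, rep2))  # 0.2: 1 mismatch over 5 shared loci
```

Minimizing this rate while maximizing the number of shared loci is the criterion the authors use to tune the de novo assembly parameters.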