852 results for Initial data problem
Abstract:
Introduction: Although obsessions and compulsions comprise the main features of obsessive-compulsive disorder (OCD), many patients report that their compulsions are preceded by a sense of "incompleteness" or other unpleasant feelings such as premonitory urges or a need to perform actions until feeling "just right." These manifestations have been characterized as Sensory Phenomena (SP). The current study presents initial psychometric data for a new scale designed to measure SP. Methods: Seventy-six adult OCD subjects were evaluated twice. Patients were assessed with an open clinical interview (considered the "gold standard") and with the following standardized instruments: Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition Axis I Disorders, Yale-Brown Obsessive-Compulsive Scale, Dimensional Yale-Brown Obsessive-Compulsive Scale, Yale Global Tic Severity Scale, Beck Anxiety Inventory, and Beck Depression Inventory. Results: SP were present in 51 OCD patients (67.1%). Tics were present in 16 (21.1%) of the overall sample. The presence of SP was significantly higher in early-onset OCD patients. There were no significant differences in the presence of SP according to comorbidity with tics or gender. The comparison between the results from the open clinical interviews and the University of Sao Paulo Sensory Phenomena Scale (USP-SPS) showed excellent concordance, with no significant differences between interviewers. The inter-rater reliability between the expert raters for the USP-SPS was high, with kappa = .92. The Pearson correlation coefficient between the SP severity scores given by the two raters was .89. Conclusion: Preliminary results suggest that the USP-SPS is a valid and reliable instrument for assessing the presence and severity of SP in OCD subjects. CNS Spectr. 2009;14(6):315-323
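For reference, the two agreement statistics reported for the USP-SPS can be computed as in the hedged sketch below: Cohen's kappa for presence/absence ratings and Pearson's r for severity scores. The rater data here are entirely invented.

    # Hedged sketch: inter-rater agreement statistics on invented USP-SPS-style ratings.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(2)
    # Invented presence/absence (0/1) ratings of sensory phenomena by two raters for 76 patients.
    rater1 = rng.integers(0, 2, 76)
    rater2 = np.where(rng.random(76) < 0.95, rater1, 1 - rater1)   # raters agree ~95% of the time
    print("kappa:", round(cohen_kappa_score(rater1, rater2), 2))

    # Invented severity scores given by the two raters to the 51 patients with SP.
    sev1 = rng.integers(0, 16, 51)
    sev2 = np.clip(sev1 + rng.integers(-2, 3, 51), 0, 15)
    r, p = pearsonr(sev1, sev2)
    print("Pearson r:", round(r, 2))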
Abstract:
Background: Organs from so-called marginal donors have been used with a significantly higher risk of primary non-function than organs retrieved from optimal donors. We investigated the early metabolic changes and blood flow redistribution in the splanchnic territory in an experimental model that mimics the marginal brain-dead (BD) donor. Material/Methods: Ten dogs (21.3 +/- 0.9 kg) were subjected to a brain death protocol induced by subdural balloon inflation and observed for 30 min thereafter without any additional interventions. Mean arterial and intracranial pressures, heart rate, cardiac output (CO), portal vein and hepatic artery blood flows (PVBF and HABF, ultrasonic flowprobe), and O(2)-derived variables were evaluated. Results: An increase in arterial pressure, CO, PVBF, and HABF was observed after BD induction. At the end, intense hypotension with normalization of CO (3.0 +/- 0.2 vs. 2.8 +/- 2.8 L/min) and PVBF (687 +/- 114 vs. 623 +/- 130 ml/min) was observed, whereas HABF (277 +/- 33 vs. 134 +/- 28 ml/min, p<0.005) remained lower than baseline values. Conclusions: Despite the severe hypotension induced by a sudden increase of intracranial pressure, the systemic and splanchnic blood flows were partially preserved without signs of severe hypoperfusion (i.e., hyperlactatemia). Additionally, the HABF was the most negatively affected in this model of a marginal BD donor. Our data suggest that not only the cardiac output but also the intrinsic hepatic microcirculatory mechanism plays a role in hepatic blood flow control after BD.
Abstract:
Feature selection is one of the most important and frequently used techniques in data preprocessing. It can improve the efficiency and the effectiveness of data mining by reducing the dimensionality of the feature space and removing irrelevant and redundant information. Feature selection can be viewed as a global optimization problem of finding a minimum set of M relevant features that describes the dataset as well as the original N attributes. In this paper, we apply the adaptive partitioned random search strategy to our feature selection algorithm. Under this search strategy, a partition structure and an evaluation function are proposed for the feature selection problem. The algorithm ensures the globally optimal solution in theory and avoids complete randomness in the search direction. The good properties of our algorithm are shown through theoretical analysis.
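As a rough, hedged illustration of the search idea described above (not the authors' exact algorithm), the sketch below partitions the space of feature subsets by fixing one feature at a time as included or excluded, scores each sub-region with randomly sampled subsets, and keeps the more promising decision. The dataset, the cross-validated evaluation function, and all sample counts are invented for illustration.

    # Hedged sketch of a partitioned random search over feature subsets (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

    def evaluate(mask):
        # Evaluation function: cross-validated accuracy of the candidate feature subset.
        if not mask.any():
            return 0.0
        return cross_val_score(LogisticRegression(max_iter=1000), X[:, mask], y, cv=3).mean()

    fixed_in, fixed_out = set(), set()        # decisions made so far define the current partition
    free = list(range(X.shape[1]))
    while free:
        f = free.pop(0)
        best = {}
        for decision in (True, False):        # two sub-regions: feature f included or excluded
            scores = []
            for _ in range(10):               # random subsets sampled from this sub-region
                mask = np.zeros(X.shape[1], dtype=bool)
                mask[list(fixed_in)] = True
                mask[f] = decision
                mask[free] = rng.random(len(free)) < 0.5
                scores.append(evaluate(mask))
            best[decision] = max(scores)      # the sub-region's "promising index"
        (fixed_in if best[True] >= best[False] else fixed_out).add(f)

    final_mask = np.zeros(X.shape[1], dtype=bool)
    final_mask[list(fixed_in)] = True
    print("selected features:", sorted(fixed_in), "cv accuracy:", round(evaluate(final_mask), 3))

The paper's partition structure and promising index are defined more carefully; the sketch only conveys the overall shape of the search.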
Abstract:
Statement of problem. There are no established clinical procedures for bonding zirconia to tooth structure using resin cements. Purpose. The purpose of this study was to evaluate the influence of metal primers, resin cements, and aging on bonding to zirconia. Material and methods. Zirconia was treated with commercial primers developed for bonding to metal alloys (Metaltite, Metal Primer II, Alloy Primer, or Totalbond). Non-primed specimens served as controls. One hundred disk-shaped specimens (19 x 4 mm) were cemented to composite resin substrates using Panavia or RelyX Unicem (n=5). Microtensile bond strength specimens were tested after 48 hours and 5 months (150 days), and failure modes were classified as type 1 (between ceramic/cement), 2 (between composite resin/cement), or 3 (mixed). Data were analyzed by 3-way ANOVA and the Tukey multiple comparison test (alpha=.05). Results. The interactions primer/luting system (P=.016) and luting system/storage time (P=.004) were statistically significant. The use of Alloy Primer significantly improved the bond strength of RelyX Unicem (P<.001), while for Panavia, none of the primers increased the bond strength compared to the control group. At 48 hours, Panavia had statistically higher bond strength (P=.004) than Unicem (13.9 +/- 4.4 MPa and 10.2 +/- 6.6 MPa, respectively). However, both luting systems presented decreased, statistically similar values after aging (Panavia: 3.6 +/- 2.2 MPa; Unicem: 6.1 +/- 5.3 MPa). At 48 hours, Alloy Primer/Unicem had the lowest incidence of type 1 failure (8%). After aging, all the groups showed a predominance of type 1 failures. Conclusions. The use of Alloy Primer improved bond strength between RelyX Unicem and zirconia. Though the initial values obtained with Panavia were significantly higher than those obtained with RelyX Unicem, after aging both luting agents presented statistically similar performances. (J Prosthet Dent 2011;105:296-303)
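For readers unfamiliar with the analysis named above, the hedged sketch below runs a 3-way ANOVA followed by Tukey comparisons with statsmodels; the factor levels mirror the abstract, but every bond-strength value is synthetic.

    # Hedged sketch: 3-way ANOVA and Tukey follow-up on synthetic bond-strength data.
    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    primers = ["control", "Metaltite", "MetalPrimerII", "AlloyPrimer", "Totalbond"]
    cements = ["Panavia", "RelyXUnicem"]
    times = ["48h", "5mo"]
    rows = [(p, c, t, rng.normal(10, 3))
            for p in primers for c in cements for t in times for _ in range(5)]
    df = pd.DataFrame(rows, columns=["primer", "cement", "time", "mpa"])

    model = ols("mpa ~ C(primer) * C(cement) * C(time)", data=df).fit()
    print(anova_lm(model, typ=2))                  # main effects and all interactions

    # Tukey multiple comparisons on one factor (primer), pooling over the other factors.
    print(pairwise_tukeyhsd(df["mpa"], df["primer"], alpha=0.05))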
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise proved that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases when the understanding of the underlying processes is poor. The second problem arises in cases when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take a range rather than a single value as input. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
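A minimal sketch of the range-as-input Monte Carlo idea, with an off-the-shelf scikit-learn network standing in for the CO2 corrosion model and entirely invented inputs and ranges:

    # Hedged sketch: propagate input-range uncertainty through a trained model via Monte Carlo.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)

    # Train a stand-in "corrosion model" on synthetic data; inputs: temperature, pH, CO2 partial pressure.
    X_train = rng.uniform([20.0, 3.5, 0.1], [90.0, 6.5, 2.0], size=(500, 3))
    y_train = 0.05 * X_train[:, 0] - 1.2 * X_train[:, 1] + 3.0 * X_train[:, 2] + rng.normal(0, 0.2, 500)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_train, y_train)

    # Inputs known only as ranges: sample uniformly within each range and inspect the output spread.
    low, high = np.array([55.0, 4.0, 0.5]), np.array([65.0, 5.0, 1.0])
    pred = model.predict(rng.uniform(low, high, size=(2000, 3)))
    print(f"corrosion-rate estimate: mean={pred.mean():.2f}, "
          f"5-95% range=({np.quantile(pred, 0.05):.2f}, {np.quantile(pred, 0.95):.2f})")

The same loop lets an engineer vary one input range at a time to see which uncertainty dominates the prediction spread.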
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
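To make the factorization concrete, the hedged sketch below fits the two components separately: a logistic model for claim incidence and a gamma model (log link) for claim size given that a claim occurred. For brevity it omits the random effects, so it is a fixed-effects simplification of the GLMM approach, run on invented data.

    # Hedged sketch: two-part (zero-augmented gamma) model without the random effects.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 400
    intervention = rng.integers(0, 2, n)                  # invented covariate
    has_claim = rng.random(n) < np.where(intervention == 1, 0.15, 0.30)
    cost = np.where(has_claim, rng.gamma(2.0, np.where(intervention == 1, 400.0, 600.0)), 0.0)
    df = pd.DataFrame({"intervention": intervention, "cost": cost, "claim": has_claim.astype(int)})

    X = sm.add_constant(df[["intervention"]])

    # Component 1: incidence of claims (zero versus positive).
    incidence = sm.GLM(df["claim"], X, family=sm.families.Binomial()).fit()

    # Component 2: magnitude of claims, conditional on a claim being made.
    pos = df["claim"] == 1
    magnitude = sm.GLM(df.loc[pos, "cost"], X[pos],
                       family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    print(incidence.params, magnitude.params, sep="\n")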
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
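As a simplified, univariate illustration of what integrating over each bin involves, the sketch below fits a two-component Gaussian mixture to histogram counts by maximizing the binned-data log-likelihood directly; it is a stand-in for, not a reproduction of, the paper's multivariate EM.

    # Hedged sketch: direct maximum likelihood for a 2-component Gaussian mixture on binned data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    data = np.concatenate([rng.normal(-2, 1, 700), rng.normal(3, 1.5, 300)])
    counts, edges = np.histogram(data, bins=30)           # only the binned counts are used below

    def neg_loglik(theta):
        w = 1.0 / (1.0 + np.exp(-theta[0]))               # mixing weight constrained to (0, 1)
        mu1, mu2 = theta[1], theta[2]
        s1, s2 = np.exp(theta[3]), np.exp(theta[4])       # positive standard deviations
        # Probability mass of each bin = integral of the mixture density over the bin.
        p = (w * (norm.cdf(edges[1:], mu1, s1) - norm.cdf(edges[:-1], mu1, s1))
             + (1 - w) * (norm.cdf(edges[1:], mu2, s2) - norm.cdf(edges[:-1], mu2, s2)))
        return -np.sum(counts * np.log(p + 1e-12))

    res = minimize(neg_loglik, x0=np.array([0.0, -1.0, 1.0, 0.0, 0.0]), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print("estimated component means:", round(res.x[1], 2), round(res.x[2], 2))

In the multivariate case the one-dimensional differences of normal CDFs become multidimensional integrals over each bin, which is exactly the step whose cost the paper's numerical techniques reduce.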
Abstract:
Motivation: This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are consistent either with the external classification of the tissues or with background biological knowledge of these sets.
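A hedged sketch of the gene-screening step: each gene is ranked by a likelihood ratio statistic for one versus two mixture components. Gaussian mixtures stand in here for the mixtures of t distributions used by EMMIX-GENE, the expression matrix is synthetic, and the threshold is arbitrary.

    # Hedged sketch: rank genes by a 1-vs-2-component mixture likelihood ratio statistic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    n_tissues, n_genes = 60, 200
    expr = rng.normal(size=(n_tissues, n_genes))
    expr[:30, :20] += 2.0                                 # first 20 genes separate two tissue groups

    def lr_statistic(x):
        x = x.reshape(-1, 1)
        loglik = []
        for k in (1, 2):
            gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(x)
            loglik.append(gm.score(x) * len(x))           # score() is the mean log-likelihood per sample
        return -2.0 * (loglik[0] - loglik[1])             # larger => stronger evidence for two components

    stats = np.array([lr_statistic(expr[:, g]) for g in range(n_genes)])
    selected = np.where(stats > 8.0)[0]                   # illustrative threshold, not EMMIX-GENE's
    print("genes retained for clustering of the tissues:", selected)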
Abstract:
We present the first mathematical model on the transmission dynamics of Schistosoma japonicum. The work extends Barbour's classic model of schistosome transmission. It allows for the mammalian host heterogeneity characteristic of the S. japonicum life cycle, and solves the problem of under-specification of Barbour's model by the use of Chinese data we are collecting on human-bovine transmission in the Poyang Lake area of Jiangxi Province in China. The model predicts that in the lake/marshland areas of the Yangtze River basin: (1) once-yearly mass chemotherapy of humans is little better than twice-yearly mass chemotherapy in reducing human prevalence. Depending on the heterogeneity of prevalence within the population, targeted treatment of high-prevalence groups, with lower overall coverage, can be more effective than mass treatment with higher overall coverage. Treatment confers a short-term benefit only, with prevalence rising to endemic levels once chemotherapy programs are stopped; (2) depending on the relative contributions of bovines and humans, bovine treatment can benefit humans almost as much as human treatment. Like human treatment, bovine treatment confers a short-term benefit. A combination of human and bovine treatment will dramatically reduce human prevalence and maintain the reduction for a longer period of time than treatment of a single host, although human prevalence rises once treatment ceases; (3) assuming 75% coverage of bovines, a bovine vaccine which acts on worm fecundity must have about 75% efficacy to reduce the reproduction rate below one and ensure mid-term reduction and long-term elimination of the parasite. Such a vaccination program should be accompanied by an initial period of human treatment to instigate a short-term reduction in prevalence, following which the reduction is enhanced by vaccine effects; (4) if the bovine vaccine is only 45% efficacious (the level of current prototype vaccines), it will lower the endemic prevalence but will not result in elimination. If it is accompanied by an initial period of human treatment and by a 45% improvement in human sanitation or a 30% reduction in contaminated water contact by humans, elimination is then possible. (C) 2002 Elsevier Science B.V. All rights reserved.
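To show the general shape of such a model, here is a hedged sketch of a Barbour-type prevalence system extended to two mammalian hosts (humans and bovines) plus snails, integrated with SciPy; the equations and every parameter value are illustrative assumptions, not the model calibrated to the Chinese data.

    # Hedged sketch: illustrative two-mammalian-host, Barbour-type prevalence model.
    import numpy as np
    from scipy.integrate import odeint

    # beta_*: infection of each host from infected snails; gamma_*: host recovery rates;
    # lam_*: contamination of snail habitat by each host; mu: loss rate of snail infection.
    beta_h, beta_b, gamma_h, gamma_b = 0.4, 0.6, 0.2, 0.3
    lam_h, lam_b, mu = 0.5, 0.8, 1.0

    def deriv(state, t):
        ph, pb, y = state                        # prevalence in humans, bovines, and snails
        dph = beta_h * y * (1 - ph) - gamma_h * ph
        dpb = beta_b * y * (1 - pb) - gamma_b * pb
        dy = (lam_h * ph + lam_b * pb) * (1 - y) - mu * y
        return [dph, dpb, dy]

    t = np.linspace(0, 50, 500)
    sol = odeint(deriv, [0.1, 0.1, 0.01], t)
    print("endemic human prevalence (illustrative):", round(float(sol[-1, 0]), 3))

Interventions such as chemotherapy or a fecundity-reducing vaccine would enter as changes to the recovery and contamination parameters of the treated host.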
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
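As a stand-in illustration of the spectral transformation (SciPy here rather than SLEPc, and an ordinary sparse matrix rather than an HLib H-matrix), the sketch below computes a few eigenvalues of a toy operator with a shift-and-invert eigensolver; the factorization of (A - sigma*I) performed internally is the step where cheap H-matrix LU decompositions pay off in the paper's setting.

    # Hedged sketch: shift-and-invert computation of a few eigenpairs of a toy sparse operator.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigs

    n = 2000
    # Toy discretized operator (tridiagonal) standing in for the finite rank discretization.
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    # Eigenvalues closest to the shift sigma; internally (A - sigma*I) is factorized and
    # applied repeatedly, so fast matrix-vector products and cheap LU decompositions matter.
    vals, vecs = eigs(A, k=5, sigma=0.01, which="LM")
    print(np.sort(vals.real))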
Abstract:
The financial literature and the financial industry often use zero coupon yield curves as input for testing hypotheses, pricing assets, or managing risk. They assume the provided data are accurate. We analyse the implications of the methodology and of the sample selection criteria used to estimate the zero coupon bond yield term structure for the resulting volatility of spot rates with different maturities. We obtain the volatility term structure using historical volatilities and EGARCH volatilities. As input for these volatilities we consider our own spot rate estimates from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero coupon bond yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe relevant differences in economic terms when volatilities are used to price derivatives.
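A hedged sketch of the two volatility estimates on a simulated spot-rate change series: a rolling historical volatility with pandas and an EGARCH(1,1) fit assuming the third-party arch package is available. Real inputs would be the spot rates estimated from the GovPX, Federal Reserve, H15, or Bloomberg data sets.

    # Hedged sketch: historical and EGARCH volatility of a simulated spot-rate change series.
    import numpy as np
    import pandas as pd
    from arch import arch_model

    rng = np.random.default_rng(11)
    dr = pd.Series(rng.normal(0.0, 0.05, 1500))             # simulated daily spot-rate changes, in percent

    hist_vol = dr.rolling(window=60).std() * np.sqrt(252)   # annualized historical volatility

    egarch = arch_model(dr, mean="Zero", vol="EGARCH", p=1, o=1, q=1)
    res = egarch.fit(disp="off")
    egarch_vol = np.asarray(res.conditional_volatility) * np.sqrt(252)

    print("latest historical vol:", round(float(hist_vol.iloc[-1]), 4))
    print("latest EGARCH vol:", round(float(egarch_vol[-1]), 4))

Repeating the fit for each maturity and each source data set gives the volatility term structures whose differences the paper documents.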
Abstract:
In this paper we present a user-centered interface for a scheduling system. The purpose of this interface is to provide graphical and interactive ways of defining a scheduling problem. To create such a user interface, an evaluation-centered user interaction development method was adopted: the star life cycle. The created prototype comprises the Task Module and the Scheduling Problem Module. The first one allows users to define a sequence of operations, i.e., a task. The second one enables the definition of a scheduling problem, which consists of a set of tasks. Both modules are equipped with a set of real-time validations to assure the correct definition of the necessary data input for the scheduling module of the system. The usability evaluation allowed us to measure the ease of interaction and observe the different forms of interaction of each participant, namely the reactions to the real-time validation mechanism.
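A hedged sketch of the data the two modules capture, with simple checks standing in for the interface's real-time validations; all field names are illustrative assumptions, not the system's actual data model.

    # Hedged sketch: a task as a sequence of operations, a scheduling problem as a set of tasks.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Operation:
        machine: str
        duration_min: int

    @dataclass
    class Task:
        name: str
        operations: List[Operation] = field(default_factory=list)

        def validate(self) -> List[str]:
            errors = []
            if not self.operations:
                errors.append(f"task '{self.name}' has no operations")
            errors += [f"operation {i} of '{self.name}' has a non-positive duration"
                       for i, op in enumerate(self.operations) if op.duration_min <= 0]
            return errors

    @dataclass
    class SchedulingProblem:
        tasks: List[Task] = field(default_factory=list)

        def validate(self) -> List[str]:
            errors = [e for t in self.tasks for e in t.validate()]
            names = [t.name for t in self.tasks]
            if len(set(names)) != len(names):
                errors.append("duplicate task names")
            return errors

    problem = SchedulingProblem([Task("T1", [Operation("M1", 30), Operation("M2", 0)])])
    print(problem.validate())   # errors like these would be reported to the user as they type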
Abstract:
In recent years, Power Systems (PS) have experienced many changes in their operation. The introduction of new players managing Distributed Generation (DG) units, and the existence of new Demand Response (DR) programs, make the control of the system a more complex problem and allow more flexible management. An intelligent resource management in the context of smart grids is of huge importance so that smart grid functions are assured. This paper proposes a new methodology to support system operators and/or Virtual Power Players (VPPs) in determining effective and efficient DR programs that can be put into practice. The method is based on the use of data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 32-bus distribution network.
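As a hedged sketch of the data-mining step, the code below clusters a synthetic table of operation scenarios with k-means so that each cluster could be mapped to a candidate DR program; the scenario features and the choice of algorithm are assumptions, not the paper's exact method.

    # Hedged sketch: cluster operation scenarios so each cluster can motivate a DR program.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(13)
    n_scenarios = 27000
    # Invented per-scenario features: total load (MW), DG generation (MW), energy price (EUR/MWh).
    scenarios = np.column_stack([
        rng.normal(45, 10, n_scenarios),
        rng.normal(12, 4, n_scenarios),
        rng.normal(60, 15, n_scenarios),
    ])

    X = StandardScaler().fit_transform(scenarios)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

    # Each cluster centroid summarizes an operating condition for which a DR program can be designed.
    for k in range(5):
        print(f"cluster {k}: {np.sum(labels == k)} scenarios, "
              f"mean load {scenarios[labels == k, 0].mean():.1f} MW")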
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Educational Sciences - Specialization in Supervision in Education