761 results for reliability algorithms
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has never before been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm employs accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce computation time while retaining a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in performing eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
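The BR algorithm itself is not available in standard numerical libraries, but the QR side of the comparison can be reproduced. A minimal sketch, assuming a synthetic narrowly banded upper Hessenberg matrix as a stand-in for a power-system state matrix, timing LAPACK's QR-based eigenvalue solver via NumPy:

```python
# Sketch of QR-based eigenanalysis of a narrowly banded upper Hessenberg
# matrix, the baseline the paper benchmarks the BR algorithm against.
# numpy.linalg.eigvals calls LAPACK's QR iteration; the matrix below is a
# synthetic stand-in, not a real power-system state matrix.
import time
import numpy as np

n = 600                       # system size (illustrative only)
rng = np.random.default_rng(0)

# Nearly tridiagonal upper Hessenberg matrix: full sub-, main and
# superdiagonal, plus a few sparse entries above the band.
A = (np.diag(rng.standard_normal(n))
     + np.diag(rng.standard_normal(n - 1), k=-1)
     + np.diag(rng.standard_normal(n - 1), k=1))
for i, j in rng.integers(0, n, size=(20, 2)):
    if j > i + 1:             # keep the matrix upper Hessenberg
        A[i, j] = rng.standard_normal()

t0 = time.perf_counter()
eigs = np.linalg.eigvals(A)   # LAPACK QR iteration on the Hessenberg form
print(f"{n} eigenvalues in {time.perf_counter() - t0:.3f} s")

# Small-signal stability check: the system is stable if every eigenvalue
# has a negative real part.
print("max Re(lambda) =", eigs.real.max())
```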
Abstract:
Algorithms for the explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time-step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
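The abstract traces the stability problems to the central difference method itself. The following minimal sketch illustrates that method's critical time step, dt = 2/omega, on a single undamped oscillator; it is not the Smolinski and Sleith subcycling algorithm, and the parameter values are illustrative:

```python
# Central difference integration of an undamped oscillator u'' = -omega^2 u.
# The scheme is stable only for dt <= 2/omega; this single-DOF example is
# the mechanism behind the subcycling instabilities, not the multi-time-step
# algorithm itself.

def central_difference(omega, dt, steps, u0=1.0, v0=0.0):
    """Integrate u'' = -omega**2 * u with the central difference scheme."""
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * (-omega**2 * u0)  # starter step
    u = u0
    for _ in range(steps):
        u_next = 2.0 * u - u_prev + dt**2 * (-omega**2 * u)
        u_prev, u = u, u_next
    return u

omega = 10.0
dt_crit = 2.0 / omega
for dt in (0.9 * dt_crit, 1.01 * dt_crit):
    u_end = central_difference(omega, dt, steps=2000)
    print(f"dt = {dt:.4f}: |u| after 2000 steps = {abs(u_end):.3e}")
# Just below the limit the amplitude stays bounded; just above it the
# solution grows without bound, the instability subcycling inherits.
```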
Abstract:
The present investigation assessed the reliability and validity of the scores of a subjective measure of desired aspirations and a behavioral measure of enacted aspirations. A sample of 5,655 employees was randomly split into two halves. Principal components analysis on Sample 1, followed by confirmatory factor analysis on Sample 2, confirmed the desired and enacted scales as distinct but related measures of managerial aspirations. The desired and enacted scales had satisfactory levels of internal consistency and temporal stability over a 1-year period. Relationships between the measures of desired and enacted managerial aspirations and both attitudinal and behavioral criteria, measured concurrently and 1 year later, provided preliminary support for convergent and discriminant validity for our sample. Desired aspirations demonstrated stronger validity than enacted aspirations. Although further examination of the psychometric properties of the scales is warranted, the present findings provide promising support for their validity and reliability for our sample.
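The internal-consistency result can be illustrated with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below applies it to hypothetical item responses, not the study's data:

```python
# Cronbach's alpha, the usual internal-consistency statistic, computed on
# hypothetical item responses (the study's scale data are not reproduced).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                 # shared trait
items = latent + 0.7 * rng.normal(size=(200, 6))   # 6 noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")
```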
Abstract:
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
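A minimal sketch of the underlying multiplier problem: chaining the two-term extended Euclidean algorithm yields valid multipliers x_1, ..., x_m, but generally large ones. Producing small multipliers via lattice basis reduction, the paper's contribution, is not shown here:

```python
# Multi-integer extended gcd: find x_1..x_m with
# x_1*s_1 + ... + x_m*s_m = gcd(s_1, ..., s_m) by chaining the two-term
# extended Euclid. Naive chaining gives valid but generally large
# multipliers; the paper uses lattice basis reduction to shrink them.
from math import gcd

def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def multi_gcd_multipliers(s: list[int]) -> tuple[int, list[int]]:
    g, xs = s[0], [1]
    for si in s[1:]:
        g, u, v = extended_gcd(g, si)
        xs = [u * x for x in xs] + [v]   # fold the new term into the combination
    return g, xs

s = [1769, 2378, 2627]
g, xs = multi_gcd_multipliers(s)
assert sum(x * si for x, si in zip(xs, s)) == g == gcd(*s)
print(g, xs)
```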
The Las Campanas/AAT rich cluster survey - I. Precision and reliability of the photometric catalogue
Abstract:
The Las Campanas Observatory and Anglo-Australian Telescope Rich Cluster Survey (LARCS) is a panoramic imaging and spectroscopic survey of an X-ray luminosity-selected sample of 21 clusters of galaxies at 0.07 < z < 0.16. Charge-coupled device (CCD) imaging was obtained in B and R of typically 2-degree-wide regions centred on the 21 clusters, and the galaxy sample selected from the imaging is being used for an on-going spectroscopic survey of the clusters with the 2dF spectrograph on the Anglo-Australian Telescope. This paper presents the reduction of the imaging data and the photometric analysis used in the survey. Based on an overlapping area of 12.3 deg^2, we compare the CCD-based LARCS catalogue with the photographic galaxy catalogue from the APM used as input to the 2dF Galaxy Redshift Survey (2dFGRS), down to the completeness limit of the GRS/APM catalogue, b_J = 19.45. This comparison confirms the reliability of the photometry across our mosaics and between the clusters in our survey, and also provides useful information concerning the properties of the GRS/APM. The stellar contamination in the GRS/APM galaxy catalogue is confirmed to be around 5-10 per cent, as originally estimated. However, using the superior sensitivity and spatial resolution of the LARCS survey, evidence is found for four distinct populations of galaxies that are systematically omitted from the GRS/APM catalogue. The characteristics of these 'missing' galaxy populations are described, reasons for their absence are examined, and the impact they will have on conclusions drawn from the 2dF Galaxy Redshift Survey is discussed.
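The catalogue comparison rests on positional cross-matching of the two galaxy lists. A minimal sketch of such a cross-match using astropy on hypothetical coordinates; the LARCS and GRS/APM catalogues themselves are not included:

```python
# Positional cross-match of a CCD-like catalogue against a photographic-like
# one, as a proxy for completeness/contamination comparisons. Coordinates
# are hypothetical, not survey data.
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(2)
ra = rng.uniform(150.0, 152.0, 500)      # degrees, hypothetical field
dec = rng.uniform(-1.0, 1.0, 500)

ccd = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)   # "LARCS-like" list
# Photographic-like list: a subset with small astrometric scatter,
# with ~10% of sources "missing".
keep = rng.random(500) < 0.9
apm = SkyCoord(ra=(ra[keep] + rng.normal(0, 2e-4, keep.sum())) * u.deg,
               dec=(dec[keep] + rng.normal(0, 2e-4, keep.sum())) * u.deg)

idx, sep2d, _ = ccd.match_to_catalog_sky(apm)
matched = sep2d < 2.0 * u.arcsec                 # match tolerance
print(f"matched fraction: {matched.mean():.2f}") # completeness proxy
```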
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. The GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange among a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained one suitable for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at an extremely fast rate of convergence after exploring only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
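A minimal sketch of the penalty-based GA strategy on the three discrete design variables, assuming toy cost and strength surrogates in place of the reinforced concrete design code checks:

```python
# GA with a penalty-based fitness, mirroring the paper's strategy of turning
# a constrained design problem into an unconstrained one. The objective and
# constraint are toy stand-ins, not the reinforced-concrete design model.
import random

THICKNESS = [150, 200, 250, 300, 350]      # discrete design choices (mm)
BAR_DIA = [10, 12, 16, 20]                 # reinforcement diameter (mm)
SPACING = [100, 150, 200, 250]             # reinforcement spacing (mm)

def cost(t, d, s):
    return t * 1.0 + d * d * 400.0 / s     # toy material-cost surrogate

def capacity(t, d, s):
    return t * d * d / s                   # toy strength surrogate

def fitness(ind):
    t, d, s = ind
    violation = max(0.0, 300.0 - capacity(t, d, s))
    return cost(t, d, s) + 50.0 * violation   # penalised objective

def random_ind():
    return [random.choice(THICKNESS), random.choice(BAR_DIA),
            random.choice(SPACING)]

pop = [random_ind() for _ in range(40)]
for _ in range(100):                       # generational loop
    pop.sort(key=fitness)
    parents = pop[:20]                     # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
        if random.random() < 0.2:          # mutation: re-draw one gene
            i = random.randrange(3)
            child[i] = random_ind()[i]
        children.append(child)
    pop = parents + children
best = min(pop, key=fitness)
print("best (thickness, diameter, spacing):", best, "cost:", cost(*best))
```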
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
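A minimal sketch of q-Gaussian mutation with q encoded in the chromosome, assuming the generalized Box-Muller transform of Thistleton et al. (2007) for sampling and a log-normal perturbation of q as an illustrative self-adaptation rule, not necessarily the paper's exact scheme:

```python
# q-Gaussian mutation with the shape parameter q carried in the chromosome.
# Sampling uses the generalized Box-Muller transform; the log-normal update
# of q is a hypothetical self-adaptation rule for illustration.
import math
import random

def q_log(x, q):
    """q-logarithm: ln_q(x) = (x**(1 - q) - 1) / (1 - q); -> ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q):
    """Draw one standard q-Gaussian deviate (valid for q < 3)."""
    qp = (1.0 + q) / (3.0 - q)          # transform parameter
    u1 = 1.0 - random.random()          # in (0, 1], avoids log(0)
    u2 = random.random()
    return math.sqrt(-2.0 * q_log(u1, qp)) * math.cos(2.0 * math.pi * u2)

def mutate(x, q, step=0.1, tau=0.1):
    """Self-adapt q (hypothetical log-normal rule), then perturb each gene."""
    q_new = min(2.5, max(0.5, q * math.exp(tau * random.gauss(0.0, 1.0))))
    return [xi + step * q_gaussian(q_new) for xi in x], q_new

x, q = [0.0] * 5, 1.0                   # q = 1 recovers plain Gaussian mutation
for _ in range(10):
    x, q = mutate(x, q)
print("q =", round(q, 3), "x =", [round(v, 3) for v in x])
```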
Abstract:
Objective: To assess the validity and reliability of the Portuguese version of the Delirium Rating Scale-Revised-98 (DRS-R-98). Methods: The scale was translated into Portuguese and back-translated into English. After assessing its face validity, five diagnostic groups (n = 64; delirium, depression, dementia, schizophrenia and others) were evaluated by two independent researchers blinded to the diagnosis. Diagnosis and severity of delirium as measured by the DRS-R-98 were compared to clinical diagnosis, the Mini-Mental State Exam, the Confusion Assessment Method, and the Clinical Global Impressions scale (CGI). Results: Mean and median DRS-R-98 total scores significantly distinguished delirium from the other groups (p < 0.001). Inter-rater reliability (ICC between 0.9 and 1) and internal consistency (alpha = 0.91) were very high. DRS-R-98 severity scores correlated highly with the CGI. Mean DRS-R-98 severity scores during delirium differed significantly (p < 0.01) from post-treatment values. The area under the curve established by ROC analysis was 0.99, and using the cut-off value of 20 the scale showed sensitivity and specificity of 92.6% and 94.6%, respectively. Conclusion: The Portuguese version of the DRS-R-98 is a valid and reliable measure of delirium that distinguishes delirium from other disorders and is sensitive to change in delirium severity, which may be of great value for longitudinal studies. Copyright (c) 2007 John Wiley & Sons, Ltd.
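The reported cut-off behaviour can be illustrated with a standard ROC computation. A minimal sketch on hypothetical DRS-R-98 totals, not the study's 64 patients, using scikit-learn for the area under the curve:

```python
# Sensitivity and specificity at a cut-off of 20, plus ROC AUC, on
# hypothetical scores (delirium cases high, controls low); not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
cases = rng.normal(28, 5, 27)              # hypothetical delirium totals
controls = rng.normal(12, 5, 37)           # hypothetical non-delirium totals
scores = np.concatenate([cases, controls])
labels = np.concatenate([np.ones(27), np.zeros(37)])

cutoff = 20
pred = scores >= cutoff
sens = pred[labels == 1].mean()            # true-positive rate
spec = (~pred)[labels == 0].mean()         # true-negative rate
print(f"sensitivity={sens:.1%} specificity={spec:.1%} "
      f"AUC={roc_auc_score(labels, scores):.2f}")
```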
Abstract:
The technical reliability (i.e., interinstrument and interoperator reliability) of three SEAC swept-frequency bioimpedance monitors was assessed for both errors of measurement and the associated analyses. In addition, intraoperator and intrainstrument variability was evaluated for repeat measures over a 4-hour period. The measured impedance values from a range of resistance-capacitance circuits were accurate to within 3% of theoretical values over a range of 50-800 ohms. Similarly, phase was measured over the range 1-19 degrees with a maximum deviation of 1.3 degrees from the theoretical value. The extrapolated impedance at zero frequency was equally well determined (+/-3%). However, the accuracy of the extrapolated value at infinite frequency was decreased, particularly at impedances below 50 ohms (approaching the lower limit of the measurement range of the instrument). The interinstrument and interoperator variation for whole-body measurements recorded on human volunteers showed biases of less than +/-1% for measured impedance values and less than 3% for phase. The variation in the extrapolated values of impedance at zero and infinite frequencies included variation due to operator choice of the analysis parameters but was still less than +/-0.5%. (C) 1997 Wiley-Liss, Inc.
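A minimal sketch of what a swept-frequency measurement on a resistance-capacitance test circuit looks like, including the zero- and infinite-frequency limits that the extrapolations target; the component values are illustrative, not those of the SEAC test circuits:

```python
# Swept-frequency impedance of a standard tissue-equivalent RC circuit:
# R_e in parallel with (R_i in series with C). |Z| and phase are what the
# monitor measures; Z(0) and Z(inf) are the extrapolation targets.
import numpy as np

R_e, R_i, C = 400.0, 300.0, 3e-9           # ohms, ohms, farads (illustrative)
f = np.logspace(3, 6, 7)                   # 1 kHz .. 1 MHz sweep
w = 2.0 * np.pi * f

z_branch = R_i + 1.0 / (1j * w * C)        # intracellular branch
Z = R_e * z_branch / (R_e + z_branch)      # parallel combination

for fi, zi in zip(f, Z):
    print(f"{fi:9.0f} Hz  |Z|={abs(zi):6.1f} ohm  "
          f"phase={np.degrees(np.angle(zi)):6.2f} deg")

print("Z(0)   ->", R_e, "ohm")                      # zero-frequency limit
print("Z(inf) ->", R_e * R_i / (R_e + R_i), "ohm")  # infinite-frequency limit
```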
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. It can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in chi^(2) parametric waveguides. This example uses a non-diagonal coherent state representation and correctly predicts the sub-shot-noise spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
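A minimal sketch of the semi-implicit idea for a stochastic parabolic PDE, assuming a 1-D heat equation with multiplicative noise: diffusion is treated implicitly and the noise term explicitly. The paper's full central partial difference scheme and its reference-process variance reduction are not reproduced:

```python
# Semi-implicit time stepping for du = D*u_xx dt + sigma*u dW in 1-D with
# periodic boundaries: implicit diffusion (unconditionally stable),
# explicit Euler-Maruyama noise.
import numpy as np

n, dx, dt, D, sigma = 128, 0.1, 1e-3, 1.0, 0.3
rng = np.random.default_rng(4)

# Implicit-diffusion matrix (I - dt*D*L) with periodic Laplacian L.
L = (np.eye(n, k=1) + np.eye(n, k=-1) - 2.0 * np.eye(n)) / dx**2
L[0, -1] = L[-1, 0] = 1.0 / dx**2
A = np.eye(n) - dt * D * L

u = 1.0 + 0.1 * rng.standard_normal(n)     # initial field
for _ in range(1000):
    noise = sigma * u * rng.standard_normal(n) * np.sqrt(dt)
    u = np.linalg.solve(A, u + noise)      # implicit diffusion step

print("mean field:", u.mean(), " variance:", u.var())
```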
Abstract:
Objective: We assessed how often patients presenting with a myocardial infarction (MI) would not have been considered candidates for intensive lipid-lowering therapy under the current guidelines. Methods: In 355 consecutive patients presenting with ST-elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured, and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Fewer than 50% of the STEMI patients would have been identified as high risk before the event by any of these algorithms. With the exception of the FRS (9%), all the algorithms would have assigned low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 +/- 0.2 mm and was >=1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients, and CAC > 100 was found in 66%. Adding CAC > 100 plus the presence of carotid plaque, a high-risk condition would have been identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients presenting with STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce this underestimation of CVD risk. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
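The conclusion's reclassification argument reduces to a simple rule. A minimal sketch, on hypothetical records rather than the study's 355 patients, of upgrading clinically low-risk patients when CAC > 100 or a carotid plaque is present:

```python
# Reclassification logic: a patient scored low-risk by a clinical algorithm
# is moved to high risk if CAC > 100 or a carotid plaque is present.
# The records below are hypothetical illustrations.
patients = [
    # (clinical_risk, cac_score, has_carotid_plaque)
    ("low", 250, True),
    ("low", 40, True),
    ("low", 10, False),
    ("high", 600, True),
]

def reclassify(clinical_risk, cac, plaque):
    if clinical_risk == "high":
        return "high"
    return "high" if (cac > 100 or plaque) else "low"

upgraded = sum(1 for r, c, p in patients
               if r == "low" and reclassify(r, c, p) == "high")
print(f"{upgraded} of {sum(r == 'low' for r, _, _ in patients)} "
      f"low-risk patients reclassified as high risk")
```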