971 results for Reading and Interpretation of Statistical Graphs
Abstract:
The cost and risk associated with mineral exploration in Australia increases significantly as companies move into deeper regolith-covered terrain. The ability to map the bedrock and the depth of weathering within an area has the potential to decrease this risk and increase the effectiveness of exploration programs. This paper is the second in a trilogy concerning the Grant's Patch area of the Eastern Goldfields. The recent development of the VPmg potential field inversion program in conjunction with the acquisition of high-resolution gravity data over an area with extensive drilling provided an opportunity to evaluate three-dimensional gravity inversion as a bedrock and regolith mapping tool. An apparent density model of the study area was constructed, with the ground represented as adjoining 200 m by 200 m vertical rectangular prisms. During inversion VPmg incrementally adjusted the density of each prism until the free-air gravity response of the model replicated the observed data. For the Grant's Patch study area, this image of the apparent density values proved easier to interpret than the Bouguer gravity image. A regolith layer was introduced into the model and realistic fresh-rock densities assigned to each basement prism according to its interpreted lithology. With the basement and regolith densities fixed, the VPmg inversion algorithm adjusted the depth to fresh basement until the misfit between the calculated and observed gravity response was minimised. The resulting geometry of the bedrock/regolith contact largely replicated the base of weathering indicated by drilling with predicted depth of weathering values from gravity inversion typically within 15% of those logged during RAB and RC drilling.
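As a rough illustration of the second inversion stage described above, the sketch below adjusts the depth of weathering in a single column until a simple two-layer slab response matches an "observed" gravity value. This is only a toy model under an infinite-slab approximation with made-up densities, thicknesses and step sizes; it is not the VPmg algorithm itself.

```python
# Toy illustration (not the VPmg algorithm): adjust the depth of weathering in a
# single vertical column until a two-layer Bouguer-slab response matches an
# "observed" gravity value. Densities, thicknesses and step sizes are made up.
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
TWO_PI_G = 2.0 * 3.141592653589793 * G

def slab_response(depth_weathering, total_thickness, rho_regolith, rho_rock):
    """Gravity effect (m/s^2) of regolith over fresh rock, infinite-slab approximation."""
    return TWO_PI_G * (rho_regolith * depth_weathering
                       + rho_rock * (total_thickness - depth_weathering))

def invert_depth(g_observed, total_thickness, rho_regolith, rho_rock,
                 step=1.0, n_iter=300):
    """Incrementally adjust depth to fresh basement until the misfit is minimised."""
    depth = total_thickness / 2.0                  # starting guess
    for _ in range(n_iter):
        misfit = slab_response(depth, total_thickness,
                               rho_regolith, rho_rock) - g_observed
        # Regolith is less dense than fresh rock, so a modelled response that is
        # too high means the weathered layer is currently too thin.
        depth += step if misfit > 0 else -step
        depth = min(max(depth, 0.0), total_thickness)
    return depth

# Example: recover roughly 60 m of weathering in a 500 m column (densities in kg/m^3).
g_obs = slab_response(60.0, 500.0, 1800.0, 2700.0)
print(invert_depth(g_obs, 500.0, 1800.0, 2700.0))
```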
Abstract:
Let K_k^d denote the Cartesian product of d copies of the complete graph K_k. We prove necessary and sufficient conditions for the existence of a K_k^r-factorization of K_{p^n}^s, where p is prime, k > 1, and n, r, and s are positive integers.
Abstract:
Background: Tissue Doppler may be used to quantify regional left ventricular function but is limited by segmental variation of longitudinal velocity from base to apex and from the free wall to the septum. We sought to overcome this by developing a composite of longitudinal and radial velocities. Methods and Results: We examined 82 unselected patients undergoing a standard dobutamine echocardiogram. Longitudinal velocity was obtained in the basal and mid segments of each wall using tissue Doppler in the apical views. Radial velocities were derived in the same segments using an automated border detection system and centerline method, with regional chords grouped according to segment location and temporally averaged. In 25 patients at low probability of coronary disease, the pattern of regional variation in longitudinal velocity (higher in the septum) was the opposite of that in radial velocity (higher in the free wall), and the combination was homogeneous. In 57 patients undergoing angiography, velocity in abnormal segments was lower than in normal segments for both longitudinal (6.0 ± 3.6 vs 9.0 ± 2.2 cm/s, P = .01) and radial velocity (6.0 ± 4.0 vs 8.0 ± 3.9 cm/s, P = .02). However, the composite velocity permitted better separation of abnormal and normal segments (13.3 ± 5.6 vs 17.5 ± 4.2 cm/s, P = .001). There was no significant difference between the accuracy of this quantitative approach and expert visual wall motion analysis (81% vs 84%, P = .56). Conclusion: Regional variation of uni-dimensional myocardial velocities necessitates site-specific normal ranges, probably because of different fiber directions. Combined analysis of longitudinal and radial velocities allows the derivation of a composite velocity, which is homogeneous in all segments and may allow better separation of normal from abnormal myocardium.
Abstract:
Aims: To determine the degree of inter-institutional agreement in the assessment of dobutamine stress echocardiograms using modern stress echocardiographic technology in combination with standardized data acquisition and assessment criteria. Methods and Results: Among six experienced institutions, 150 dobutamine stress echocardiograms (dobutamine up to 40 μg kg⁻¹ min⁻¹ and atropine up to 1 mg) were performed on patients with suspected coronary artery disease using fundamental and harmonic imaging following a consistent digital acquisition protocol. Each dobutamine stress echocardiogram was assessed at every institution regarding endocardial visibility and left ventricular wall motion, without knowledge of any other data, using standardized reading criteria. No patients were excluded due to poor image quality or inadequate stress level. Coronary angiography was performed within 4 weeks. Coronary angiography demonstrated significant coronary artery disease (≥50% diameter stenosis) in 87 patients. Using harmonic imaging, an average of 5.2 ± 0.9 institutions agreed on dobutamine stress echocardiogram results as being normal or abnormal (mean kappa 0.55; 95% CI 0.50-0.60). Agreement was higher in patients with no coronary artery disease (equal assessment of dobutamine stress echocardiogram results by 5.5 ± 0.8 institutions) or three-vessel coronary artery disease (5.4 ± 0.8 institutions) and lower in one- or two-vessel disease (5.0 ± 0.9 and 5.2 ± 1.0 institutions, respectively; P=0.041). Disagreement on test results was greater when there were only minor wall motion abnormalities. Agreement on dobutamine stress echocardiogram results was lower using fundamental imaging (mean kappa 0.49; 95% CI 0.44-0.54; P
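For readers unfamiliar with the agreement statistic quoted above, the sketch below computes Cohen's kappa for two institutions' normal/abnormal readings and averages it over all institution pairs. Whether the study's six-centre "mean kappa" was aggregated exactly this way is an assumption made only for illustration.

```python
# Minimal sketch: Cohen's kappa for pairs of institutions' binary readings,
# averaged over all pairs. The aggregation scheme is an illustrative assumption.
from itertools import combinations

def cohens_kappa(a, b):
    """a, b: equal-length lists of 0/1 readings (0 = normal, 1 = abnormal)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n                    # marginal "abnormal" rates
    p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)          # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def mean_pairwise_kappa(readings_by_institution):
    """readings_by_institution: one list of 0/1 results per institution."""
    pairs = list(combinations(readings_by_institution, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

# Hypothetical usage: three institutions reading the same five studies
print(mean_pairwise_kappa([[1, 0, 1, 1, 0], [1, 0, 0, 1, 0], [1, 1, 1, 1, 0]]))
```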
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering.
Abstract:
Presentation given at the OH&S Forum 2011 - International Forum on Occupational Health and Safety: Policies, profiles and services, held in Finland, 20-22 June 2011.
Abstract:
INTRODUCTION: The diagnosis of dengue and the differentiation between primary and secondary infections are important for monitoring the spread of the epidemic and identifying the risk of severe forms of the disease. The detection of immunoglobulin (Ig)M and IgG antibodies is the main technique for the laboratory diagnosis of dengue. The present study assessed the application of a rapid test for dengue with respect to the detection of new cases, the recognition of reinfection, and the estimation of the epidemic attack rate. METHODS: This was a retrospective, cross-sectional, descriptive study on dengue using the Fortaleza Municipal Health Department database. The results from 1,530 samples tested in 2005-2006 were compared with data from epidemiological studies of dengue outbreaks in 1996, 2003, and 2010. RESULTS: The rapid test confirmed recent infection in 52% of the tested patients with clinical suspicion of dengue: 40% were detected using IgM, and a further 12% of new cases were identified using IgG among the IgM non-reactive results. The positive IgM plus negative IgG (IgM+ plus IgG-) results showed that 38% of those patients had a recent primary dengue infection, while the positive IgG plus either positive or negative IgM (IgG+ plus IgM+/-) results indicated that 62% had dengue for at least a second time (recent secondary infections). This proportion of reinfections permitted us to estimate the attack rate as >62% of the population sample. CONCLUSIONS: The rapid test for dengue has enhanced our ability to detect new infections and to characterize them as primary or secondary infections, permitting the estimation of the minimal attack rate for a population during an outbreak.
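The serological decision rule described above can be summarised in a few lines; the function below is only an illustrative restatement of that rule, not diagnostic software, and its name is hypothetical.

```python
# Illustrative restatement of the IgM/IgG classification rule described above.
def classify_dengue(igm_positive: bool, igg_positive: bool) -> str:
    if igg_positive:
        # IgG+ with IgM positive or negative: dengue for at least a second time
        return "recent secondary infection"
    if igm_positive:
        # IgM+ / IgG-: recent primary dengue infection
        return "recent primary infection"
    return "no recent infection detected"

print(classify_dengue(igm_positive=True, igg_positive=False))   # primary
print(classify_dengue(igm_positive=False, igg_positive=True))   # secondary
```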
Abstract:
INTRODUCTION: The tuberculin test is a diagnostic method for detecting latent tuberculosis (TB) infection, especially among disease contact cases. The objective of this study was to analyze the prevalence and evolution of Mycobacterium tuberculosis infection among TB contact cases. METHODS: A retrospective cohort study was performed in a reference center for TB. The study population consisted of 2,425 patients who underwent a tuberculin test from 2003 to 2010 and who were identified as contacts of individuals with TB. The data were collected from the tuberculin test registry book, patient files, and the records of the Notifiable Diseases Information System. To verify the evolution of TB, case records through September 2014 were consulted. Data were analyzed using the Statistical Package for the Social Sciences (SPSS). In all hypothesis tests, a significance level of 0.05 was used. RESULTS: From the studied sample, 435 (17.9%) contacts did not return for the reading. Among the 1,990 contacts who completed the test, the prevalence of latent TB infection was 35.4%. Of these positive cases, 50.6% were referred for treatment; the dropout rate was 42.5%. Among all of the contacts, the prevalence of TB was 1.8%, and 13.2% of these abandoned treatment. CONCLUSIONS: The collected data indicate the need for more effective public policies to improve TB control, including administering tests that do not require a return visit for reading, enhancing contact tracing, and encouraging actions that reinforce full treatment adherence.
Abstract:
n.s. no. 29 (1994)
Abstract:
Abstract: In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, a fingermark to be compared against a fingerprint database and therefore a link to be established between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification lies in the growth of these databases, which makes it possible to find impressions that are very similar but that come from distinct fingers. At the same time, however, these data and these systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be utilized in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation which may be more detailed than that obtained by the application of rules established long before the advent of these large databases or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis which is known to be true. Moreover, they should support this hypothesis more and more strongly with the addition of information in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors which influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of the minutiae. The results show that the likelihood ratios resulting from the use of the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger when the impressions in fact came from different fingers is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae when the two impressions come from the same finger.
These likelihood ratios can therefore be an important aid for decision making. Both positive effects linked to the addition of minutiae (a drop in the rates of likelihood ratios which could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
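As a hedged illustration of the score-based likelihood ratio described in this abstract, the sketch below models the within- and between-finger score distributions with Gaussian kernel density estimates and evaluates their ratio at the score of the case comparison. The score scale, sample sizes and distributions are invented for the example and do not reproduce the thesis's actual data, AFIS, or modelling choices.

```python
# Sketch of a score-based likelihood ratio under assumed kernel density models.
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio(case_score, within_scores, between_scores):
    """LR = p(score | same finger) / p(score | different fingers)."""
    f_within = gaussian_kde(within_scores)    # scores: suspect's finger vs. comparable marks
    f_between = gaussian_kde(between_scores)  # scores: the mark vs. unrelated database prints
    return float(f_within(case_score)[0] / f_between(case_score)[0])

# Hypothetical usage with simulated comparison scores
rng = np.random.default_rng(0)
within = rng.normal(650.0, 80.0, size=50)      # same-finger comparison scores
between = rng.normal(300.0, 100.0, size=5000)  # different-finger comparison scores
print(likelihood_ratio(560.0, within, between))
```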
Abstract:
The advent and application of high-resolution array-based comparative genome hybridization (array CGH) has led to the detection of large numbers of copy number variants (CNVs) in patients with developmental delay and/or multiple congenital anomalies as well as in healthy individuals. The notion that CNVs are also abundantly present in the normal population challenges the interpretation of the clinical significance of detected CNVs in patients. In this review we will illustrate a general clinical workflow based on our own experience that can be used in routine diagnostics for the interpretation of CNVs.
Abstract:
A survey of ambulatory medical practice was carried out in February-March 1981 in the two Swiss cantons of Vaud and Fribourg (total population: 700,000), in which 205 physicians participated. The methodology was inspired by the U.S. National Ambulatory Medical Care Survey, whose data collection instrument was adapted to our conditions; in addition, data were gathered on all referrals prescribed by 154 physicians during two weeks. (The instruments used are presented.) The potential and limits of this type of survey are discussed, as well as the representativeness of the participating physicians and of the recorded visits, which constitute a systematic sample of over 43,000 visits.