908 results for Error of measurement
Abstract:
The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides guidelines for the measurement of a well-established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarities of Italian historical theatres call for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues this way forward. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time criteria, energy criteria, and psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were performed and a 3D model of the theatre was realised and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the “average” acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution. Different results were found, according to the varying assumptions underlying each test. An attempt was then made to correlate the numerical results emerging from the statistical analysis with the perceptual sphere. Looking for “acoustically equivalent areas”, relative difference limens were considered as threshold values. No rule of thumb emerged. Finally, the significance of the usual representation through mean value and standard deviation, which is meaningful only for normally distributed data, was investigated.
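As a hypothetical illustration of the normality testing described in this abstract (not code from the study), the sketch below applies two common tests that rest on different assumptions — Shapiro–Wilk and D'Agostino's K² — to simulated clarity (C80) values; disagreement between such tests mirrors the abstract's observation that results vary with the assumptions underlying each test. The data and thresholds are invented, and scipy is assumed available.

```python
# Hypothetical sketch: normality tests on simulated room-acoustic data.
# Neither the data nor the 0.05 threshold comes from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
c80 = rng.normal(loc=-1.5, scale=1.0, size=120)  # simulated C80 values [dB]

# Two tests with different underlying assumptions may disagree.
w_stat, p_shapiro = stats.shapiro(c80)        # Shapiro-Wilk
k2_stat, p_dagostino = stats.normaltest(c80)  # D'Agostino's K^2

for name, p in [("Shapiro-Wilk", p_shapiro), ("D'Agostino K^2", p_dagostino)]:
    verdict = "not rejected" if p > 0.05 else "rejected"
    print(f"{name}: p = {p:.3f} -> normality {verdict}")
```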
Abstract:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through only a finite number of states. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial-intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
Abstract:
The aSPECT spectrometer was designed to measure the spectrum of the protons from the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can then be determined with high accuracy. The goal of the experiment is to determine this coefficient with an absolute relative error of less than 0.3%, i.e. well below the current literature value of 5%.
First measurements with the aSPECT spectrometer were performed at the Forschungsneutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent instabilities of the measurement background prevented a new determination of "a".
The present work is instead based on the latest measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, the background instabilities were already considerably reduced. Furthermore, several modifications were made to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and further systematic errors were identified and reduced, in some cases even eliminated, which will benefit future beam times at aSPECT.
The main part of the present work deals with the analysis and improvement of the systematic errors caused by aSPECT's electromagnetic field. This led to numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field could even be minimized to the point that an improvement on the current literature value of "a" is now possible.
In addition, an NMR magnetometer tailored to the experiment was developed in this work and refined to the point that the uncertainties in the characterization of the magnetic field are now negligible for a determination of "a" with an accuracy of at least 0.3%.
Abstract:
In technical design processes in the automotive industry, digital prototypes are rapidly gaining importance because they allow for the detection of design errors in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence on how the shape of components that come too close has to be modified.
In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work we show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness.
The benchmarks for our tests are, amongst others, real-world scenarios from the automotive industry.
Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
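To make the error measure concrete: the one-sided Hausdorff distance from a set A to a set B is d(A, B) = max over a in A of the distance from a to its nearest point in B, i.e. the worst-case distance from any point of the approximation to the true swept volume. A minimal sketch over point samples (illustrative only; the thesis applies the measure to triangle meshes, not point sets):

```python
# Illustrative sketch: one-sided Hausdorff distance between point samples.
# The thesis works on meshes; invented 2D point sets stand in here.
import math

def one_sided_hausdorff(A, B):
    """Greatest distance from a point of A to its nearest point of B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

approx = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.5)]  # samples of the approximation
exact  = [(0.0, 0.0), (1.0, 0.0)]              # samples of the swept volume

# Note the asymmetry: d(A, B) != d(B, A) in general.
print(one_sided_hausdorff(approx, exact))  # 0.5: worst over-approximation
print(one_sided_hausdorff(exact, approx))  # 0.0: every exact sample is covered
```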
Abstract:
Metallic nanoparticles and their oxides (e.g. ZnO NP, TiO2 NP and Fe2O3 NP) are widely used as additives in tire production, catalysts, food, pharmaceuticals and cosmetic products because of their chemical and physical properties. A continuous increase in industrial use (~1663 tonnes in 2025) with increased release into the environment is expected in the future, which will inevitably lead to increased uptake via the respiratory epithelium. Metal fume fever is a known adverse health effect of inhaled metal-oxide-containing aerosols (e.g. ZnO). Immune reactions, such as inflammation, are frequently associated with the generation of reactive oxygen species (ROS), which in turn can lead to DNA damage. Three possible causes of genotoxicity are assumed: direct interaction of nanoparticles with intracellular structures, interaction of ions from dissociated particles with intracellular structures, and the generation of ROS initiated by particles or ions.
The present study addresses the mechanisms of genotoxicity of ZnO nanoparticles (ZnO NP), as an example of metallic nanoparticles, in the respiratory epithelium. It specifically analyzed the intracellular uptake and distribution of ZnO NP, their toxicity, their DNA-damaging potential, and the activation of the DNA damage response (DDR).
Hardly any internalized ZnO NP could be detected by TEM. Within the first seconds after treatment with ZnO NP, a strong increase in the intracellular Zn2+ concentration was measured spectrofluorometrically. In untreated cells, Zn2+ was localized in granular structures. Treatment with ZnO NP led to an accumulation of Zn2+ in these structures. Over time, the Zn2+ ions relocated into the cytoplasm as well as into nuclei and mitochondria.
No colocalization of Zn2+ with early endosomes or the endoplasmic reticulum was observed. Pretreatment of the cells with diethylenetriaminepentaacetic acid (DTPA), an extracellular chelating agent, prevented the intracellular increase of Zn2+ after treatment with the particles.
Treatment with ZnO NP resulted in a time- and dose-dependent reduction of cell viability, while the intracellular ROS concentrations rose slightly during the first 30 min and then continuously until the end of the measurement. In addition, the mitochondrial membrane potential decreased, while the number of early apoptotic cells increased in a time-dependent manner.
DNA double-strand breaks (DNA DSB) were visualized by immunofluorescence staining of γH2A.X foci and could be detected after treatment with ZnO NP. Pretreatment with the radical scavenger N-acetyl-L-cysteine (NAC) resulted in strongly reduced intracellular ROS concentrations and few DNA DSB. The DNA damage was completely prevented by pretreatment with DTPA.
Activation of the DDR was examined by analyzing ATM, ATR, Chk1, Chk2, p53 and p21 by Western blot and ELISA after treatment with ZnO NP. The ATR/Chk1 pathway was not activated by ZnO NP. Chelation of Zn2+ resulted in reduced ATM/Chk2 pathway activation, whereas scavenging of ROS had no effect on ATM/Chk2 pathway activation.
In summary, exposure to ZnO NP resulted in the generation of ROS, reduced viability and decreased mitochondrial membrane potential, and initiated early apoptosis in a time-dependent manner. ZnO NP dissociated extracellularly and Zn2+ was rapidly internalized via unknown mechanisms. The Zn2+ ions accumulated in the cytoplasm and especially in the mitochondria and the nucleus.
DDR signaling was activated by ZnO NP and was not inhibited by NAC, whereas DTPA completely inhibited DDR activation. Treatment with ZnO NP induced DNA DSB; scavenging of ROS reduced the DNA DSB, and chelation of Zn2+ prevented their formation entirely.
These data point to particle dissociation and the released Zn2+ as the main mediator of the genotoxicity of metallic nanoparticles.
Abstract:
Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated. The Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
Abstract:
With the aim of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey r.c. large-panel buildings in the elastic regime is studied. The four buildings were built during the Soviet era within a serial production system. Since they all belong to the same series, they have very similar geometries both in plan and in height. Firstly, ambient vibration measurements are performed in the four buildings. The data analysis, composed of discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated considering the first three global modes, and their results match the experimental ones with an error of less than 20%.
Abstract:
Purpose Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to deriving 3D models is by direct segmentation of CT or MRI datasets. However, these have the disadvantages that they are expensive, time-consuming and/or induce high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image. Methods Our technique is based on a hybrid 2D/3D deformable registration strategy combining a landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the ground truth model. Results Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was found to be 1.0 mm.
Conclusion An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
Abstract:
Background Leg edema is a common manifestation of various underlying pathologies. Reliable measurement tools are required to quantify edema and monitor therapeutic interventions. The aim of the present work was to investigate the reproducibility of optoelectronic leg volumetry over a 3-week period and to eliminate daytime-related within-individual variability. Methods Optoelectronic leg volumetry was performed in 63 hairdressers (mean age 45 ± 16 years, 85.7% female) in standing position, twice within a minute for each leg, and was repeated after 3 weeks. Both lower leg (legBD) and whole limb (limbBF) volumetry were analysed. Reproducibility was expressed as analytical and within-individual coefficients of variation (CVA, CVW) and as intra-class correlation coefficients (ICC). Results A total of 492 leg volume measurements were analysed. Both legBD and limbBF volumetry were highly reproducible, with CVA of 0.5% and 0.7%, respectively. Within-individual reproducibility of legBD and limbBF volumetry over the three-week period was high (CVW 1.3% for both; ICC 0.99 for both). At both visits, the second measurement revealed a significantly higher volume than the first, with a mean increase of 7.3 ± 14.1 ml (0.33% ± 0.58%) for legBD and 30.1 ± 48.5 ml (0.52% ± 0.79%) for limbBF volume. A significant linear correlation was found between the absolute and relative leg volume differences and the difference in the exact time of day of measurement between the two study visits (P < .001). A time-correction formula determined from this relationship permitted further improvement of CVW. Conclusions Leg volume changes can be reliably assessed by optoelectronic leg volumetry at a single time point and over a 3-week period. However, volumetry results are biased by orthostatic and daytime-related volume changes. The bias from daytime-related volume changes can be minimized by a time-correction formula.
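As an illustration of how an analytical coefficient of variation can be derived from duplicate measurements, the sketch below uses the common root-mean-square formula CVA = sqrt((1/n) · Σ dᵢ² / (2·mᵢ²)), where dᵢ and mᵢ are the difference and mean of each duplicate pair. This is a standard formula, not necessarily the study's exact computation, and the paired volumes are invented.

```python
# Hypothetical sketch: analytical CV from duplicate measurements.
# The paired leg volumes below are invented, not data from the study.
import math

def analytical_cv(pairs):
    """Root-mean-square CV (%) from (first, second) duplicate measurements."""
    terms = []
    for x1, x2 in pairs:
        d = x1 - x2              # within-pair difference
        m = (x1 + x2) / 2.0      # within-pair mean
        terms.append(d * d / (2.0 * m * m))
    return 100.0 * math.sqrt(sum(terms) / len(terms))

duplicates_ml = [(1000.0, 1007.0), (950.0, 956.0), (1020.0, 1012.0)]
print(f"CVA = {analytical_cv(duplicates_ml):.2f}%")  # ~0.5%: highly reproducible
```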
Abstract:
Acid dissociation constants, or pKa values, are essential for understanding many fundamental reactions in chemistry. These values reveal the deprotonation state of a molecule in a particular solvent. There is great interest in using theoretical methods to calculate the pKa values for many different types of molecules. These include molecules that have not been synthesized, those for which experimental pKa determinations are difficult, and for larger molecules where the local environment changes the usual pKa values, such as for certain amino acids that are part of a larger polypeptide chain. Chemical accuracy in pKa calculations is difficult to achieve, because an error of 1.36 kcal/mol in the change of free energy for deprotonation in solvent results in an error of 1 pKa unit. In this review the most valuable methods for determining accurate pKa values in aqueous solution are presented for educators interested in explaining or using these methods for their students.
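The 1.36 kcal/mol figure follows directly from the relation ΔG = RT ln(10) · pKa: at 298.15 K, one pKa unit corresponds to RT ln 10 ≈ 1.36 kcal/mol of deprotonation free energy. A quick numerical check (standard constants, not code from the review):

```python
# Check: free-energy equivalent of one pKa unit at 25 degrees C.
import math

R = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15        # temperature, K

kcal_per_pka_unit = R * T * math.log(10)
print(f"{kcal_per_pka_unit:.2f} kcal/mol per pKa unit")  # 1.36

# Equivalently: a 1.36 kcal/mol error in the computed deprotonation
# free energy shifts the predicted pKa by about one unit.
delta_g_error = 1.36  # kcal/mol
print(f"pKa error: {delta_g_error / kcal_per_pka_unit:.2f}")  # ~1.00
```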
Abstract:
The purpose of this study is to examine the role of vocational rehabilitation services in contributing to the goals of the National HIV/AIDS Strategy. Three key research questions are addressed: (a) What is the relationship among factors associated with the use of vocational rehabilitation services for people living with HIV/AIDS? (b) Are the factors associated with use of vocational rehabilitation also associated with access to health care, supplemental employment services and reduced risk of HIV transmission? And (c) What unique role does use of vocational rehabilitation services play in access to health care and HIV prevention? Survey research methods were used to collect data from a broad sample of volunteer respondents who represented diverse racial (37% Black, 37% White, 18% Latino, 7% other), gender (65% male, 34% female, 1% transgender) and sexual orientation (48% heterosexual, 44% gay, 8% bisexual) backgrounds. The fit of the final structural equation model was good (root mean square error of approximation = .055, Comparative Fit Index = .953, Tucker–Lewis Index = .945). Standardized effects with bootstrap confidence intervals are reported. Overall, the findings support the hypothesis that vocational rehabilitation services can play an important role in the health and prevention strategies outlined in the National HIV/AIDS Strategy.
Abstract:
To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well-characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine-learning algorithm random forests classified urine samples as coming from either normal or T2DM monkeys. The metabolites important for developing the classifier were further examined for their biological significance. Random forests models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM group when compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). When compared with the conventional definition of T2DM, the metabolites were also useful in defining the T2DM condition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective re-absorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
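A minimal sketch of the classification step described above, using scikit-learn's random forest with an out-of-bag error estimate. Synthetic data stands in for the urinary metabolite intensities; the sample and feature counts are merely suggestive of the study's setup, and scikit-learn is assumed available.

```python
# Illustrative sketch: random-forest classification with an out-of-bag
# misclassification estimate. The synthetic data stands in for urinary
# metabolite intensities; nothing here comes from the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 21 "monkeys" x 50 "metabolite features", two classes (normal vs. T2DM).
X, y = make_classification(n_samples=21, n_features=50, n_informative=8,
                           random_state=0)

forest = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0)
forest.fit(X, y)

oob_error = 1.0 - forest.oob_score_  # analogue of the <5% misclassification
print(f"OOB misclassification error: {oob_error:.2f}")

# Feature importances flag the variables worth biological follow-up,
# analogous to the study's shortlist of discriminating metabolites.
top = forest.feature_importances_.argsort()[::-1][:5]
print("Top features:", top.tolist())
```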
Abstract:
With improvements in acquisition speed and quality, the amount of medical image data to be screened by clinicians is starting to become challenging in daily clinical practice. To quickly visualize and find abnormalities in medical images, we propose a new method combining segmentation algorithms with statistical shape models. A statistical shape model built from a healthy population will have a close fit in healthy regions. The model will, however, not fit the morphological abnormalities often present in areas of pathology. Using the residual fitting error of the statistical shape model, pathologies can be visualized very quickly. This idea is applied to finding drusen in the retinal pigment epithelium (RPE) of optical coherence tomography (OCT) volumes. A segmentation technique able to accurately segment drusen in patients with age-related macular degeneration (AMD) is applied. The segmentation is then analyzed with a statistical shape model to visualize potentially pathological areas. An extensive evaluation is performed to validate the segmentation algorithm, as well as the quality and sensitivity of the hinting system. Most of the drusen with a height of 85.5 μm were detected, and all drusen at least 93.6 μm high were detected.
Abstract:
Optical coherence tomography (OCT) is a well-established image modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases the robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme. This allows the segmentation to be computed in seconds instead of minutes, without deteriorating the segmentation accuracy, making it ideal for a clinical setup. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool to deal with pathologies.
Abstract:
Data on antimicrobial use play a key role in the development of policies for the containment of antimicrobial resistance. On-farm data could provide a detailed overview of antimicrobial use, but technical and methodological aspects of data collection and interpretation, as well as data quality, need to be further assessed. The aims of this study were (1) to quantify antimicrobial use in the study population using different units of measurement and contrast the results obtained, (2) to evaluate the data quality of farm records on antimicrobial use, and (3) to compare the data quality of different recording systems. During 1 year, data on antimicrobial use were collected from 97 dairy farms. Antimicrobial consumption was quantified using: (1) the incidence density of antimicrobial treatments; (2) the weight of active substance; (3) the used daily dose and (4) the used course dose for antimicrobials for intestinal, intrauterine and systemic use; and (5) the used unit dose for antimicrobials for intramammary use. Data quality was evaluated by describing the completeness and accuracy of the recorded information and by comparing farmers' and veterinarians' records. The relative consumption of antimicrobials depended on the unit of measurement: used doses reflected treatment intensity better than the weight of active substance. The use of antimicrobials classified as high priority was low, although under- and overdosing were frequently observed. Electronic recording systems allowed better traceability of the animals treated. Recording the drug name or dosage often resulted in incomplete or inaccurate information. Veterinarians tended to record more drugs than farmers. The integration of veterinarian and farm data would improve data quality.