949 results for measurement method
Abstract:
Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One cause of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to that of electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with a special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a “spark gap” transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements with a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system.
If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.
Abstract:
Pre-publication drafts are reproduced with permission and copyright © 2013 of the Journal of Orthopaedic Trauma [Mutch J, Rouleau DM, Laflamme GY, Hagemeister N. Accurate Measurement of Greater Tuberosity Displacement without Computed Tomography: Validation of a method on Plain Radiography to guide Surgical Treatment. J Orthop Trauma. 2013 Nov 21: Epub ahead of print.] and copyright © 2014 of the British Editorial Society of Bone and Joint Surgery [Mutch JAJ, Laflamme GY, Hagemeister N, Cikes A, Rouleau DM. A new morphologic classification for greater tuberosity fractures of the proximal humerus: validation and clinical Implications. Bone Joint J 2014;96-B:In press.]
Abstract:
Purpose: To analyse the relationship between measured intraocular pressure (IOP) and central corneal thickness (CCT), corneal hysteresis (CH) and corneal resistance factor (CRF) in ocular hypertension (OHT), primary open-angle glaucoma (POAG) and normal tension glaucoma (NTG) eyes using multiple tonometry devices. Methods: Right eyes of patients diagnosed with OHT (n=47), NTG (n=17) and POAG (n=50) were assessed. IOP was measured in random order with four devices: Goldmann applanation tonometry (GAT); Pascal(R) dynamic contour tonometer (DCT); Reichert(R) ocular response analyser (ORA); and Tono-Pen(R) XL. CCT was then measured using a hand-held ultrasonic pachymeter. CH and CRF were derived from the air pressure to corneal reflectance relationship of the ORA data. Results: Compared with GAT, the Tono-Pen and the ORA Goldmann-equivalent (IOPg) and corneal-compensated (IOPcc) readings were higher (F=19.351, p<0.001), particularly in NTG (F=12.604, p<0.001). DCT was closest to Goldmann IOP and had the lowest variance. CCT differed significantly (F=8.305, p<0.001) among the three conditions, as did CH (F=6.854, p=0.002) and CRF (F=19.653, p<0.001). IOPcc measures were not affected by CCT. The DCT was generally not affected by corneal biomechanical factors. Conclusion: This study suggests that, as the true pressure of the eye cannot be determined non-invasively, measurements from any tonometer should be interpreted with care, particularly when alterations in the corneal tissue are suspected.
Abstract:
The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, based on a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured using a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Experiments were performed to test the proposed method, and the achieved results are within a relative error of about 1 percent in areas and distances in the object space. This accuracy fulfills the requirements of the intended applications. © 2005 American Society for Photogrammetry and Remote Sensing.
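The core scaling step of such a single-image approach can be illustrated with a minimal sketch, assuming a simplified fronto-parallel pinhole setup (the paper's full model additionally estimates the rotation matrix from vertical and horizontal lines, which is omitted here; all numeric values are hypothetical):

```python
# Simplified sketch: recovering a real-world length on a flat surface from a
# single image, assuming a fronto-parallel pinhole camera. The paper's method
# also handles camera rotation; that step is not modeled here.

def object_length(pixel_length, pixel_size_mm, focal_mm, distance_mm):
    """Scale an image measurement to object space: L = l * Z / f."""
    image_length_mm = pixel_length * pixel_size_mm
    return image_length_mm * distance_mm / focal_mm

# A billboard edge spanning 1200 px, with 5 um pixels, a 50 mm lens,
# and a lasermeter distance of 20 m:
length_mm = object_length(1200, 0.005, 50.0, 20000.0)
print(length_mm)  # -> 2400.0 (mm)
```

The rotation-matrix step in the actual method corrects for the camera not being perpendicular to the surface before this scaling is applied.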
Abstract:
A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. This phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method to correct for geometric distortion in MR images arising principally from gradient field non-linearity and magnetic field inhomogeneity. The phantom was designed based on the concept that a point in space can be defined by three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into the body coil of an MRI scanner (external dimensions: 310 mm x 310 mm x 310 mm) and contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were on the order of 0.1 mm or less, less than one tenth of the voxel dimensions of the phantom image. The calculated three-dimensional distortion map, i.e., the differences between the image positions and true positions of the control points, can then be used to compensate for geometric distortion for a full image restoration. It is anticipated that this novel method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where geometric accuracy is highly required, such as MR neuro-imaging. (C) 2004 Elsevier Inc. All rights reserved.
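The phantom's underlying geometric idea, a control point defined as the intersection of three planes, can be sketched as follows; the plane coefficients are illustrative and not taken from the phantom design:

```python
# Sketch of the phantom's core idea: a control point is the intersection of
# three planes n_i . p = d_i, solved here as a 3x3 linear system by Cramer's
# rule. Plane values below are illustrative only.

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def plane_intersection(normals, offsets):
    """Solve N p = d for the point p common to three planes."""
    D = det3(normals)
    if abs(D) < 1e-12:
        raise ValueError("planes do not meet in a single point")
    point = []
    for k in range(3):
        m = [row[:] for row in normals]
        for r in range(3):
            m[r][k] = offsets[r]
        point.append(det3(m) / D)
    return tuple(point)

# Three mutually orthogonal planes x=10, y=20, z=30 (in mm):
print(plane_intersection([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [10, 20, 30]))
# -> (10.0, 20.0, 30.0)
```

Locating the planes to sub-voxel accuracy in the image, rather than locating point markers directly, is what allows the dense array of control points.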
Abstract:
In magnetic resonance imaging (MRI), the MR signal intensity can vary spatially; this spatial variation is usually referred to as MR intensity nonuniformity. Although the main source of intensity nonuniformity is the B1 inhomogeneity of the coil acting as a receiver and/or transmitter, geometric distortion also alters the MR signal intensity. It is useful on some occasions to measure and analyze these two sources separately. In this paper, we present a practical method for a detailed measurement of the MR intensity nonuniformity. This method is based on the same three-dimensional geometric phantom that was recently developed for a complete measurement of the geometric distortion in MR systems. With this method, the contribution to the intensity nonuniformity from the geometric distortion can be estimated, providing a mechanism for estimating the intensity nonuniformity that reflects solely the spatial characteristics arising from B1. Additionally, a comprehensive scheme for characterization of the intensity nonuniformity based on the new measurement method is proposed. To demonstrate the method, the intensity nonuniformity in a 1.5 T Sonata MR system was measured and is used to illustrate the main features of the method. (c) 2005 American Association of Physicists in Medicine.
Abstract:
Measuring the extent to which a piece of structural timber has distorted at a macroscopic scale is fundamental to assessing its viability as a structural component. From the sawmill to the construction site, as structural timber dries, distortion can render it unsuitable for its intended purposes. This rejection of unusable timber is a considerable source of waste to the timber industry and the wider construction sector. As such, ensuring accurate measurement of distortion is a key step in addressing inefficiencies within timber processing. Currently, the FRITS frame method is the established approach used to gain an understanding of timber surface profile. The method, while reliable, is dependent upon relatively few measurements taken across a limited area of the overall surface, with a great deal of interpolation required. Further, the process is unavoidably slow and cumbersome, the immobile scanning equipment limiting where and when measurements can be taken and constricting the process as a whole. This thesis seeks to introduce LiDAR scanning as a new, alternative approach to distortion feature measurement. Although the technique is in its infancy as a measurement method within timber research, the practicalities of using LiDAR scanning are herein demonstrated, exploiting many of the advantages the technology has over current approaches. LiDAR scanning creates a much more comprehensive image of a timber surface, generating input data several orders of magnitude larger than that of the FRITS frame. Set-up and scanning time for LiDAR is also much quicker and more flexible than for existing methods, freeing the measurement process from many of the constraints of the FRITS frame so that it can be carried out in almost any environment. For this thesis, surface scans were carried out on seven Sitka spruce samples of dimensions 48.5 x 102 x 3000 mm using both the FRITS frame and a LiDAR scanner.
The samples used presented marked levels of distortion and were relatively free from knots. A computational measurement model was created to extract feature measurements from the raw LiDAR data, enabling an assessment of each piece of timber to be carried out in accordance with existing standards. Assessment of distortion features focused primarily on the measurement of twist, due to its strong prevalence in spruce and the considerable concern it generates within the construction industry. Additional measurements of surface inclination and bow were also made with each method to further establish LiDAR's credentials as a viable alternative. Overall, feature measurements generated by the new LiDAR method compared well with those of the established FRITS method. From these investigations, recommendations were made to address inadequacies within existing measurement standards, namely their reliance on generalised and interpretative descriptions of distortion. The potential for further uses of LiDAR scanning within timber research is also discussed.
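As a minimal illustration of the kind of distortion feature being measured, twist over a board section can be expressed as the deviation of a fourth corner from the plane through the other three. This is a simplification (standards such as EN 1310 prescribe the exact procedure and reference length), and the corner heights below are hypothetical:

```python
# Illustrative twist measure for a rectangular board section: with three
# corners defining a reference plane, twist is the out-of-plane deviation of
# the fourth corner. Heights are hypothetical, in mm.

def twist_deviation(z_corners):
    """z heights (mm) of the 4 corners, ordered (0,0), (L,0), (0,W), (L,W)."""
    z00, zL0, z0W, zLW = z_corners
    # For a plane through the first three corners, the fourth corner would
    # sit at zL0 + z0W - z00; the residual is the twist deviation.
    return zLW - (zL0 + z0W - z00)

print(twist_deviation((0.0, 1.0, 2.0, 7.0)))  # -> 4.0
```

Both the FRITS frame and the LiDAR pipeline ultimately reduce their surface data to feature measures of this kind, which is what makes the two methods directly comparable.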
Abstract:
Congenital vertebral malformations are common in brachycephalic “screw-tailed” dog breeds such as French bulldogs, English bulldogs, Boston terriers, and Pugs. These vertebral malformations disrupt the normal vertebral column anatomy and biomechanics, potentially leading to deformity of the vertebral column and subsequent neurological dysfunction. The initial aim of this work was to determine whether the congenital vertebral malformations identified in these breeds could be translated into a radiographic classification scheme used in humans, giving an improved classification with clear and well-defined terminology, in the expectation that this would facilitate future study and clinical management in the veterinary field. Two observers who were blinded to the neurologic status of the dogs classified each vertebral malformation based on the human classification scheme of McMaster and were able to translate the malformations successfully into a new classification scheme for veterinary use. The subsequent aim was to assess the nature and impact of the vertebral column deformity engendered by these congenital vertebral malformations in the target breeds. As no gold standard exists in veterinary medicine for the calculation of the degree of deformity, it was elected to adapt the human equivalent, termed the Cobb angle, as a potential standard reference tool for use in veterinary practice. For the validation of the Cobb angle measurement method, a computerised semi-automatic technique was used and assessed by multiple independent observers. They observed not only that kyphosis was the most common vertebral column deformity but also that patients with such deformity were more likely to suffer from neurological deficits, especially if their Cobb angle was above 35 degrees.
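The Cobb angle itself reduces to the angle between two vertebral endplate directions. A minimal sketch, assuming each endplate is digitised as a 2D direction vector from the radiograph (the vectors below are illustrative, not patient data):

```python
import math

# Hedged sketch: the Cobb angle is the angle between the endplates of the two
# most tilted vertebrae bounding a curve. The semi-automated tool in the study
# works from digitised landmarks on the same principle.

def cobb_angle(v1, v2):
    """Angle in degrees between two endplate direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return min(ang, 180.0 - ang)  # endplate lines are undirected

# Endplates tilted +20 and -20 degrees from horizontal give a 40-degree curve:
a = (math.cos(math.radians(20)), math.sin(math.radians(20)))
b = (math.cos(math.radians(-20)), math.sin(math.radians(-20)))
print(round(cobb_angle(a, b)))  # -> 40
```

On this scale, the study's 35-degree threshold separates the deformities more likely to be associated with neurological deficits.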
Abstract:
The dorsolateral prefrontal cortex (DLPFC) has been implicated in the pathophysiology of mental disorders. Previous region-of-interest MRI studies that attempted to delineate this region adopted various landmarks and measurement techniques, with inconsistent results. We developed a new region-of-interest measurement method to obtain morphometric data of this region from structural MRI scans, taking into account knowledge from cytoarchitectonic postmortem studies and the large inter-individual variability of this region. MRI scans of 10 subjects were obtained, and DLPFC tracing was performed in the coronal plane by two independent raters using the semi-automated software Brains2. The intra-class correlation coefficients between two independent raters were 0.94 for the left DLPFC and 0.93 for the right DLPFC. The mean +/- S.D. DLPFC volumes were 9.23 +/- 2.35 ml for the left hemisphere and 8.20 +/- 2.08 ml for the right hemisphere. Our proposed method has high inter-rater reliability and is easy to implement, permitting the standardized measurement of this region for clinical research applications. (C) 2009 Elsevier Ireland Ltd. All rights reserved.
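For illustration, one common inter-rater index, the two-way consistency ICC(3,1), can be computed as below; the abstract does not state which ICC variant was used for the reported 0.93-0.94 coefficients, so this is a sketch under that assumption, with made-up ratings:

```python
# Hedged sketch of the two-way mixed, consistency ICC (ICC(3,1)) for two
# raters: (MS_subjects - MS_error) / (MS_subjects + (k-1) * MS_error).
# The study's exact ICC variant is not specified; ratings below are invented.

def icc_3_1(r1, r2):
    n = len(r1)
    k = 2  # two raters
    grand = (sum(r1) + sum(r2)) / (n * k)
    subj_means = [(a + b) / 2 for a, b in zip(r1, r2)]
    rater_means = [sum(r1) / n, sum(r2) / n]
    ms_subj = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_tot = sum((x - grand) ** 2 for x in list(r1) + list(r2))
    ss_err = ss_tot - k * sum((m - grand) ** 2 for m in subj_means) - ss_rater
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Two raters who disagree only on the ordering of two subjects:
print(icc_3_1([1, 2, 3], [1, 3, 2]))  # -> 0.5
```

A consistency ICC ignores a constant offset between raters, which is appropriate when systematic tracing bias is acceptable and only relative agreement matters.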
Abstract:
Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting, because of its attractive mechanical properties. As a result, adhesives are being increasingly used in industries such as automotive, aerospace and construction. It is thus highly important to predict the strength of bonded joints, to assess the feasibility of joining during the fabrication of components (e.g. due to complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends considering different values of adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of the definition of the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used, together with a J-integral approach that takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used for the evaluation of the crack tip opening and the adherends' rotation at the crack tip during the test, supported by a Matlab® sub-routine for the automated extraction of these quantities. As an output of this work, a comparative evaluation between bonded systems with different values of adherend thickness is carried out, and complete fracture data is provided in tension for the subsequent strength prediction of joints with identical conditions.
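For reference, a commonly used closed-form J-integral expression for the DCB specimen that includes a root-rotation contribution (a standard beam-theory formulation; the exact expression adopted in this work may differ) can be written as

$$G_{\mathrm{I}} = \frac{12\,(P_u a)^2}{E_x h^3} + P_u\,\theta_o,$$

where $P_u$ is the applied load per unit specimen width, $a$ the crack length, $E_x$ the adherend longitudinal modulus, $h$ the adherend thickness, and $\theta_o$ the adherends' rotation at the crack tip. The first term is the simple beam-theory energy release rate, while the second is why the optical measurement of $\theta_o$ matters: it captures the plasticity-related contribution that beam theory alone misses.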
Abstract:
Adhesive bonding is an excellent alternative to traditional joining techniques such as welding, mechanical fastening or riveting. However, many factors have to be accounted for during joint design to accurately predict the joint strength. One of these is the adhesive layer thickness (tA). Most published results are for epoxy structural adhesives, tailored to perform best with small values of tA, and these show that the lap joint strength decreases with increasing tA (the optimum joint strength is usually obtained with tA values between 0.1 and 0.2 mm). Recently, polyurethane adhesives designed to perform with larger tA values were made available in the market, and their fracture behaviour has not yet been studied. In this work, the effect of tA on the tensile fracture toughness (GIC) of a bonded joint is studied, considering a novel high strength and ductile polyurethane adhesive for the automotive industry. The work consists of the fracture characterization of the bond by a conventional technique and by the J-integral technique, which accurately accounts for root rotation effects. An optical measurement method is used for the evaluation of the crack tip opening (δn) and the adherends' rotation at the crack tip (θo) during the test, supported by a Matlab® sub-routine for the automated extraction of these parameters. As an output of this work, fracture data in tension is provided for the selected adhesive, enabling the subsequent strength prediction of bonded joints.
Abstract:
The centrifugal liquid membrane (CLM) cell has been utilized for chiroptical studies of liquid-liquid interfaces with a conventional circular dichroism (CD) spectropolarimeter. These studies required the characterization of the optical properties of the rotating cylindrical CLM glass cell, which was used under high-speed rotation. In the present study, we have measured the circular and linear dichroism (CD and LD) spectra and the circular and linear birefringence (CB and LB) spectra of the CLM cell itself, as well as those of porphyrin aggregates formed at the liquid-liquid interface in the CLM cell, applying the Mueller matrix measurement method. From the results, it was confirmed that the CLM-CD spectra of the interfacial porphyrin aggregates observed by a conventional CD spectropolarimeter should be correct irrespective of the LD and LB signals in the CLM cell.
Abstract:
The purpose of this work is to compile the measurement problems of the pulping process and the possible measurement techniques for solving them. The main focus is on online measurement techniques. The work consists of three parts. The first part is a literature review presenting the basic measurements and control needs of a modern pulping process. It covers the entire fibre line from wood handling to bleaching, as well as the chemical recovery cycle: the evaporation plant, recovery boiler, causticizing plant and lime kiln. In the second part, the measurement problems and possible measurement techniques are compiled into a "roadmap". The information was gathered by visiting three Finnish pulp mills and by interviewing equipment and measurement technology experts. Based on the interviews, there appears to be a need for a better understanding of process chemistry, which is why concentration measurements were chosen as the subject of further study. The final part presents possible measurement techniques for solving the concentration measurement problems. The selected techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE) and laser-induced plasma emission spectroscopy (LIPS). All of the techniques can be used as online-coupled process development tools. Development costs were estimated for an online device connected to process control. The development costs range from zero person-years for the FTIR technique to five person-years for the CE device; they depend on the maturity of the technique and its readiness for solving a given problem. The final part of the work also assesses the techno-economic feasibility of solving one measurement problem: washing loss measurement. Lignin content would describe the true washing loss better than the current measurements; at present, either sodium or COD washing loss is measured. Lignin content can be measured with UV absorption. The CE device could also be used for washing loss measurement, at least in the process development phase.
The economic analysis is based on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize the operation of the washing plant. An investment in a stabilizing system is profitable if the actual operating point is far enough from the cost minimum, or if washer operation fluctuates, i.e., the standard deviation of the washing loss is large. A measurement and control system costing €50,000 has a payback time of less than 0.5 years in unstable operation if the COD washing loss varies between 5.2 and 11.6 kg/odt with a set point of 8.4 kg/odt. The dilution factor then varies between 1.7 and 3.6 m3/odt with a set point of 2.5 m3/odt.
Abstract:
The measurement of fluid volumes in cases of pericardial effusion is a necessary procedure during autopsy. With the increased use of virtual autopsy methods in forensics, the need for a quick volume measurement method on computed tomography (CT) data arises, especially since methods such as CT angiography can potentially alter the fluid content in the pericardium. We retrospectively selected 15 cases with hemopericardium, which underwent post-mortem imaging and autopsy. Based on CT data, the pericardial blood volume was estimated using segmentation techniques and downsampling of CT datasets. Additionally, a variety of measures (distances, areas and 3D approximations of the effusion) were examined to find a quick and easy way of estimating the effusion volume. Segmentation of CT images as shown in the present study is a feasible method to measure the pericardial fluid amount accurately. Downsampling of a dataset significantly increases the speed of segmentation without losing too much accuracy. Some of the other methods examined might be used to quickly estimate the severity of the effusion volumes.
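The segmentation-based volume estimate itself reduces to counting labelled voxels and multiplying by the voxel volume, as in this minimal sketch (the mask and spacing values are illustrative, not from the study's data); downsampling coarsens the spacing and reduces the voxel count, which is why it speeds up segmentation at a small cost in accuracy:

```python
# Minimal sketch of a segmentation-based volume estimate: effusion volume is
# the number of segmented voxels times the single-voxel volume. Mask and
# spacing values are illustrative only.

def volume_ml(mask, spacing_mm):
    """mask: nested lists of 0/1 per voxel; spacing: (dx, dy, dz) in mm."""
    voxels = sum(v for plane in mask for row in plane for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxels * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3

# A toy 2x2x2 mask, fully filled, with 1 mm voxels: 8 mm^3 = 0.008 ml.
mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
print(volume_ml(mask, (1.0, 1.0, 1.0)))
```

Downsampling by a factor of 2 per axis multiplies the spacing by 2 and divides the voxel count roughly by 8, so the product, and hence the estimated volume, stays approximately the same.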
Abstract:
Satellite measurement validations, climate models, atmospheric radiative transfer models and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that does not have many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This reconstruction algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as proven by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. Further, an algorithm is described that can find the position along the optical axis of small particles as well as large complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another measurement method. The feasibility of a new cloud particle instrument that has advantages over current standard instruments is thus demonstrated.
These advantages include a unique ability to detect shattered particles using three-dimensional positions, and a sample volume size that does not vary with particle size or airspeed. It also is able to yield two-dimensional particle profiles using the same measurements.
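One family of reconstruction methods with the distance-invariant sample spacing described above is angular-spectrum propagation, sketched below; this is a generic textbook implementation and may differ in detail from HOLODEC's actual algorithm:

```python
import numpy as np

# Hedged sketch of angular-spectrum propagation: the field is transformed to
# spatial frequencies, each plane-wave component is phase-shifted according to
# the propagation distance z, and the result is transformed back. The output
# grid is identical to the input grid for any z, which is the property
# highlighted in the abstract.

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a 2D complex field a distance z (all lengths in meters)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward and then backward by the same distance recovers the
# original (band-limited) field:
f = np.ones((8, 8), dtype=complex)
fwd = angular_spectrum(f, 0.5e-6, 3e-6, 0.01)
back = angular_spectrum(fwd, 0.5e-6, 3e-6, -0.01)
print(np.allclose(back, f))  # -> True
```

Because only a per-frequency phase factor depends on z, the lateral sample spacing never changes with reconstruction distance, unlike single-Fresnel-transform reconstructions whose output pixel pitch scales with z.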