966 results for Non-destructive methods


Relevance:

90.00%

Publisher:

Abstract:

This thesis investigated the use of orbital TIG welding for welding the joint welds of pressurized compound tubes. The study was carried out for a workshop owned by Warkaus Works Oy. The aim was to produce a welding procedure qualified according to standard SFS-EN ISO 15613. The literature review presents the welding processes and equipment used in orbital welding for different joint types. The manufacturing process and structure of hot-drawn compound tubes are also presented, together with the weldability of the tubes, the weld preparations, and their applications in power plant boilers. Before the procedure qualification test, a large number of trial welds were carried out with the company's orbital TIG equipment. Based on the successful trial welds, a welding program was created and used to weld the test tubes for the procedure test. A compound tube manufactured by Sandvik was selected for the welding procedure. The test welds were subjected to non-destructive and destructive examinations, and based on the approved examinations a welding procedure qualification record was drawn up. In addition, the thesis compares the welding time of mechanized welding with that of manual welding. Orbital TIG welding always requires that the groove to be welded starts from as uniform a condition as possible, so that the reliability of the process is as high as possible. With sufficiently accurate groove preparation it is possible to produce grooves precise enough for high-quality welding. Welding also requires precise settings of the wire feed and the electrode to achieve good weld quality. The stiff weld pool of the filler metal and the penetration requirement of the weld make orbital welding more difficult.

Relevance:

90.00%

Publisher:

Abstract:

Because of the high resolution of the photographs of the samples, they are provided in a supplementary file, since the formatting requirements did not allow these images to be displayed in full within the thesis.

Relevance:

90.00%

Publisher:

Abstract:

The photothermal effect refers to the heating of a sample due to the absorption of electromagnetic radiation. Photothermal (PT) heat generation, an example of energy conversion, has in general three kinds of applications: (1) PT material probing, (2) PT material processing and (3) PT material destruction. The temperatures involved increase from application 1 to 3. Of the above three, PT material probing makes the most significant contribution to science and technology. Photothermal material characterization relies on high-sensitivity detection techniques to monitor the effects caused by PT heating of a sample. The photothermal method is a powerful, high-sensitivity, non-contact tool used for non-destructive thermal characterization of materials. The high sensitivity of photothermal methods has led to their application to the analysis of low-absorbance samples. Laser calorimetry, photothermal radiometry, the pyroelectric technique, the photoacoustic technique, the photothermal beam deflection technique, etc. come under the broad class of photothermal techniques. However, the choice of a suitable technique depends upon the nature of the sample, the purpose of the measurement, the nature of the light source used, etc. The present investigations are carried out on polymer thin films employing the photothermal beam deflection technique for the determination of their thermal diffusivity. Here the sample is excited by a He-Ne laser (λ = 6328 Å), which acts as the pump beam. Due to the refractive index gradient established at the sample surface and in the adjacent coupling medium, another optical beam, called the probe beam (diode laser, λ = 6500 Å), experiences a deflection when passed through this region; it is detected using a position-sensitive detector whose output is fed to a lock-in amplifier, from which the amplitude and phase of the deflection can be obtained directly. The amplitude and phase of the signal are analysed to determine the thermal diffusivity. The production of polymer thin-film samples has gained considerable attention over the past few years. Plasma polymerization is an inexpensive tool for fabricating organic thin films. It refers to the formation of polymeric materials under the influence of a plasma, which is generated by some kind of electric discharge. Here the plasma of the monomer vapour is generated by radio-frequency (MHz) techniques. Plasma polymerization yields homogeneous, highly adhesive, thermally stable, pinhole-free, dielectric, highly branched and cross-linked polymer films. The possible linkages formed in the polymers are suggested by comparing the FTIR spectra of the monomer and the polymer. Near-IR overtone investigations on some organic molecules using the local mode model are also carried out. Higher vibrational overtones often provide spectral simplification and greater resolution of peaks corresponding to non-equivalent X-H bonds, where X is typically C, N or O. Vibrational overtone spectroscopy of molecules containing X-H oscillators is now a well-established tool for molecular investigations. Conformational and steric differences between bonds and the structural inequivalence of CH bonds (methyl, aryl, acetylenic, etc.) are resolvable in the higher overtone spectra. The local mode model, in which the X-H oscillators are considered to be loosely coupled anharmonic oscillators, has been widely used for the interpretation of overtone spectra.
If a single local oscillator is excited from the vibrational ground state to the vibrational state v, the transition energy of the local mode overtone is given by ΔE(0→v) = Av + Bv². A plot of ΔE/v versus v will yield A, the local mode frequency, as the intercept and B, the local mode diagonal anharmonicity, as the slope. Here A − B gives the mechanical frequency X₁ of the oscillator and B = X₂ is the anharmonicity of the bond. The local mode parameters X₁ and X₂ vary for non-equivalent X-H bonds and are sensitive to the inter- and intramolecular environment of the X-H oscillator.
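
As a numerical illustration of the fit described above, the sketch below performs the linear regression of ΔE/v against v; the overtone energies are synthetic values chosen only for the example, not data from the thesis.

```python
import numpy as np

# Illustrative overtone transition energies (cm^-1) for v = 1..5 of a
# hypothetical C-H local oscillator; values invented for the sketch.
v = np.array([1, 2, 3, 4, 5])
delta_E = np.array([2950.0, 5780.0, 8490.0, 11080.0, 13550.0])

# Birge-Sponer-type plot: Delta_E / v = A + B * v
# -> intercept A is the local mode frequency, slope B the diagonal anharmonicity.
B, A = np.polyfit(v, delta_E / v, 1)   # polyfit returns [slope, intercept]

mechanical_frequency = A - B   # X1 in the abstract's notation
anharmonicity = B              # X2

print(f"A (local mode frequency) = {A:.1f} cm^-1")
print(f"B (anharmonicity)        = {B:.1f} cm^-1")
print(f"X1 = A - B               = {mechanical_frequency:.1f} cm^-1")
```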

Relevance:

90.00%

Publisher:

Abstract:

Among the large number of photothermal techniques available, photoacoustics assumes a very significant place because of its essential simplicity and the variety of applications it finds in science and technology. The photoacoustic (PA) effect is the generation of an acoustic signal when a sample, kept inside an enclosed volume, is irradiated by an intensity-modulated beam of radiation. The radiation absorbed by the sample is converted into thermal waves by nonradiative de-excitation processes. The propagating thermal waves cause a corresponding expansion and contraction of the gas medium surrounding the sample, which in turn can be detected as sound waves by a sensitive microphone. These sound waves have the same frequency as the initial modulation frequency of the light. The lock-in detection method enables one to have a sufficiently high signal-to-noise ratio for the detected signal. The PA signal amplitude depends on the optical absorption coefficient of the sample and its thermal properties. The PA signal phase is a function of the thermal diffusivity of the sample. Measurement of the PA amplitude and phase enables one to obtain valuable information about the thermal and optical properties of the sample. Since the PA signal depends on the optical and thermal properties of the sample, their variation will be reflected in the PA signal. Therefore, if the PA signal is collected from various points on a sample surface, it will give a profile of the variations in the optical/thermal properties across the sample surface. Since the optical and thermal properties are affected by the presence of defects, interfaces, changes of material, etc., these will be reflected in the PA signal. By varying the modulation frequency, we can also get information about subsurface features. This is the basic principle of PA imaging or PA depth profiling. It is a quickly expanding field with potential applications in thin film technology, chemical engineering, biology, medical diagnosis, etc. Since it is a non-destructive method, PA imaging has added advantages over some of the other imaging techniques. A major part of the work presented in this thesis is concerned with the development of a PA imaging setup that can be used to detect the presence of surface and subsurface defects in solid samples. Determination of thermal transport properties such as thermal diffusivity, effusivity, conductivity and heat capacity of materials is another application of the photothermal effect. There are various methods, depending on the nature of the sample, to determine these properties. However, there are only a few methods developed to determine all these properties simultaneously. Even though a few techniques to determine the above thermal properties individually for a coating can be found in the literature, no technique is available for the simultaneous measurement of these parameters for a coating. We have developed a scanning photoacoustic technique that can be used to determine all the above thermal transport properties simultaneously in the case of opaque coatings such as paints. Another work presented in this thesis is the determination of the thermal effusivity of many bulk solids by a scanning photoacoustic technique. This is one of the very few methods developed to determine thermal effusivity directly.
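
The depth-profiling principle mentioned above is governed by the thermal diffusion length, μ = sqrt(α/(πf)), which shrinks as the modulation frequency f increases; only features within roughly one diffusion length of the surface contribute to the PA signal. A minimal sketch, assuming an illustrative diffusivity value rather than one measured in the thesis:

```python
import numpy as np

def thermal_diffusion_length(alpha_m2_s: float, f_hz: float) -> float:
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)) in metres."""
    return np.sqrt(alpha_m2_s / (np.pi * f_hz))

# Assumed diffusivity of an opaque polymer-like coating (illustrative only).
alpha = 1.0e-7  # m^2/s

# Lower modulation frequency -> thermal waves penetrate deeper,
# which is what allows subsurface features to be probed.
for f in (10, 100, 1000):  # Hz
    mu_um = thermal_diffusion_length(alpha, f) * 1e6
    print(f"f = {f:5d} Hz  ->  probing depth ~ {mu_um:6.1f} micrometres")
```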

Relevance:

90.00%

Publisher:

Abstract:

Electroanalytical techniques represent a class of powerful and versatile analytical methods based on the electrical properties of a solution of the analyte when it is made part of an electrochemical cell. They offer high sensitivity, accuracy, precision and a large linear dynamic range, and the cost of instrumentation is relatively low compared to other instrumental methods of analysis. Many solid-state electrochemical sensors have now been commercialised. Potentiometry is a very simple electroanalytical technique with extraordinary analytical capabilities. Since valinomycin was introduced as an ionophore for K⁺, ion-selective electrodes have become one of the best studied and understood analytical devices. They can be used for the determination of substances ranging from simple inorganic ions to complex organic molecules. They are a very attractive option owing to the wide range of applications and the ease of use of the instruments employed. They also possess the advantages of short response time, high selectivity and very low detection limits. Moreover, analysis by these electrodes is non-destructive and adaptable to small sample volumes. The technique has become standard for medical researchers, biologists, geologists and environmental specialists. This thesis presents the synthesis and characterisation of five ionophores. Based on these ionophores, nine potentiometric sensors are fabricated for the determination of ions such as Pb²⁺, Mn²⁺, Ni²⁺, Cu²⁺ and Sal⁻ (salicylate). The electrochemical characterisation and analytical application studies of the developed sensors are also described. The thesis is divided into eight chapters.
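
For context, the potentiometric response of such ion-selective electrodes is normally modelled by the Nernst equation, E = E° + (2.303RT/zF)·log a, i.e. about 59.16/z mV per decade of activity at 25 °C. The calibration sketch below uses invented potentials for a hypothetical Pb²⁺ electrode; it only illustrates how the Nernstian slope would be extracted and is not data from the thesis.

```python
import numpy as np

# Hypothetical calibration data for a Pb2+ (z = +2) electrode:
# activities (mol/L) and measured cell potentials (mV). Illustrative only.
activities = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
potentials = np.array([-210.0, -181.0, -152.0, -123.0, -94.0])  # mV

# Nernstian model: E = E0 + S * log10(a); ideal S at 25 C is 59.16/z mV/decade.
slope, e0 = np.polyfit(np.log10(activities), potentials, 1)

print(f"fitted slope      = {slope:.1f} mV/decade")
print(f"ideal slope (z=2) = {59.16 / 2:.1f} mV/decade")
print(f"standard potential E0 = {e0:.1f} mV")
```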

Relevance:

90.00%

Publisher:

Abstract:

The motivation for the present work is a project sanctioned by TSRO. The work involved the development of a quick and reliable test procedure using microwaves for the inspection of cured propellant samples, and a method to monitor the curing conditions of a propellant mix undergoing the curing process. Normal testing of the propellant samples involves cutting a piece from each carton and testing it for tensile strength. The values are then compared with standard ones and, based on this result, the sample is accepted or rejected. The tensile strength is a measure of the degree of cure of the propellant mix. But this measurement is a destructive procedure, as it involves cutting the sample. Moreover, it does not guarantee against non-uniform curing due to power failure, hot air-line failure, operator error, etc. This necessitated the development of a quick and reliable non-destructive test procedure.

Relevance:

90.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may occur in a regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic) manner and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes, and such stars are generally known as intrinsic variables; in other cases it is due to external processes, like eclipses or rotation, and the stars are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as its light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is to have an expert visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages such as observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong detection of the period can be due to several reasons, such as power leakage to other frequencies, which arises from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
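
As an illustration of the non-parametric route named above, the following is a minimal sketch of Phase Dispersion Minimisation (Stellingwerf 1978): the light curve is folded on each trial period, binned in phase, and the period minimising the ratio of within-bin variance to total variance is chosen. The bin count, trial grid and synthetic light curve are illustrative choices, not the pipeline actually used in the thesis.

```python
import numpy as np

def pdm_statistic(time, mag, trial_period, n_bins=10):
    """Stellingwerf theta: ratio of the mean within-bin variance of the
    phase-folded light curve to the overall variance. Small theta -> good period."""
    phase = (time / trial_period) % 1.0
    overall_var = np.var(mag, ddof=1)
    bin_idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bin_idx == b]
        if m.size > 1:
            num += (m.size - 1) * np.var(m, ddof=1)
            den += m.size - 1
    return (num / den) / overall_var if den else np.inf

def pdm_best_period(time, mag, periods):
    thetas = np.array([pdm_statistic(time, mag, p) for p in periods])
    return periods[np.argmin(thetas)], thetas

# Synthetic, unevenly sampled light curve with a true period of 0.75 d.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 400))
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.75) + rng.normal(0, 0.02, t.size)

trial = np.linspace(0.5, 1.0, 2001)
best, _ = pdm_best_period(t, m, trial)
print(f"recovered period ~ {best:.4f} d (true 0.75 d)")
```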

Relevance:

90.00%

Publisher:

Abstract:

As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost-effective, a range of feed evaluation techniques has been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, due to the impact of animal-feed interactions over and above that of feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to the degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds. However, with no host animal involvement, estimates of nutritional value are inferred by statistical association. In addition, given the costs involved, the practical value of many analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to understanding rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique (common to many in vitro methods), the desire to eliminate the use of surgically modified animals for routine feed evaluation, and parallel improvements in in vitro techniques will see this technique increasingly replaced. The majority of in vitro systems use substrate disappearance to assess degradation; however, this provides no information regarding the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative. However, as gas release alone is of little use, gas-based systems, where both degradation and fermentation gas release are measured simultaneously, are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of using multi-enzyme systems to examine degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood and that the temptation to over-interpret the data is avoided, so as to draw the appropriate conclusions. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs. (C) 2003 Elsevier B.V. All rights reserved.
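
Gas-based systems such as those mentioned above usually summarise the cumulative gas release with a simple kinetic model; the exponential model G(t) = a + b(1 − e^(−ct)) is a common choice, used here purely as an illustration since the review does not name a specific model. A minimal curve-fitting sketch with invented readings:

```python
import numpy as np
from scipy.optimize import curve_fit

def gas_model(t, a, b, c):
    """Cumulative gas production (mL) after t hours of incubation."""
    return a + b * (1.0 - np.exp(-c * t))

# Synthetic gas readings (mL) at typical incubation times (h); illustrative only.
t_h = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)
gas = np.array([5.1, 9.8, 17.9, 24.2, 37.5, 48.9, 53.0, 54.6])

(a, b, c), _ = curve_fit(gas_model, t_h, gas, p0=(1.0, 50.0, 0.05))
print(f"a = {a:.1f} mL, b = {b:.1f} mL, fractional rate c = {c:.3f} /h")
print(f"asymptotic gas production a + b = {a + b:.1f} mL")
```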

Relevance:

90.00%

Publisher:

Abstract:

Background: The aim of this study was to evaluate root coverage of gingival recessions and to compare graft vascularization in smokers and non-smokers. Methods: Thirty subjects, 15 smokers and 15 non-smokers, were selected. Each subject had one Miller Class I or II recession in a non-molar tooth. Clinical measurements of probing depth (PD), relative clinical attachment level (CAL), gingival recession (GR), and width of keratinized tissue (KT) were taken at baseline and 3 and 6 months after surgery. The recessions were treated surgically with a coronally positioned flap associated with a subepithelial connective tissue graft. A small portion of this graft was prepared for immunohistochemistry. Blood vessels were identified and counted by the expression of factor VIII-related antigen-stained endothelial cells. Results: Intragroup analysis showed that after 6 months there was a gain in CAL, a decrease in GR, and an increase in KT for both groups (P<0.05), whereas changes in PD were not statistically significant. Smokers had less root coverage than non-smokers (58.02% +/- 19.75% versus 83.35% +/- 18.53%; P<0.05). Furthermore, smokers had more residual GR (1.48 +/- 0.79 mm versus 0.52 +/- 0.60 mm) than non-smokers (P<0.05). Histomorphometry of the donor tissue revealed a blood vessel density of 49.01 +/- 11.91 vessels/200x field for non-smokers and 36.53 +/- 10.23 vessels/200x field for smokers (P<0.05). Conclusion: Root coverage with a subepithelial connective tissue graft was negatively affected by smoking, which limited and jeopardized treatment results.

Relevance:

90.00%

Publisher:

Abstract:

The thermal decomposition of salbutamol (a β2-selective adrenoreceptor agonist) was studied using differential scanning calorimetry (DSC) and thermogravimetry/derivative thermogravimetry (TG/DTG). It was observed that the commercial sample showed a thermal profile different from that of the standard sample, owing to the presence of excipients. These compounds increase the thermal stability of the drug. Moreover, a higher activation energy was calculated for the pharmaceutical sample; it was estimated by isothermal and non-isothermal methods for the first stage of the thermal decomposition process. For the isothermal experiments the average values were E(act) = 130 kJ mol(-1) (standard sample) and E(act) = 252 kJ mol(-1) (pharmaceutical sample) in a dynamic nitrogen atmosphere (50 mL min(-1)). For the non-isothermal method, the activation energy was obtained from the plot of log heating rates vs. 1/T in a dynamic air atmosphere (50 mL min(-1)). The calculated values were E(act) = 134 kJ mol(-1) (standard sample) and E(act) = 139 kJ mol(-1) (pharmaceutical sample).
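
The non-isothermal estimate described above (log of heating rate versus 1/T at a fixed conversion) corresponds to an Ozawa/Flynn-Wall-type treatment, where, with the Doyle approximation, the slope equals −0.4567·E/R. A small sketch with invented (β, T) pairs, not the study's data:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical temperatures (K) at a fixed conversion for several heating rates
# (K/min); values are invented for illustration, not taken from the study.
beta = np.array([2.5, 5.0, 10.0, 20.0])        # heating rates
T = np.array([483.0, 492.0, 502.0, 513.0])     # temperature at the same conversion

# Ozawa/Flynn-Wall: log10(beta) = const - 0.4567 * E / (R * T)
slope, _ = np.polyfit(1.0 / T, np.log10(beta), 1)
E_act = -slope * R / 0.4567 / 1000.0  # kJ/mol

print(f"slope = {slope:.0f} K, activation energy ~ {E_act:.0f} kJ/mol")
```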

Relevance:

90.00%

Publisher:

Abstract:

The motivation for this thesis work is the need to improve the reliability of equipment and the quality of service to railway passengers, as well as the requirement for cost-effective and efficient condition maintenance management in rail transportation. This thesis work develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is done manually by a human operator, where people rely on inference systems and assumptions to develop conclusions. The use of condition monitoring allows maintenance to be scheduled, or other actions to be taken to avoid the consequences of failure, before the failure occurs. Manual or automated condition monitoring of materials in fields of public transportation like railways, aerial navigation and traffic safety, where safety is of prime importance, needs non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator who moves along the rail sleepers and gathers information by visual and sound analysis to examine the presence of cracks. Human inspectors working on the lines visually inspect wooden sleepers to judge their quality. In this project work, a machine vision system is developed based on the manual visual analysis system; it uses digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, can be error prone, and discrimination is sometimes difficult even for a human operator because of frequent changes in the inspected material. The machine vision system developed classifies the condition of the material by examining individual pixels of images, processing them and attempting to develop conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge of the manual procedure. The pattern recognition approach for this thesis work was developed and achieved by a non-destructive testing method to identify the flaws of manually performed condition monitoring of sleepers. In this method, a test vehicle is designed to capture sleeper images similar to the visual inspection by a human operator, and the raw data for the pattern recognition approach are provided by the captured images of the wooden sleepers. The data from the NDT method were further processed and appropriate features were extracted. The aim of the data collection by the NDT method is to achieve high accuracy and reliable classification results. A key idea is to use a non-supervised classifier, based on the features extracted from the method, to discriminate the condition of wooden sleepers into either good or bad. A self-organising map is used as the classifier for the wooden sleeper classification. In order to achieve greater integration, the data collected by the machine vision system were made to interface with one another by a strategy called fusion. Data fusion was examined at two different levels, namely sensor-level fusion and feature-level fusion. As the goal was to reduce the effect of human error on the classification of rail sleepers as good or bad, the results obtained by feature-level fusion, compared with those of the actual classification, were satisfactory.
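
A minimal sketch of the self-organising-map step described above: image-derived feature vectors are mapped onto a small 2-D grid of prototypes, and grid regions can then be interpreted as "good" or "bad" sleeper conditions. The two features, grid size and training schedule are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny self-organising map; returns the prototype grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)            # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.1  # shrinking neighbourhood
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)   # neighbourhood update
            step += 1
    return weights

def winner(weights, x):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Illustrative 2-D features per sleeper image: (crack-like edge density, texture energy).
rng = np.random.default_rng(1)
good = rng.normal([0.1, 0.8], 0.05, (50, 2))
bad = rng.normal([0.7, 0.3], 0.05, (50, 2))
som = train_som(np.vstack([good, bad]))

print("BMU of a good-looking sample:", winner(som, np.array([0.12, 0.78])))
print("BMU of a crack-like sample:  ", winner(som, np.array([0.68, 0.32])))
```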

Relevance:

90.00%

Publisher:

Abstract:

Since the last decade the problem of surface inspection has been receiving great attention from the scientific community; the quality control and the maintenance of products are key points in several industrial applications. Railway associations spend much money to check the railway infrastructure. The railway infrastructure is a particular field in which periodic surface inspection can help the operator prevent critical situations, and its maintenance and monitoring are an important aspect for railway associations. That is why surface inspection of the railway is also important to the railroad authority, to investigate track components, identify problems and find out how to solve them. In the railway industry, problems are usually found in railway sleepers, overhead, fasteners, rail head, switching and crossing, and in the ballast section as well. In this thesis work I have reviewed some research papers based on AI techniques together with NDT techniques, which are able to collect data from the test object without causing any damage. The research works which I have reviewed demonstrate that, by adopting AI-based systems, it is possible to solve almost all of these problems and that such systems are very reliable and efficient for diagnosing problems in this transportation domain. I have reviewed solutions provided by different companies based on AI techniques, their products, and some white papers provided by some of those companies. AI-based techniques like machine vision, stereo vision, laser-based techniques and neural networks are used in most cases to solve the problems handled by railway engineers. The problems in railways handled by the AI-based techniques are addressed through the NDT approach, which is a very broad, interdisciplinary field that plays a critical role in assuring that structural components and systems perform their function in a reliable and cost-effective fashion. The NDT approach ensures the uniformity, quality and serviceability of materials without causing any damage to the material being tested. These testing methods include visual and optical testing, radiography, magnetic particle testing, ultrasonic testing, penetrant testing, electromechanical testing and acoustic emission testing. The inspection procedure is carried out periodically for better maintenance, and it is done by railway engineers manually with the aid of AI-based techniques. The main idea of the thesis work is to demonstrate how the problems of this transportation area can be reduced, based on the works done by different researchers and companies. I have also provided some ideas and comments on those works and tried to propose better inspection methods where they are needed. The scope of this thesis work is the automatic interpretation of data from NDT, with the goal of detecting flaws accurately and efficiently. AI techniques such as neural networks, machine vision, knowledge-based systems and fuzzy logic have been applied to a wide spectrum of problems in this area. Another scope is to provide an insight into possible research methods concerning railway sleeper, fastener, ballast and overhead inspection by automatic interpretation of data. In this thesis work I have discussed the problems which arise in railway sleepers, fasteners, overhead and ballasted track. For this reason I have reviewed some research papers related to these areas and demonstrated how their systems work and the results of those systems. The demonstrations are followed by the advantages of using AI techniques in contrast with the manual systems that existed previously. This work aims to summarize the findings of a large number of research papers deploying artificial intelligence (AI) techniques for the automatic interpretation of data from non-destructive testing (NDT). Problems in the rail transport domain are mainly discussed in this work. The overall work of this thesis covers the inspection of railway sleepers, fasteners, ballast and overhead.

Relevance:

90.00%

Publisher:

Abstract:

Numerous studies have investigated non-destructive methods for evaluating materials and their application to materials with complex matrices, such as wood. One of the first non-destructive methods investigated for such applications was transverse vibration. Despite its simple conception, and despite the great advances obtained in this area with other methods, such as ultrasound, the transverse vibration method for determining the modulus of elasticity of wood shows great application potential, mainly because of the precision of the mathematical model associated with it and the possibility of applying it to pieces of structural dimensions (in-grade testing). This work presents the use of this method for determining the modulus of elasticity of three eucalyptus species. Specimens of 2 cm x 2 cm x 46 cm of E. grandis, E. saligna and E. citriodora were tested non-destructively and by conventional mechanical bending tests. The non-destructive tests were carried out with the BING (Beam Identification by Non-destructive Grading) system, which allows analysis of the vibrations of the material in the time and frequency domains. The results showed a good correlation between the two types of test, justifying the start of tests with pieces of structural dimensions in order to make the technique viable for structural grading practice.
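
For reference, the dynamic modulus of elasticity is commonly obtained from the first free-free flexural resonance through the Euler-Bernoulli relation E = 4π²f₁²ρAL⁴/(λ₁⁴I), with λ₁ ≈ 4.730; this is the standard textbook relation and not necessarily the exact model implemented in the BING system. A small sketch with assumed specimen values:

```python
import math

# Assumed properties of a 2 cm x 2 cm x 46 cm eucalyptus specimen (illustrative).
L = 0.46          # free-free span (m)
b = h = 0.02      # cross-section (m)
rho = 750.0       # density (kg/m^3)
f1 = 475.0        # first flexural resonance frequency (Hz), assumed reading

A = b * h                      # cross-sectional area (m^2)
I = b * h**3 / 12.0            # second moment of area (m^4)
lam1 = 4.730                   # first free-free mode constant (lambda_1 * L)

# Euler-Bernoulli free-free beam: E = 4 pi^2 f1^2 rho A L^4 / (lam1^4 I)
E = 4 * math.pi**2 * f1**2 * rho * A * L**4 / (lam1**4 * I)
print(f"dynamic MOE ~ {E / 1e9:.1f} GPa")
```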

Relevance:

90.00%

Publisher:

Abstract:

Currently there is still a high demand for quality control in the manufacturing processes of mechanical parts. This keeps alive the need for the inspection of final products, ranging from dimensional analysis to the chemical composition of products. Usually this task may be done through various non-destructive and destructive methods that ensure the integrity of the parts. The results generated by these modern inspection tools end up not being able to geometrically define the real damage and, therefore, cannot be properly displayed on a computing environment screen. Virtual 3D visualization may help identify damage that would hardly be detected by other methods. One may find some commercial software packages that seek to address the stages of design and simulation of mechanical parts in order to predict possible damage, trying to diminish potential undesirable events. However, the challenge of developing software capable of integrating the various activities of design, product inspection, results of non-destructive testing, as well as the simulation of damage, still needs the attention of researchers. This was the motivation to conduct a methodological study for the implementation of a versatile CAD/CAE computational kernel capable of helping programmers develop software applied to the activities of design and simulation of mechanical parts under stress. This research presents interesting results obtained from the use of the developed kernel, showing that it was successfully applied to design case studies including parts with specific geometries, namely mechanical prostheses, heat exchangers, and oil and gas piping. Finally, conclusions are presented regarding the experience of merging CAD and CAE theories to develop the kernel so as to result in a tool adaptable to various applications of the metalworking industry.

Relevance:

90.00%

Publisher:

Abstract:

Aim. Extrinsic compression of the popliteal artery and the absence of surrounding anatomical abnormalities characterize the functional popliteal artery entrapment syndrome (PAES). The diagnosis is confirmed in individuals who have typical symptoms of popliteal entrapment and occlusion or important stenosis of the popliteal artery on color duplex sonography (CDS), magnetic resonance imaging (MRI) or arteriography during active plantar flexion-extension maneuvers. However, variable findings in normal asymptomatic subjects have raised doubts as to the validity of these tests. The purpose of this study was to compare the frequency of popliteal artery compression in two groups of asymptomatic subjects, athletes and non-athletes. Methods. Forty-two individuals were studied. Twenty-one subjects were indoor soccer players and 21 were sedentary individuals. Physical activity was evaluated through questionnaires, anthropometric measurements, and a cardiopulmonary exercise test. Evaluation of popliteal artery compression was performed in the lower limbs with CDS, ankle-brachial index (ABI) measurements and continuous-wave Doppler of the posterior tibial artery. Results. The athletes studied fulfilled the criteria of a high level of physical activity, whereas the sedentary subjects met the criteria of a low level of activity. Popliteal artery compression was observed with CDS in 6 (14.2%) of the studied subjects, of whom 2 (4.7%) were athletes and 4 (9.5%) were non-athletes. This difference was not statistically significant (p=0.21). Doppler of the tibial arteries and ABI measurements gave good specificity and sensitivity in the identification of popliteal artery compression. Conclusion. The frequency of popliteal artery compression during maneuvers in normal subjects was 14.2%, irrespective of whether or not they performed regular physical activities. Both Doppler and ABI showed good agreement with CDS and should be considered in screening popliteal arteries in individuals suspected of PAES.