920 results for: measurement techniques of aerosols
Abstract:
This article contributes to understanding the conditions of social-ecological change by focusing on the agency of individuals in the pathways to institutionalization. Drawing on the case of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), it addresses institutional entrepreneurship in an emerging environmental science-policy institution (ESPI) at a global scale. Based on ethnographic observations, semistructured interviews, and document analysis, we propose a detailed chronology of the genesis of the IPBES before focusing on the final phase of the negotiations toward the creation of the institution. We analyze the techniques and skills deployed by the chairman during the conference to handle the tensions at play, both to prevent participants from deserting the negotiation arena and to prevent a lack of inclusiveness from discrediting the future institution. We stress that creating a new global environmental institution requires the situated exercise of an art of “having everybody on board” through techniques of inclusiveness that we characterize. Our results emphasize the major challenge of handling the fragmentation and plasticity of the interest groups involved in the institutionalization process, thus adding to the theory of the transformative agency of institutional entrepreneurs. Although inclusiveness might remain partly unattainable, such techniques of inclusiveness appear to be a major condition of the legitimacy and success of the institutionalization of a new global ESPI. Our results also add to the literature on boundary making within ESPIs by emphasizing the multiplicity and plasticity of the groups actually at stake.
Abstract:
This study aims to identify the materials used in the production of a post-Byzantine icon from the Museum of Évora’s collection. The icon, representing the “Emperor Constantine and his mother Helen holding the Holy Cross”, was once dated to the 10th century. Through a multi-analytical approach, combining area examinations with spectroscopic techniques, this study sought to confirm its actual chronology. The results obtained revealed that it is most likely an icon from the late 17th or the 18th century.
Abstract:
Conceptual design of an integral system for measuring the radiation dose of the fuel elements of the ALFRED reactor.
Abstract:
The ultimate objective in fluvial geomorphology is to explain the forms of rivers and their evolution in time and space. The multiplication of studies has led to the realization that geomorphological systems are complex. The observed forms are more than the sum of the individual processes that govern them, because of non-linear interactions and feedbacks at multiple spatial and temporal scales. In this context, the general aim of this thesis is to propose and test new research avenues to better grasp the complexity of fluvial dynamics, using methodological and analytical approaches that emphasize the interactions between flow, bedload transport and bed morphology in gravel-bed rivers. This orientation stems from the observation that current paradigms in fluvial geomorphology fail to adequately explain the natural variability of bedload transport and of the resulting bed forms. Five lines of inquiry are developed in the form of articles based on case studies: 1. Integrating the scales of flow variability makes it possible to place turbulent structures within larger-scale pulsations and to improve our understanding of the variability of sediment transport. 2. Quantifying the rates of change of the flow (acceleration/deceleration) during a flood explains the variability of bedload transport fluxes as much as flow magnitude does. 3. The use of complementary measurement techniques reveals a new dynamic of gravel-bed rivers: the dilation and contraction of the bed following a flood. 4. The generally accepted idea that bedload transport is positively correlated with the intensity of morphological change is questioned, because of a problem associated with the different scales of the processes involved. 5. A systems approach to fluvial dynamics using multivariate analyses makes it possible to grasp the complexity of linear and non-linear feedback dynamics in channel evolution and to illustrate the importance of the recent history of geomorphological changes in response to floods. This thesis is intended as a conceptual advance arising from a thorough reflection on the classical approaches that have been used in fluvial geomorphology for several decades. It is based on a unique dataset collected during the intensive monitoring of 21 flood events in a small gravel-bed stream, Béard Creek (Québec). The experimental protocol, centred on simultaneous measurements of flow, bed morphology and bedload transport, made it possible to focus the research directly on the interactions between processes rather than on individual processes, an approach rarely used in fluvial geomorphology. Each chapter illustrates a new concept or a new approach for resolving some of the impasses currently encountered in fluvial geomorphology. This work has important implications for understanding the dynamics of river beds and fluvial habitats and serves as a starting point for new developments.
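As a purely illustrative aid to research avenue 2 above (quantifying flow acceleration and deceleration during a flood), the following minimal Python sketch shows how such rates of change could be derived from a discharge time series; the synthetic hydrograph, sampling interval and variable names are assumptions, not data from the thesis.

```python
# Minimal sketch (not the thesis code): quantifying flow acceleration/deceleration
# during a flood from a discharge time series. All values below are synthetic.
import numpy as np

# Hypothetical discharge record (m^3/s) sampled every 60 s during a flood
t = np.arange(0, 6 * 3600, 60)                       # time in seconds
Q = 1.5 + 1.2 * np.exp(-((t - 7200) / 3000.0) ** 2)  # synthetic flood hydrograph

# Rate of change of discharge: positive = accelerating (rising limb),
# negative = decelerating (falling limb)
dQ_dt = np.gradient(Q, t)                            # m^3/s per second

rising = dQ_dt > 0
print(f"peak discharge: {Q.max():.2f} m^3/s")
print(f"max acceleration: {dQ_dt.max():.2e} m^3/s^2")
print(f"max deceleration: {dQ_dt.min():.2e} m^3/s^2")
print(f"fraction of record on the rising limb: {rising.mean():.2f}")
```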
Abstract:
An experiment was carried out to estimate the sperm concentration of dourado (Salminus brasiliensis), curimba (Prochilodus lineatus), jundiá (Rhamdia quelen), cascudo-preto (Rhinelepis aspera) and Nile tilapia (Oreochromis niloticus) using the spermatocrit method. Nineteen, 58, 51, 43 and 85 breeders of dourado, curimba, jundiá, cascudo-preto and Nile tilapia, respectively, were used. With the exception of Nile tilapia, the breeders underwent hormonal induction before semen collection. The techniques for measuring the sperm concentration of the semen by counting in a Neubauer haemocytometer chamber and by spermatocrit were compared. The results were subjected to regression analysis at 5% probability. The sperm concentrations measured by the two techniques showed a linear relationship for curimba, jundiá and Nile tilapia, with the equations y = 6.6624 × 10^9 + 3.68553 × 10^8 x, y = 2.153 × 10^9 + 4.426 × 10^8 x and y = -9.0897 × 10^8 + 6.0167 × 10^8 x, respectively. The spermatocrit method can be used to estimate the sperm concentration of the semen of curimba, jundiá and Nile tilapia.
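The reported regressions can be applied directly to convert a spermatocrit reading into an estimated sperm concentration. The sketch below simply encodes the three equations above; the function names, the assumed concentration unit (spermatozoa/mL) and the example spermatocrit value are illustrative, not from the study.

```python
# Minimal sketch: estimating sperm concentration from the spermatocrit value x (%)
# using the linear regressions reported above. Units assumed to be spermatozoa/mL.
def concentration_curimba(x):
    return 6.6624e9 + 3.68553e8 * x

def concentration_jundia(x):
    return 2.153e9 + 4.426e8 * x

def concentration_nile_tilapia(x):
    return -9.0897e8 + 6.0167e8 * x

for species, f in [("curimba", concentration_curimba),
                   ("jundia", concentration_jundia),
                   ("Nile tilapia", concentration_nile_tilapia)]:
    x = 20.0  # hypothetical spermatocrit of 20%
    print(f"{species}: spermatocrit {x:.0f}% -> {f(x):.3e} spermatozoa/mL")
```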
Abstract:
PURPOSE: Two noninvasive methods to measure dental implant stability are damping capacity assessment (Periotest) and resonance frequency analysis (Osstell). The objective of the present study was to assess the correlation of these 2 techniques in clinical use. MATERIALS AND METHODS: Implant stability of 213 clinically stable loaded and unloaded 1-stage implants in 65 patients was measured in triplicate by means of resonance frequency analysis and Periotest. Descriptive statistics as well as Pearson's, Spearman's, and intraclass correlation coefficients were calculated with SPSS 11.0.2. RESULTS: The mean values were 57.66 +/- 8.19 implant stability quotient (ISQ) units for the resonance frequency analysis and -5.08 +/- 2.02 for the Periotest. The correlation between the two measuring techniques was -0.64 (Pearson) and -0.65 (Spearman). The single-measure intraclass correlation coefficients for the ISQ and Periotest values were 0.99 and 0.88, respectively (95% CI). No significant correlation of implant length with either resonance frequency analysis or Periotest could be found. However, a significant correlation of implant diameter with both techniques was found (P < .005). The correlation between the two measuring systems is moderate to good. It seems that the Periotest is more susceptible to clinical measurement variables than the Osstell device. The intraclass correlation indicated lower measurement precision for the Periotest technique. Additionally, the Periotest values deviated more from the normal (Gaussian) distribution than the ISQs. Both measurement techniques show a significant correlation with implant diameter. CONCLUSION: Resonance frequency analysis appeared to be the more precise technique.
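As an illustration of the correlation analysis described above (which the authors ran in SPSS), the following sketch computes Pearson and Spearman coefficients for paired ISQ and Periotest readings with SciPy; the numbers are made up solely to show the calculation.

```python
# Minimal sketch (illustrative data only): Pearson and Spearman correlations
# between paired ISQ and Periotest readings. Not the study data.
import numpy as np
from scipy.stats import pearsonr, spearmanr

isq = np.array([62, 55, 58, 70, 49, 61, 66, 53])                   # RFA (ISQ units)
ptv = np.array([-6.1, -4.0, -4.8, -7.5, -2.9, -5.5, -6.8, -3.6])   # Periotest values

r_p, p_p = pearsonr(isq, ptv)
r_s, p_s = spearmanr(isq, ptv)
print(f"Pearson r = {r_p:.2f} (p = {p_p:.3f})")
print(f"Spearman rho = {r_s:.2f} (p = {p_s:.3f})")
```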
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with electromagnetic absorbing material that simulates free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply spatial filtering. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver’s quadrature detector to all the near-field data and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
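As a rough illustration of the iterative extrapolation idea used against the truncation error, the sketch below runs a schematic one-dimensional analogue: the spectrum is known only in a reliable region, the source is known to have finite support, and the two constraints are enforced alternately. It is not the thesis implementation; the signal, support and band limits are arbitrary.

```python
# Schematic 1D analogue (not the thesis code) of iterative extrapolation:
# alternately enforce the known finite support of the source and the reliable
# part of the spectrum, progressively recovering the truncated region.
import numpy as np

n = 512
x = np.linspace(-1, 1, n)
support = np.abs(x) < 0.25                               # assumed finite "aperture" support
field = np.where(support, np.cos(4 * np.pi * x), 0.0)    # true source distribution

spectrum_true = np.fft.fft(field)
reliable = np.abs(np.fft.fftfreq(n)) < 0.05              # spectral region assumed reliable
spectrum_meas = np.where(reliable, spectrum_true, 0.0)   # truncated "measurement"

estimate = spectrum_meas.copy()
for _ in range(200):
    f = np.fft.ifft(estimate)
    f = np.where(support, f, 0.0)                        # enforce known finite support
    estimate = np.fft.fft(f)
    estimate[reliable] = spectrum_meas[reliable]         # keep the reliable data unchanged

err0 = np.linalg.norm(spectrum_meas - spectrum_true) / np.linalg.norm(spectrum_true)
err1 = np.linalg.norm(estimate - spectrum_true) / np.linalg.norm(spectrum_true)
print(f"relative spectral error before/after extrapolation: {err0:.3f} / {err1:.3f}")
```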
Abstract:
The dorsolateral prefrontal cortex (DLPFC) has been implicated in the pathophysiology of mental disorders. Previous region-of-interest MRI studies that attempted to delineate this region adopted various landmarks and measurement techniques, with inconsistent results. We developed a new region-of-interest measurement method to obtain morphometric data of this region from structural MRI scans, taking into account knowledge from cytoarchitectonic postmortem studies and the large inter-individual variability of this region. MRI scans of 10 subjects were obtained, and DLPFC tracing was performed in the coronal plane by two independent raters using the semi-automated software Brains2. The intra-class correlation coefficients between two independent raters were 0.94 for the left DLPFC and 0.93 for the right DLPFC. The mean +/- S.D. DLPFC volumes were 9.23 +/- 2.35 ml for the left hemisphere and 8.20 +/- 2.08 ml for the right hemisphere. Our proposed method has high inter-rater reliability and is easy to implement, permitting the standardized measurement of this region for clinical research applications.
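For reference, the inter-rater reliability reported above can be computed as an intraclass correlation coefficient. The sketch below implements a one-way random-effects ICC(1,1) for two raters; the volumes are invented, and the exact ICC model used in the study may differ.

```python
# Minimal sketch (made-up volumes): one-way random-effects ICC(1,1) between two raters.
import numpy as np

# rows = subjects, columns = raters (hypothetical DLPFC volumes in ml)
vols = np.array([
    [9.1, 9.3],
    [7.8, 7.6],
    [10.4, 10.1],
    [8.9, 9.0],
    [6.7, 6.9],
    [11.2, 11.5],
])
n, k = vols.shape

grand = vols.mean()
ms_between = k * ((vols.mean(axis=1) - grand) ** 2).sum() / (n - 1)          # between-subject MS
ms_within = ((vols - vols.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject MS

icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc_1_1:.3f}")
```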
Abstract:
Surface topography and light scattering were measured on 15 samples ranging from those having smooth surfaces to others with ground surfaces. The measurement techniques included an atomic force microscope, mechanical and optical profilers, a confocal laser scanning microscope, angle-resolved scattering, and total scattering. The samples included polished and ground fused silica, silicon carbide, sapphire, electroplated gold, and diamond-turned brass. The measurement instruments and techniques had different surface spatial wavelength band limits, so the measured roughnesses were not directly comparable. Two-dimensional power spectral density (PSD) functions were calculated from the digitized measurement data, and rms roughnesses were obtained by integrating areas under the PSD curves between fixed upper and lower band limits. In this way, roughnesses measured with different instruments and techniques could be directly compared. Smaller differences between measurement techniques remained in the calculated roughnesses, but these could mostly be explained by surface topographical features, such as isolated particles, that affected the instruments in different ways.
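The band-limited rms roughness described above follows from integrating the PSD between fixed spatial-frequency limits. The sketch below illustrates the idea on a synthetic one-dimensional profile (the study used two-dimensional PSDs); the profile, sampling and band limits are assumptions.

```python
# Minimal sketch (synthetic 1D profile): band-limited rms roughness from a PSD,
# integrated between fixed spatial-frequency limits so different instruments
# can be compared over a common band.
import numpy as np
from scipy.signal import welch

dx = 0.1e-6                               # sample spacing: 0.1 micrometre
n = 4096
rng = np.random.default_rng(0)
profile = rng.normal(scale=2e-9, size=n)  # synthetic height profile (m)

# one-sided PSD of surface heights vs spatial frequency (cycles/m)
f, psd = welch(profile, fs=1.0 / dx, nperseg=1024)

f_lo, f_hi = 1e4, 1e6                     # assumed common band limits (cycles/m)
band = (f >= f_lo) & (f <= f_hi)
rms_band = np.sqrt(np.sum(psd[band]) * (f[1] - f[0]))   # integrate PSD over the band
print(f"band-limited rms roughness: {rms_band * 1e9:.2f} nm")
```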
Abstract:
The mechanical nature of gastric contraction activity (GCA) plays an important role in gastrointestinal motility. The aim of this study was to detect GCA in anaesthetized dogs using the techniques of AC biosusceptometry (ACB) and manometry simultaneously, analysing the frequency and amplitude (motility index) characteristics of GCA as modified by drugs such as prostigmine and N-butyl-scopolamine. The ACB method is based on a differential transformer of magnetic flux, with the magnetic tracer acting as a movable external core. This magnetic tracer causes a modification in the magnetic flux, which is detected by the coils. The results obtained with the ACB showed a performance comparable to that of manometry in measuring the modifications in the frequency and amplitude of the GCA. We conclude that this ACB technique, which is non-invasive and free of ionizing radiation, is an option for evaluating GCA and can be employed in future clinical studies.
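As an illustration of how the frequency and amplitude characteristics of GCA might be extracted from a recorded trace, the sketch below estimates the dominant contraction frequency with an FFT and uses the rms amplitude as a crude motility-index proxy; the signal is synthetic, and the study's actual motility-index definition may differ.

```python
# Minimal sketch (synthetic trace): dominant contraction frequency and a crude
# amplitude-based motility index. Sampling rate, frequency and noise are made up.
import numpy as np

fs = 4.0                               # samples per second
t = np.arange(0, 600, 1 / fs)          # 10-minute recording
# synthetic gastric contraction activity: ~5 cycles/min (0.083 Hz) plus noise
signal = np.sin(2 * np.pi * 0.083 * t) + 0.2 * np.random.default_rng(1).normal(size=t.size)

spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant = freqs[np.argmax(spec)]

motility_index = np.sqrt(np.mean((signal - signal.mean()) ** 2))  # rms amplitude as a crude proxy
print(f"dominant frequency: {dominant * 60:.1f} contractions/min")
print(f"motility index (rms amplitude): {motility_index:.2f} a.u.")
```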
Abstract:
The research is part of a survey for the detection of the hydraulic and geotechnical conditions of river embankments, funded by the Reno River Basin Regional Technical Service of the Region Emilia-Romagna. The hydraulic safety of the Reno River, one of the main rivers in north-eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundred kilometres) has placed great interest in non-destructive geophysical methods, which, compared with other methods such as drilling, allow for faster and often less expensive acquisition of high-resolution data. The present work aims to test Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, such as electrical resistivity tomography (ERT), Multi-channel Analysis of Surface Waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods in the regular maintenance and checking of embankment conditions. The first part of this thesis presents the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the embankments of the Reno River and its tributaries, as well as a description of geophysical applications on embankments of European and North American rivers that served as the bibliographic basis for this work. The second part is an overview of the geophysical methods employed for this research (with particular attention to GPR), reporting their theoretical basis and a closer examination of some techniques for the analysis and representation of geophysical data applied to river embankments. The subsequent chapters, following the main scope of this research, which is to highlight the advantages and drawbacks of using Ground Penetrating Radar on the embankments of the Reno River and its tributaries, show the results obtained from analyzing different cases that could lead to the formation of weakness zones and, subsequently, to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the data unmatched by the other methodologies were recorded. Among the drawbacks, attenuation losses during wave propagation, due to varying clay, silt and sand content, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar can represent a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the levees of the Reno River and its tributaries. In fact, only the shallower part of the embankment could be investigated, and the information obtained relates only to changes in electrical properties, without any quantitative measurement.
Furthermore, GPR on its own is ineffective for a preliminary assessment of embankment safety conditions, whereas for detailed campaigns at shallow depth, which aim to achieve immediate results with optimal precision, its use is strongly recommended. The cases in which a multidisciplinary approach was tested reveal an effective interconnection of the various geophysical methodologies employed, producing qualitative results in the preliminary phase (FDEM), a quantitative and highly reliable description of the subsoil (ERT) and, finally, fast and highly detailed analysis (GPR). As a recommendation for future research, the combined use of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially in preparation for likely flood events, when the entire extent of the embankments must be investigated.
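For context on how GPR reflections are interpreted in depth, the sketch below converts a two-way travel time into an approximate reflector depth for a few assumed relative permittivities of the embankment fill; the values are illustrative and not taken from the survey.

```python
# Minimal sketch: approximate reflector depth from GPR two-way travel time,
# assuming a relative permittivity for the embankment fill. Values are illustrative.
C0 = 0.2998          # speed of light in vacuum, m/ns

def gpr_depth(twt_ns, eps_r):
    """Approximate depth (m) of a reflector from two-way travel time (ns)."""
    v = C0 / eps_r ** 0.5          # wave velocity in the soil, m/ns
    return v * twt_ns / 2.0

for eps_r in (9.0, 16.0, 25.0):    # assumed values, from dry sandy to clay-rich fill
    print(f"eps_r = {eps_r:4.1f}: 40 ns two-way time -> {gpr_depth(40.0, eps_r):.2f} m")
```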
Abstract:
Volatile organic compounds play a critical role in ozone formation and, together with OH radicals, drive the chemistry of the atmosphere. The simplest volatile organic compound, methane, is a climatologically important greenhouse gas and plays a key role in regulating water vapour in the stratosphere and hydroxyl radicals in the troposphere. The OH radical is the most important atmospheric oxidant, and knowledge of the atmospheric OH sink, together with the OH source and ambient OH concentrations, is essential for understanding the oxidative capacity of the atmosphere. Oceanic emission and/or uptake of methanol, acetone, acetaldehyde, isoprene and dimethyl sulphide (DMS) was characterized as a function of photosynthetically active radiation (PAR) and a suite of biological parameters in a mesocosm experiment conducted in a Norwegian fjord. High-frequency (ca. 1 min^-1) methane measurements were performed using a gas chromatograph with flame ionization detector (GC-FID) in the boreal forests of Finland and the tropical forests of Suriname. A new on-line method (Comparative Reactivity Method - CRM) was developed to directly measure the total OH reactivity (sink) of ambient air. It was observed that under conditions of high biological activity and a PAR of ~450 μmol photons m^-2 s^-1, the ocean acted as a net source of acetone; however, if either of these criteria was not fulfilled, the ocean acted as a net sink of acetone. This new insight into the biogeochemical cycling of acetone at the ocean-air interface helps to resolve discrepancies between earlier works such as Jacob et al. (2002), who reported the ocean to be a net acetone source (27 Tg yr^-1), and Marandino et al. (2005), who reported the ocean to be a net sink of acetone (-48 Tg yr^-1). The ocean acted as a net source of isoprene, DMS and acetaldehyde but a net sink of methanol. Based on these findings, it is recommended that compound-specific PAR and biological dependencies be used for estimating the influence of the global ocean on atmospheric VOC budgets. Methane was observed to accumulate within the nocturnal boundary layer, clearly indicating emissions from the forest ecosystems. There was a remarkable similarity in the time series of the boreal and tropical forest ecosystems. The averages of the median mixing ratios during a typical diel cycle were 1.83 μmol mol^-1 and 1.74 μmol mol^-1 for the boreal and the tropical forest ecosystem, respectively. A flux value of (3.62 ± 0.87) x 10^11 molecules cm^-2 s^-1 (or 45.5 ± 11 Tg CH4 yr^-1 for the global boreal forest area) was derived, which highlights the importance of the boreal forest ecosystem for the global methane budget (~600 Tg yr^-1). The newly developed CRM technique has a dynamic range of ~4 s^-1 to 300 s^-1 and an accuracy of ±25%. The system has been tested and calibrated with several single and mixed hydrocarbon standards, showing excellent linearity and agreement with the reactivity of the standards. Field tests at an urban and a forest site illustrate the promise of the new method. The results from this study have improved current understanding of VOC emissions and uptake by ocean and forest ecosystems. Moreover, a new technique for directly measuring the total OH reactivity of ambient air has been developed and validated, which will be a valuable addition to the existing suite of atmospheric measurement techniques.
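The quoted global figure can be reproduced, to within rounding, by scaling the area-based flux over the boreal forest area. The sketch below does this arithmetic assuming a boreal forest area of about 1.5 × 10^7 km^2 (the area itself is an assumption, not stated in the abstract).

```python
# Minimal sketch: scaling the reported per-area methane flux to a global
# boreal-forest source. The boreal forest area used here is an assumed value.
AVOGADRO = 6.022e23          # molecules per mole
M_CH4 = 16.04                # g per mole
SECONDS_PER_YEAR = 3.156e7

flux = 3.62e11               # molecules cm^-2 s^-1 (value reported above)
area_cm2 = 1.5e7 * 1e10      # assumed 1.5e7 km^2 of boreal forest, in cm^2

grams_per_year = flux * area_cm2 * (M_CH4 / AVOGADRO) * SECONDS_PER_YEAR
print(f"global boreal forest source ~ {grams_per_year / 1e12:.1f} Tg CH4 per year")
```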
Abstract:
This thesis work encompasses activities carried out in the Laser Center of the Polytechnic University of Madrid and in the laboratories of the University of Bologna in Forlì. The thesis focuses on a surface mechanical treatment for metallic materials called Laser Shock Peening (LSP). This process is a surface enhancement treatment which induces a significant layer of beneficial compressive residual stresses underneath the surface of metal components in order to mitigate the detrimental effects of crack growth. The innovative aspect of this work is the application of LSP to specimens of extremely low thickness. After a bibliographic study and a comparison with the main treatments used for the same purposes, this work analyzes the physics of laser operation, the laser's interaction with the surface of the material, and the generation of the surface residual stresses that are fundamental to obtaining the LSP benefits. In particular, this thesis concerns the application of the treatment to Al2024-T351 specimens of low thickness. Among the improvements that can be obtained with this treatment, the most important in the aeronautical field is the fatigue-life improvement of the treated components. As demonstrated in this work, a well-executed LSP treatment can slow down the progress of defects in the material that could otherwise lead to sudden failure of the structure. Part of this thesis is the simulation of this phenomenon using the program AFGROW, with which different geometric configurations of the treatment were analyzed in order to verify which was better for large panels of typical aeronautical interest. The core of the LSP process is the residual stress field induced in the material by the interaction with the laser light; it can be simulated with finite elements, but it is essential to verify and measure it experimentally. The thesis introduces the main methods for measuring these stresses, which can be mechanical or diffraction-based. In particular, the principles and the detailed procedure of the hole-drilling measurement are described, together with an introduction to X-ray diffraction, and the results obtained with both techniques are then presented. In addition to these two measurement techniques, the neutron diffraction method is also introduced. The last part covers the experimental fatigue-life tests of the specimens, with a detailed description of the apparatus and the procedure used, from the initial specimen preparation to the fatigue test with the press; the results obtained are then presented and discussed.
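To illustrate the kind of crack-growth calculation that underlies the AFGROW simulations mentioned above, the sketch below integrates a plain Paris law cycle by cycle; the material constants, geometry factor and crack lengths are assumed, and the actual AFGROW analysis is far more sophisticated.

```python
# Minimal sketch: plain Paris-law crack-growth integration under assumed constants.
# Not the thesis or AFGROW model; purely illustrative of the fatigue-life idea.
import math

C, m = 1.0e-11, 3.0          # assumed Paris-law constants (da/dN in m/cycle, dK in MPa*sqrt(m))
delta_sigma = 100.0          # assumed constant-amplitude stress range, MPa
Y = 1.12                     # geometry factor for an edge crack (assumed constant)

a = 0.001                    # initial crack length, m
a_crit = 0.02                # assumed critical crack length, m
cycles = 0
while a < a_crit:
    dK = Y * delta_sigma * math.sqrt(math.pi * a)   # stress-intensity-factor range
    a += C * dK ** m                                # crack growth in this cycle
    cycles += 1

print(f"cycles to grow the crack from 1 mm to 20 mm: {cycles:,}")
```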
Abstract:
There are two main types of bone in the human body, trabecular and cortical bone. Cortical bone is primarily found on the outer surface of most bones in the body, while trabecular bone is found in vertebrae and at the ends of long bones (Ross 2007). Osteoporosis is a condition that compromises the structural integrity of trabecular bone, greatly reducing the ability of the bone to absorb energy from falls. The current method for diagnosing osteoporosis and predicting fracture risk is measurement of bone mineral density. Limitations of this method include dependence on the bone density measurement device and dependence on the type of test and measurement location (Rubin 2005). Each year there are approximately 250,000 hip fractures in the United States due to osteoporosis (Kleerekoper 2006). Currently, the most common method for repairing a hip fracture is hip fixation surgery. During surgery, a temporary guide wire is inserted to guide the permanent screw into place and is then removed. It is believed that directly measuring this screw pullout force may result in a better assessment of bone quality than current indirect measurement techniques (T. Bowen 2008-2010, pers. comm.). The objective of this project is to design a device that can measure the force required to extract this guide wire. It is believed that this would give the surgeon a direct, quantitative measurement of bone quality at the site of the fixation. A first-generation device was designed by a Bucknell Biomedical Engineering Senior Design team during the 2008-2009 academic year. The first step of this project was to examine that device, conduct a thorough design analysis, and brainstorm new concepts. The concept selected uses a translational screw to extract the guide wire. The device was fabricated and underwent validation testing to ensure that it was functional and met the required engineering specifications. Two tests were conducted: one to test the functionality of the device by checking whether it gave repeatable results, and the other to test the sensitivity of the device to misalignment. Guide wires were extracted from three materials (low-density polyethylene, ultra-high-molecular-weight polyethylene, and polypropylene), and the force of extraction was measured. During testing, it was discovered that the spring in the device did not have a high enough spring constant to reach the high forces necessary for extracting the wires without excessive deflection of the spring. The test procedure was therefore modified slightly so that the wires were not fully threaded into the material. The testing results indicate that there is significant variation in the screw pullout force, up to 30% of the average value. This variation was attributed to problems in the testing and data collection, and a revised set of tests was proposed to better evaluate the performance of the device. The fabricated device is a fully functioning prototype, and further refinements and testing may lead to a third-generation version capable of measuring the screw pullout force during hip fixation surgery.
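The repeatability statement above (variation up to 30% of the average value) corresponds to a coefficient of variation. The sketch below shows the computation on made-up extraction forces; none of the numbers come from the project data.

```python
# Minimal sketch (made-up forces): coefficient of variation of repeated
# guide-wire extraction forces, as a measure of repeatability.
import numpy as np

# hypothetical extraction forces (N) for repeated pulls from the same material
forces = np.array([182.0, 240.0, 205.0, 150.0, 221.0])

cv = forces.std(ddof=1) / forces.mean()
print(f"mean extraction force: {forces.mean():.1f} N")
print(f"coefficient of variation: {cv:.1%}")
```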