926 results for testing method
Abstract:
Harmful algal blooms (HABs) are natural global phenomena increasing in severity and extent. Incidents have many economic, ecological and human health impacts. Monitoring and providing early warning of toxic HABs are critical for protecting public health. Current monitoring programmes include measuring the number of toxic phytoplankton cells in the water and biotoxin levels in shellfish tissue. As these efforts are demanding and labour intensive, methods which improve their efficiency are essential. This study compares the use of a multitoxin surface plasmon resonance (multitoxin SPR) biosensor with enzyme-linked immunosorbent assay (ELISA) and analytical methods such as high performance liquid chromatography with fluorescence detection (HPLC-FLD) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) for toxic HAB monitoring efforts in Europe. Seawater samples (n = 256) from European waters, collected 2009-2011, were analysed for biotoxins: saxitoxin and analogues, okadaic acid and dinophysistoxins 1/2 (DTX1/DTX2), and domoic acid, responsible for paralytic shellfish poisoning (PSP), diarrhetic shellfish poisoning (DSP) and amnesic shellfish poisoning (ASP), respectively. Biotoxins were detected mainly in samples from Spain and Ireland; France and Norway had the lowest number of toxic samples. Both the multitoxin SPR biosensor and an RNA microarray were more sensitive at detecting toxic HABs than standard light microscopy phytoplankton monitoring. Comparisons between the detection methods, based on statistical 2 × 2 comparison tables, showed overall agreement between testing platforms ranging from 32% to 74% across the three toxin families, illustrating that a single testing method may not be an ideal solution. An efficient early warning monitoring system for the detection of toxic HABs could therefore be achieved by combining the multitoxin SPR biosensor and the RNA microarray.
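The overall agreement figures quoted above come from 2 × 2 comparison tables between pairs of testing platforms. A minimal sketch of that statistic (the counts below are hypothetical, not taken from the study):

```python
# Illustrative 2x2 agreement calculation between two detection platforms.
# The sample counts used in the example call are hypothetical.

def overall_agreement(both_pos, only_a, only_b, both_neg):
    """Percent agreement from a 2x2 comparison table:
    concordant results (both positive or both negative) over all samples."""
    total = both_pos + only_a + only_b + both_neg
    return 100.0 * (both_pos + both_neg) / total

# e.g. SPR biosensor vs. light microscopy on hypothetical counts
print(round(overall_agreement(40, 30, 12, 18), 1))  # -> 58.0
```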
Abstract:
GOAL: The manufacturing and distribution of strips of instant thin-layer chromatography with silica gel (ITLC-SG) (reference method) is currently discontinued, so there is a need for an alternative method for the determination of radiochemical purity (RCP) of 99mTc-tetrofosmin. This study aims to compare five alternative methods proposed by the producer to determine the RCP of 99mTc-tetrofosmin. METHODS: Nineteen vials of tetrofosmin were radiolabelled with 99mTc and the percentages of RCP were determined. Five different methods were compared with the standard RCP testing method (ITLC-SG, 2x20 cm): Whatman 3MM (1x10 cm) with acetone and dichloromethane (method 1); Whatman 3MM (1x10 cm) with ethyl acetate (method 2); aluminum oxide-coated plastic thin-layer chromatography (TLC) plate (1x10 cm) with ethanol (method 3); Whatman 3MM (2x20 cm) with acetone and dichloromethane (method 4); solid-phase extraction with a C18 cartridge (method 5). RESULTS: The average values of RCP were 95.30% ± 1.28% (method 1), 93.95% ± 0.61% (method 2), 96.85% ± 0.93% (method 3), 92.94% ± 0.99% (method 4) and 96.25% ± 2.57% (method 5) (n=12 each), and 93.15% ± 1.13% for the standard method (n=19). There were statistically significant differences in the values obtained for methods 1 (P=0.001), 3 (P=0.000) and 5 (P=0.004), and no statistically significant differences in the values obtained for methods 2 (P=0.113) and 4 (P=0.327). CONCLUSION: From the results obtained, methods 2 and 4 showed a higher correlation with the standard method. Unlike method 4, method 2 is less time-consuming than the reference method and can overcome the problems associated with solvent toxicity. The remaining methods (1, 3 and 5) tended to overestimate the RCP value compared to the standard method.
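The abstract does not state which significance test was used to compare each alternative method (n=12) with the standard (n=19). As a hedged illustration only, a Welch two-sample t statistic is one common way to compare two unequal-size groups with possibly unequal variances:

```python
import math

def mean_sd(xs):
    """Sample mean and (n-1) standard deviation."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var)

def welch_t(xs, ys):
    """Welch's two-sample t statistic (unequal variances, unequal n).
    Shown for illustration; the study's actual test is not specified."""
    mx, sx = mean_sd(xs)
    my, sy = mean_sd(ys)
    se = math.sqrt(sx ** 2 / len(xs) + sy ** 2 / len(ys))
    return (mx - my) / se
```

The statistic would then be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain a P value.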
Abstract:
Final Master's project submitted for the degree of Master in Mechanical Engineering
Abstract:
The current set of studies was conducted to examine the cross-race effect (CRE), a phenomenon commonly found in the face perception literature. The CRE is evident when participants display better own-race face recognition accuracy than other-race recognition accuracy (e.g. Ackerman et al., 2006). Typically the cross-race effect is attributed to perceptual expertise (i.e., other-race faces are processed less holistically; Michel, Rossion, Han, Chung & Caldara, 2006) or to the social cognitive model (i.e., other-race faces are processed at the categorical level by virtue of being an out-group member; Hugenberg, Young, Bernstein, & Sacco, 2010). These effects may be mediated by differential attention. I investigated whether other-race faces are disregarded and, consequently, not remembered as accurately as own-race (in-group) faces. In Experiment 1, I examined how the magnitude of the CRE differed when participants learned individual faces sequentially versus when they learned multiple faces simultaneously in arrays comprising faces and objects. I also examined how the CRE differed when participants recognized individual faces presented sequentially versus in arrays of eight faces. Participants’ recognition accuracy was better for own-race faces than other-race faces regardless of familiarization method. However, the difference between own- and other-race accuracy was larger when faces were familiarized sequentially in comparison to familiarization with arrays. Participants’ response patterns during testing differed depending on the combination of familiarization and testing method. Participants had more false alarms for other-race faces than own-race faces if they learned faces sequentially (regardless of testing strategy); if participants learned faces in arrays, they had more false alarms for other-race faces than own-race faces if they were tested with sequentially presented faces.
These results are consistent with the perceptual expertise model in that participants were better able to use the full two seconds in the sequential task for own-race faces, but not for other-race faces. The purpose of Experiment 2 was to examine participants’ attentional allocation in complex scenes. Participants were shown scenes comprising people in real places, but the head stimuli used in Experiment 1 were superimposed onto the bodies in each scene. Using a Tobii eyetracker, participants’ looking time for both own- and other-race faces was evaluated to determine whether participants looked longer at own-race faces and whether individual differences in looking time correlated with individual differences in recognition accuracy. The results of this experiment demonstrated that although own-race faces were preferentially attended to in comparison to other-race faces, individual differences in looking time biases towards own-race faces did not correlate with individual differences in own-race recognition advantages. These results are also consistent with perceptual expertise, as it seems that the role of attentional biases towards own-race faces is independent of the cognitive processing that occurs for own-race faces. Altogether, these results have implications for face perception tasks that are performed in the lab, how accurate people may be when remembering faces in the real world, and the accuracy and patterns of errors in eyewitness testimony.
Abstract:
The motivation for this thesis work is the need to improve equipment reliability and quality of service for railway passengers, as well as the requirement for cost-effective and efficient condition-maintenance management in rail transportation. This thesis develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is done manually by a human operator, relying on inference systems and assumptions to draw conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken to avoid the consequences of failure, before the failure occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation and traffic safety, where safety is of primary importance, requires non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator, who moves along the rail sleeper and gathers information by visual and sound analysis to examine the presence of cracks. Human inspectors working on the lines visually inspect wooden rails to judge the quality of the rail sleepers. In this project a machine vision system is developed based on the manual visual analysis procedure, using digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, is prone to error, and discrimination is difficult even for a human operator because of frequent changes in the inspected material. The machine vision system developed classifies the condition of the material by examining individual pixels of images, processing them, and drawing conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge from the manual procedure.
The pattern recognition approach for this thesis work was built around a non-destructive testing method to identify the flaws in manually performed condition monitoring of sleepers. In this method, a test vehicle is designed to capture sleeper images similar to the visual inspection by a human operator, and the raw data for the pattern recognition approach are provided by the captured images of the wooden sleepers. The data from the NDT method were further processed and appropriate features were extracted. The aim of the data collection by the NDT method is to achieve reliable classification results with high accuracy. A key idea is to use an unsupervised classifier, based on the features extracted from the method, to discriminate the condition of wooden sleepers into either good or bad. A self-organising map is used as the classifier for the wooden sleeper classification. In order to achieve greater integration, the data collected by the machine vision system were made to interface with one another by a strategy called fusion. Data fusion was examined at two levels: sensor-level fusion and feature-level fusion. As the goal was to reduce human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared with the actual classification, were satisfactory.
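The unsupervised classification step could be sketched as a minimal one-dimensional self-organizing map over extracted feature vectors. This is an illustrative toy, not the thesis implementation:

```python
import random

def train_som(data, n_units=2, epochs=100, lr=0.5, seed=0):
    """Minimal 1-D self-organizing map: unsupervised grouping of feature
    vectors into n_units clusters (a sketch, not the thesis code)."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in data:
            # best-matching unit = closest weight vector
            bmu = min(range(n_units),
                      key=lambda u: sum((x[i] - weights[u][i]) ** 2
                                        for i in range(dim)))
            for i in range(dim):
                weights[bmu][i] += rate * (x[i] - weights[bmu][i])
    return weights

def classify(x, weights):
    """Assign a feature vector to its best-matching unit (e.g. good/bad)."""
    return min(range(len(weights)),
               key=lambda u: sum((x[i] - weights[u][i]) ** 2
                                 for i in range(len(x))))
```

After training on features from "good" and "bad" sleeper images, each unit ends up representing one condition class.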
Abstract:
This work studied the immiscible blend of elastomeric poly(methyl methacrylate) (PMMA) with bottle-grade poly(ethylene terephthalate) (PET), with and without the use of a compatibilizer agent, poly(methyl methacrylate-co-glycidyl methacrylate-co-ethyl acrylate) (MGE). Characterization by torque rheometry, melt flow index measurement (MFI), density and degree-of-crystallinity measurement by pycnometry, tensile testing, the essential work of fracture (EWF) method, scanning electron microscopy (SEM) and transmission electron microscopy (TEM) was performed on the pure polymers and the PMMA/PET blends. The rheological results showed signs of chemical reaction between the epoxy group of MGE and the end groups of the PET chains, and also with the elastomeric phase of PMMA. Increasing the concentration of PET reduced the torque, and adding MGE increased the torque of the PMMA/PET blend. The MFI results also showed that elastomeric PMMA had lower flow and thus higher viscosity than PET. The pycnometry results showed that increasing the percentage of PET resulted in an increase in the density and degree of crystallinity of the PMMA/PET blends. The tensile tests showed that increasing the percentage of PET resulted in an increase in ultimate strength and elastic modulus and a decrease in elongation at break. However, at the phase inversion, where the blend showed evidence of a co-continuous morphology, and also with 30% PET as dispersed phase compatibilized with 5% MGE, there were significant gains in elongation at break compared to elastomeric PMMA. The essential work of fracture method proved applicable to most formulations. It was observed that with increasing elastomeric PMMA content in the blend formulations there was an improvement in the specific essential work of fracture (We) and a decrease in the specific non-essential work of fracture (βWp).
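In the EWF method, the specific total work of fracture varies linearly with ligament length L as w_f = w_e + (βw_p)·L, so We is the intercept and βWp the slope of a least-squares fit. A minimal sketch with made-up data:

```python
def ewf_fit(ligament_mm, wf_kj_per_m2):
    """Least-squares fit of the EWF relation w_f = w_e + (beta*w_p)*L.
    Returns (intercept, slope) = (w_e, beta*w_p).
    The example data used in the test are purely illustrative."""
    n = len(ligament_mm)
    mx = sum(ligament_mm) / n
    my = sum(wf_kj_per_m2) / n
    sxx = sum((x - mx) ** 2 for x in ligament_mm)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(ligament_mm, wf_kj_per_m2))
    slope = sxy / sxx          # beta * w_p (non-essential term)
    intercept = my - slope * mx  # w_e (essential work of fracture)
    return intercept, slope
```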
Abstract:
TOPIC: an auditory training program for schoolchildren with learning disorders. OBJECTIVES: to verify the effectiveness of an auditory training program in schoolchildren with learning disorders and to compare the findings of the assessment procedures used in pre- and post-testing in schoolchildren with learning disorders and without learning difficulties, submitted or not submitted to the auditory training program. METHOD: 40 schoolchildren participated in this study, divided into: GI, subdivided into GIe (10 schoolchildren with learning disorders submitted to the auditory training program) and GIc (10 schoolchildren with learning disorders not submitted to the program); and GII, subdivided into GIIe (10 schoolchildren without learning difficulties submitted to the program) and GIIc (10 schoolchildren without learning difficulties not submitted to the program). The Audio Training® auditory training program was applied. RESULTS: GI performed worse than GII in activities related to auditory skills and phonological awareness. GIe and GIIe showed better performance in auditory skills and phonological awareness after the auditory training program, comparing pre- and post-testing findings. CONCLUSION: the performance of schoolchildren with learning disorders in auditory and phonological tasks is inferior to that of schoolchildren without learning disorders. The auditory training program proved effective and enabled the schoolchildren to develop these skills.
Abstract:
The need for rational water use and food supply for a growing world population has led to the development of research in the area of irrigation systems. Thus, some irrigation systems which combine efficiency with low material cost have been developed. Although some technical characteristics are provided by the manufacturers, tests are required to verify the functioning of the system and the uniformity of water distribution. Continuous research on uniformity, material characteristics and the design of water distribution systems is essential for system improvement. Therefore, the objective of this work was to evaluate the CV (manufacturer's coefficient of variation) of the Amanco microsprinkler (1.0 mm light green nipple) using bench testing in the Irrigation laboratory at UNESP - FCA, campus of Botucatu-SP. Twenty-five microsprinklers in a sequential design were used in the tests. Three flow systems were tested: a Coil system based on serially connected pipes; a Lateral system, the most common system, in which secondary lines are fed by a main line; and a Mesh system, as used in urban water supply. The results showed a CVf of 4.17%, which met production standards, and the Lateral and Mesh systems were similar regarding outflow in bench testing. The Mesh system presented the highest mean outflow and the lowest range of variation.
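The manufacturer's coefficient of variation is the sample standard deviation of the emitter flow rates expressed as a percentage of the mean flow; a minimal sketch:

```python
import math

def manufacturer_cv(flows_lph):
    """Coefficient of variation (%) of emitter flow rates,
    e.g. from the 25 bench-tested microsprinklers."""
    n = len(flows_lph)
    mean = sum(flows_lph) / n
    sd = math.sqrt(sum((q - mean) ** 2 for q in flows_lph) / (n - 1))
    return 100.0 * sd / mean
```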
Abstract:
Research on building pathology has grown considerably in recent years, owing to the natural degradation observed in the most diverse types of buildings. In this context, great attention has been devoted to the concrete structures of special works such as hydroelectric power plants (HPPs), given their complexity and their social and economic importance. One of the most frequent pathologies in these structures is hydraulic abrasion of the concrete, which in extreme cases can lead the structure to ruin. This work aims to obtain and analyse data on several repair materials regarding their resistance to hydraulic abrasion and their respective bonding systems. The research was divided into three main stages: the first verified the physical and mechanical characteristics of the repair materials; the second analysed the compatibility between repair and substrate through the bond strength obtained in the diagonal-joint compression test; and the third provided data on the abrasion resistance of the repairs through the ASTM C1138 test. In the first stage, axial compressive strength and consistency tests were performed on the concretes and mortars used as deep and surface repairs at the ages of 3, 7 and 28 days. In the second, at 3 and 28 days of age, bond tests of the adhesive systems were performed, covering cementitious and polymer-based materials. In the last stage, the same repair materials as in the first were used: cement-based mortars and concretes with and without the pozzolanic additions silica fume and metakaolin, and epoxy-resin mortar, at 3 and 28 days. As results, axial compressive strengths between 40 and 65 MPa were obtained for the cementitious materials at 3 days of age and between 60 and 80 MPa at 28 days, while the epoxy mortar reached 20 MPa at both ages. The consistency of the mortars was thixotropic, while that of the concretes was quite fluid.
Regarding bonding, the adhesives were applied to scarified, clean and saturated surfaces, which revealed a clear advantage of the cement-based adhesives over the polymeric ones, even though the latter are recommended for bonding to wet substrates. In the abrasion stage, a new methodology was used for preparing the concrete substrates and subsequently applying the repairs, classified as deep or surface repairs. The repair with the highest abrasion resistance was the epoxy mortar. There was no statistically significant difference between the concretes without additions and those with silica fume and high-reactivity metakaolin. In general, the wear of the mortars, especially at 3 days, was greater than that of the concretes, in which two wear-rate stages were clearly observed as a function of the abrasion resistance of the coarse aggregates. It was thus possible to identify different wear stages for the concretes used.
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Therefore, (Q)SAR model validation is essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. However, at the same time, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow allows applying the built and validated models to large amounts of unseen data, and comparing the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities including clustering, alignment of compounds according to their 3D structure, and feature highlighting aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge.
Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
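k-fold cross-validation partitions the dataset so that every compound is used for testing exactly once; a minimal index-splitting sketch (not the CheS-Mapper or workflow code):

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k folds for cross-validation.
    Returns a list of (train_indices, test_indices) pairs; each sample
    appears in exactly one test fold."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)  # simple round-robin assignment
    splits = []
    for j in range(k):
        test = folds[j]
        train = [i for i in range(n_samples) if i % k != j]
        splits.append((train, test))
    return splits
```

In external test set validation, by contrast, a single held-out portion is used for testing and never for training, which this comparison found to give higher-variance estimates.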
Abstract:
Background. This study validated the content of an instrument designed to assess the performance of the medicolegal death investigation system. The instrument was modified from Version 2.0 of the Local Public Health System Performance Assessment Instrument (CDC) and is based on the 10 Essential Public Health Services. Aims. The aims were to employ a cognitive testing process to interview a randomized sample of medicolegal death investigation office leaders, qualitatively describe the results, and revise the instrument accordingly. Methods. A cognitive testing process was used to validate the survey instrument's content in terms of how well participants could respond to and interpret the questions. Twelve randomly selected medicolegal death investigation chiefs (or equivalent) who represented the seven types of medicolegal death investigation systems and six different state mandates were interviewed by telephone. The respondents also were representative of the educational diversity within medicolegal death investigation leadership. Based on respondent comments, themes were identified that permitted improvement of the instrument toward collecting valid and reliable information when ultimately used in a field survey format. Results. Responses were coded and classified, which permitted the identification of themes related to Comprehension/Interpretation, Retrieval, Estimate/Judgment, and Response. The majority of respondent comments related to Comprehension/Interpretation of the questions. Respondents identified 67 questions and 6 section explanations that merited rephrasing, or adding or deleting examples or words. In addition, five questions were added based on respondent comments. Conclusion. The content of the instrument was validated by the cognitive testing method design. The respondents agreed that the instrument would be a useful and relevant tool for assessing system performance.
Abstract:
Most studies of differential gene expression have been conducted between two given conditions. The two-condition experimental (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. Therefore, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. In order to address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method applicable to any multiple-testing procedure via a control parameter C. We applied these statistical methods to analyze our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with TCE. The MCE approach is a conceptual breakthrough in many aspects: (a) many conditions of interest can be studied simultaneously; (b) studying associations between differential expression of genes and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; (d) it can save a lot of experimental resources and time for investigators.
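The paper's generalized multiple-testing method with control parameter C is not reproducible from the abstract; as a hedged point of reference only, the standard Benjamini-Hochberg step-up procedure illustrates what a multiple-testing procedure over many genes does:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Standard Benjamini-Hochberg step-up procedure, shown for
    illustration (not the paper's generalized method with parameter C).
    Returns the indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    # find the largest rank k with p_(k) <= alpha * k / m
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k_max = rank
    return sorted(order[:k_max])
```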
Abstract:
Photovoltaic modules based on thin film technology are gaining importance in the photovoltaic market, and module installers and plant owners have increasingly begun to request methods of performing module quality control. These modules pose additional problems for measuring power under standard test conditions (STC), beyond those caused by module temperature and the ambient variables. The main difficulty is that the modules’ power output may vary depending both on the amount of time they have been exposed to the sun during recent hours and on their history of sunlight exposure. In order to assess the current state of a module, it is necessary to know its sunlight exposure history. Thus, a practical testing method that ensures repeatable measurements of the power generated is needed. This paper examines different tests performed on commercial thin film PV modules of CIS, a-Si and CdTe technologies in order to find the best way to obtain measurements. A method for obtaining indoor measurements of these technologies that takes periods of sunlight exposure into account is proposed. Special attention is paid to CdTe as a fast-growing technology in the market.
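A common simplification for translating a measurement to STC scales power linearly with irradiance and corrects for cell temperature with a power temperature coefficient γ. The sketch below is an illustration of that simplified correction only, not the paper's procedure or the full IEC 60891 translation method:

```python
def power_to_stc(p_meas_w, irradiance_w_m2, cell_temp_c, gamma_per_c):
    """Simplified translation of measured PV power to STC
    (1000 W/m2, 25 C cell temperature), assuming linear irradiance
    scaling and a relative power temperature coefficient gamma [1/C].
    Illustrative sketch; ignores spectral and metastability effects,
    which are exactly what makes thin-film modules hard to rate."""
    p_irr = p_meas_w * 1000.0 / irradiance_w_m2
    return p_irr / (1.0 + gamma_per_c * (cell_temp_c - 25.0))
```

The metastability discussed in the paper (dependence on recent light exposure) is not captured by any such static formula, which is why a controlled light-soaking protocol is needed before indoor measurement.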
Abstract:
Steel is, together with concrete, the most widely used material in civil engineering works. Not only its high strength, but also its ductility is of special interest from the point of view of structural safety, since it enables stress redistribution to adjacent elements and, therefore, more energy can be stored before structural failure is reached. However, despite being extensively used, there are still some aspects related to its fracture behaviour that need to be clarified and that would allow for a better use of its properties. When a steel item is tested under tension and reaches the maximum load point, a necking process begins, which makes it difficult to define the material behaviour from that moment onward. The ISO 6892-1 standard, which defines the tensile testing method for metallic materials, describes the procedures to obtain some parameters related to this last section of the F − E curve. Nevertheless, these parameters have proved to be controversial, since they have low reproducibility and repeatability rates that are difficult to explain. This Thesis tries to deepen the knowledge of the last section of the F − E curve for construction steels. An extensive experimental campaign has been carried out with two representative steels used in civil engineering works: a steel rod used for manufacturing prestressing steel wires, before the cold-drawing process is applied, and steel bars used in reinforced concrete structures.
Both materials show different fracture surfaces: while the first of them shows a flat fracture surface, perpendicular to the loading direction, with a dark region at its centre, the second one shows the classical cup-cone fracture surface. The cup-cone fracture surface has been studied in depth in the past and different numerical models have been able to reproduce it successfully, with special mention of the Gurson-Tvergaard-Needleman (GTN) model. Regarding the failure surface shown by the first material, in principle it can be numerically reproduced by a GTN model, but the differences observed between both materials in the experimental campaign suggest considering a different failure criterion. In the present Thesis, an extensive experimental campaign has been carried out using cylindrical specimens made of two representative construction steels with different fracture behaviours. On the one hand, the initial eutectoid steel rod used for manufacturing prestressing steel wires is analysed, which presents a flat fracture surface, perpendicular to the loading direction and with a dark region at its centre. On the other hand, B 500 SD steel bars, typically used in reinforced concrete structures and showing the typical cup-cone fracture surface, are studied. These experimental works have made it possible to distinguish two clearly different fracture behaviours in the two materials and, in the case of the first one, a brittle-like behaviour has been identified. For the first material, which shows a flat fracture surface perpendicular to the loading direction, the following hypothesis is proposed in this study: a quasi-brittle fracture develops as a consequence of a decohesion process, with the dark region acting as a circular crack perpendicular to the loading direction. To reproduce numerically the fracture behaviour shown by the first material, a failure criterion based on a cohesive model is proposed in this Thesis.
As an innovative contribution, this failure criterion depends on the stress triaxiality of the material, a key parameter when studying fracture in this kind of material. This type of model has several advantages over the widely used GTN models. While GTN models require a large number of parameters for their calibration, cohesive models essentially require two parameters to define the softening curve: the decohesion stress ft and the fracture energy GF. In addition, the GTN model parameters cannot be measured experimentally, whereas GF can. Regarding ft, although no experimental procedure exists for its determination, it is easier to interpret than the parameters used by GTN models, such as the void fraction required for the coalescence process to start or the void fraction that leads to a total loss of bearing capacity. In order to implement this failure criterion, a triaxiality-dependent cohesive interface element has been developed. The results of the experimental campaign have been successfully reproduced using this interface element. Furthermore, in these models the failure mechanism develops in the same way as observed experimentally: with a circular decohesion process taking place around the longitudinal axis of the specimen. In summary, the work developed in this Thesis, both experimental and numerical, helps to clarify the behaviour of construction steels in the last section of the F − E curve and the mechanisms responsible for the eventual material failure, an aspect that can lead to a better use of the properties of these steels in the future and to improved safety of the structures built with them.
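The two-parameter softening curve described above can be sketched in a few lines. This is a minimal illustration, not the thesis' actual triaxiality-dependent interface element: it assumes a simple linear softening law, and the numerical values of ft and GF used below are hypothetical, not taken from the experimental campaign.

```python
# Minimal sketch of a cohesive softening curve defined by only two
# parameters, as the abstract notes: the decohesion stress ft and the
# fracture energy GF. A linear law is assumed here; the area under the
# traction-opening curve equals GF, so the critical opening is
# wc = 2*GF/ft, beyond which the crack transmits no traction.

def linear_softening(w, ft, GF):
    """Cohesive traction [MPa] transmitted across a crack opening w [mm].

    ft in MPa (N/mm^2), GF in N/mm, so wc = 2*GF/ft comes out in mm.
    """
    wc = 2.0 * GF / ft          # opening at which traction vanishes
    if w <= 0.0:
        return ft               # decohesion stress at crack initiation
    if w >= wc:
        return 0.0              # fully opened crack: no load transfer
    return ft * (1.0 - w / wc)  # linear decay from ft down to zero

# Hypothetical values for illustration only (not from the thesis):
# ft = 1200 MPa, GF = 60 N/mm, hence wc = 0.1 mm.
traction_at_half_opening = linear_softening(0.05, 1200.0, 60.0)
```

At half the critical opening the transmitted traction is half of ft, which makes the two parameters easy to interpret compared with GTN void fractions, as the abstract argues.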
Abstract:
Hip arthroplasty is considered one of the greatest surgical advances in Medicine. The application of this technique in Traumatology has increased considerably in recent years, mainly because of the progressive increase in life expectancy. Indeed, age brings a rise in osteoarthritis and osteoporosis, typical diseases of the joints and bones that in many cases require total or partial prosthetic replacement of the joint. The good functional behaviour of a prosthesis depends largely on primary stability, that is, the correct anchoring of the prosthesis at the time of implantation. Uncemented prostheses base their long-term success on the osseointegration that takes place between the prosthetic material and the bone tissue, and achieving it requires good primary stability conditions. Aseptic loosening is the main cause of failure of total hip arthroplasty. It is a phenomenon in which, owing to complex interactions of mechanical and biological factors, relative movements occur that compromise the functionality of the implant. Minimising the corresponding damage depends largely on the early detection of the loosening. To achieve early detection of aseptic loosening of the femoral stem, different techniques have been tested, both in vivo and in vitro: numerical analyses and experimental techniques based on sensors of movements caused by naturally or artificially transmitted loads, such as impacts or vibrations of different frequencies. The set-ups and procedures applied are heterogeneous and often complex and expensive, and there is no agreement on a simple and effective technique of general application.
Likewise, the current regulations governing the conditions a prosthesis must fulfil before being marketed contain no section specifically devoted to assessing the quality of the femoral stem design with respect to primary stability. The aim of this thesis is to develop a methodology for the in vitro analysis of the stability of an implanted femoral stem, so that implantation techniques and different prosthesis designs can be evaluated before they are offered on the market. A fundamental requirement is that the developed method be simple, reversible, repeatable and non-destructive, with rigorous control of parameters (boundary conditions of loads and displacements) and with a fast, reliable and affordable system for recording and interpreting results. As a preliminary step, a qualitative analysis of the contact problem at the bone-stem interface has been performed by applying a full-field optomechanical technique (photoelasticity). To this end, three 2D models of the bone-stem assembly were manufactured, simulating three types of interface contact: contact without adhesion and with clearance, contact without adhesion and without clearance, and homogeneous contact with adhesion. Applying the same load to each model, and using the stress-freezing technique, the corresponding stress states were visualised, these being more severe in the model without adhesion, as expected. In any case, the results illustrate the complexity of the contact problem and confirm the convenience and necessity of the experimental approach for studying it. Next, a dynamic free-oscillation test was set up, instrumented with resistive strain-gauge sensors. The test samples were femur bones in all their possible variants: simplified models, standardised synthetic bone, and cadaveric bone, both dry and fresh.
A system for clamping the distal end of the sample (femur) with rigorous control of the anchoring conditions has been designed. The free oscillation of the sample was obtained by the instantaneous release of a given static load applied beforehand, either with a testing machine or by gravity. Each sample was instrumented with conventional strain gauges whose signal was recorded with commercial dynamic acquisition equipment. A signal-processing procedure was applied to window, filter and present the sensor responses in the time and frequency domains. The interpretation of results is comparative: the test is applied to an intact femur sample taken as a reference, and is then repeated on the same sample with an implanted prosthesis; comparing the results allows immediate conclusions to be drawn about the effects of implanting the prosthesis. The implantation was performed by an orthopaedic surgeon using the same techniques and instruments employed in the operating theatre during real clinical practice, and three commercial femoral stems were used. From the time- and frequency-domain results of the different applications, conclusions have been drawn on the following aspects: the feasibility of the different types of synthetic sample (simplified models and standardised synthetic femur); the repeatability, linearity and reversibility of the test; the consistency of the results with the theoretical values deduced from the theory of free oscillations of bars; the effects of implanting femoral stems on the oscillation amplitude, damping and frequencies; and the detection of harmonics associated with micromobility.
The methodology has proved suitable for incorporation into prosthesis standards, is universally applicable, and opens avenues for analysing the detection and characterisation of prosthesis micromobility under service loads. ABSTRACT Total hip arthroplasty is considered one of the greatest surgical advances in medicine. The application of this technique in Traumatology has increased significantly in recent years, mainly due to the progressive increase in life expectancy. In fact, advanced age increases osteoarthritis and osteoporosis problems, typical diseases of joints and bones that in many cases require full or partial prosthetic replacement of the joint. The proper functional behavior of a prosthesis is highly dependent on primary stability, that is, on the correct anchoring of the prosthesis at the time of implantation. Uncemented prostheses base their long-term success on the quality of the osseointegration that takes place between the prosthetic material and the bone tissue, and achieving it requires good primary stability conditions. Aseptic loosening is the main cause of failure in total hip arthroplasty. It is a phenomenon in which relative movements occur, due to complex interactions of mechanical and biological factors, and these micromovements put the implant functionality at risk. Minimizing the resulting damage depends greatly on the early detection of loosening. For this purpose, various techniques have been tested both in vivo and in vitro: numerical analyses and experimental techniques based on sensors for movements caused by naturally or artificially transmitted loads, such as impacts or vibrations at different frequencies. The assemblies and methods applied are heterogeneous and, in many cases, complex and expensive, with no agreement on a simple and effective technique for general use.
Likewise, in the current regulations governing the conditions to be fulfilled by a prosthesis before going to market, there is no specific section on the evaluation of the femoral stem design with regard to primary stability. The aim of this thesis is to develop an in vitro methodology for analyzing the stability of an implanted femoral stem, in order to assess implantation techniques and different prosthesis designs prior to their market launch. We also set as a fundamental requirement that the developed testing method be simple, reversible, repeatable and non-destructive, with close monitoring of parameters (boundary conditions of loads and displacements) and with a system to record and interpret results in a fast, reliable and affordable manner. As a preliminary step, we performed a qualitative analysis of the contact problem at the bone-stem interface by applying a full-field optomechanical technique (photoelasticity). For this purpose, three 2D models of the bone-stem assembly were built, simulating three interface contact types: unbonded contact with clearance, unbonded contact without clearance, and bonded homogeneous contact. By applying the same load to each model and using the stress-freezing technique, the corresponding stress states were displayed, these being more severe, as expected, in the unbonded model. In any case, the results clearly show the complexity of the interface contact problem, and they confirm the need for experimental studies of it. Afterwards, a free-oscillation dynamic test was carried out using resistive strain-gauge sensors. The test samples were femur bones in all possible variants: simplified models, standardized synthetic bone, and dry and fresh cadaveric bones. A clamping system for the distal end of the sample, with strict control of the anchoring conditions, has been designed.
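With the distal end rigidly clamped, the specimen behaves roughly as a clamped-free bar, and classical beam theory supplies reference frequencies for this kind of test (the abstract checks measured results against the theory of free oscillations of bars). The sketch below applies the standard Euler-Bernoulli formula for the first bending frequency of a cantilever to an illustrative cylindrical steel bar; the dimensions and material constants are assumptions for illustration, not data from the thesis.

```python
import math

# First bending frequency of a uniform clamped-free (cantilever) bar,
# from Euler-Bernoulli theory:
#   f_n = (lambda_n^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4))
# where lambda_1 = 1.8751 is the first root of 1 + cos(l)*cosh(l) = 0.

def cantilever_f1(E, rho, d, L):
    """First bending frequency [Hz] of a clamped-free cylindrical bar.

    E in Pa, rho in kg/m^3, diameter d and length L in metres.
    """
    A = math.pi * d**2 / 4.0     # cross-sectional area [m^2]
    I = math.pi * d**4 / 64.0    # second moment of area [m^4]
    lam1 = 1.8751                # first eigenvalue of the clamped-free bar
    return (lam1**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Illustrative steel bar (assumed values, not the thesis specimens):
# E = 210 GPa, rho = 7850 kg/m^3, 20 mm diameter, 0.40 m free length.
f1 = cantilever_f1(210e9, 7850.0, 0.020, 0.40)  # roughly 90 Hz
```

Any loss of clamping stiffness lowers this frequency, which is why the abstract insists on strict control of the anchoring conditions: otherwise frequency shifts caused by the fixture could be confused with shifts caused by the implant.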
The free oscillation of the sample was obtained by the instantaneous release of a static load, previously determined and applied to the sample through a testing machine or by gravity. Each sample was equipped with conventional strain gauges whose signal was registered with commercial dynamic acquisition equipment. A signal-processing procedure was then applied to delimit, filter and present the time- and frequency-domain responses of the sensors. Results are interpreted comparatively: the test is applied to an intact femur sample taken as a reference, and is then repeated on the same sample with an implanted prosthesis. Comparing both sets of results yields immediate conclusions about the effects of implanting the prosthesis. The implantation was performed by an expert orthopedic surgeon using the same techniques and instruments as those used in clinical surgery, working with three commercial femoral stems. From the results obtained in the time and frequency domains for the different applications, the following conclusions have been established: feasibility of the different types of synthetic samples (simplified models and standardized synthetic femur); repeatability, linearity and reversibility of the testing method; consistency of the results with theoretical values deduced from the theory of free oscillations of bars; effects of the introduction of femoral stems on the amplitude, damping and frequencies of the oscillations; and detection of harmonics associated with micromobility. This methodology has proved suitable for inclusion in the standardization process for arthroplasty prostheses; it is universally applicable and it opens up new methods for the analysis, detection and characterization of prosthesis micromobility under functional loads.
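The comparative reading described above (intact femur as reference, then the same femur with an implanted stem) can be illustrated with a toy signal-processing sketch: extract the dominant frequency of each damped free oscillation and compare the two states. The signals, the numerical frequencies and the direction of the shift below are synthetic assumptions chosen for illustration; they are not experimental data from the thesis.

```python
import math

def dominant_frequency(signal, fs):
    """Frequency [Hz] of the largest-magnitude DFT bin (DC excluded).

    A plain O(n^2) discrete Fourier transform, adequate for short
    illustrative records; fs is the sampling rate in Hz.
    """
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):  # positive-frequency bins only
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 1000.0                           # assumed sampling rate [Hz]
t = [i / fs for i in range(512)]      # 0.512 s record
# Synthetic damped free oscillations: in this toy example the intact
# state rings at a higher frequency than the implanted one.
intact    = [math.exp(-3 * ti) * math.sin(2 * math.pi * 125.0 * ti) for ti in t]
implanted = [math.exp(-3 * ti) * math.sin(2 * math.pi * 101.5 * ti) for ti in t]

shift = dominant_frequency(intact, fs) - dominant_frequency(implanted, fs)
```

The frequency resolution here is fs/n, about 2 Hz, so in a real set-up the record length and sampling rate would have to be chosen so that implant-induced shifts and micromobility harmonics stand out above that resolution.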