967 results for testing process
Abstract:
Natural stones have been widely used in construction since antiquity. Building materials undergo decay due to mechanical, chemical, physical and biological causes that can act together; an interdisciplinary approach is therefore required to understand the interaction between the stone and the surrounding environment. The use of buildings, inadequate restoration activities and, in general, anthropogenic weathering factors may contribute to this degradation process. For these reasons, new technologies and techniques have been developed and introduced in the restoration field in the last few decades. Consolidants are widely used in the restoration and conservation of cultural heritage in order to improve the internal cohesion and to reduce the weathering rate of building materials. Determining the penetration depth of a consolidant is important for assessing its efficacy. Impregnation depends mainly on the microstructure of the stone (i.e. its porosity) and on the properties of the product itself. In this study, tetraethoxysilane (TEOS) applied to Globigerina limestone samples was chosen as the object of investigation. After hydrolysis and condensation, TEOS deposits silica gel inside the pores, improving the cohesion of the grains. X-ray computed tomography was used to characterize the internal structure of the limestone samples, treated and untreated with a TEOS-based consolidant. The aim of this work is to investigate the penetration depth and the distribution of the TEOS inside the pore space, using both traditional approaches and advanced X-ray tomographic techniques, the latter allowing three-dimensional visualization of the interior of the materials. Fluid transport properties and porosity were studied both at the macroscopic scale, by means of capillary uptake tests and radiography, and at the microscopic scale, by X-ray Tomographic Microscopy (XTM). This makes it possible to identify changes in the porosity, by comparing images acquired before and after the treatment, and to locate the consolidant inside the stone. Tests were initially run at the University of Bologna, where the characterization of the stone was carried out. The research then continued in Switzerland: X-ray tomography and radiography were performed at Empa, the Swiss Federal Laboratories for Materials Science and Technology, while XTM measurements with synchrotron radiation were run at the Paul Scherrer Institute in Villigen.
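The before/after comparison described above reduces, at its simplest, to voxel arithmetic on registered, segmented CT volumes. The following is a minimal sketch of that idea, not the study's actual pipeline; the array names, shapes and the toy "gel reach" model are assumptions introduced purely for illustration.

```python
import numpy as np

def porosity(pores: np.ndarray) -> float:
    """Porosity = pore-voxel fraction of a segmented CT volume (True = pore)."""
    return float(pores.mean())

def consolidant_map(pores_before: np.ndarray, pores_after: np.ndarray) -> np.ndarray:
    """Voxels that were pore space before treatment but solid after it,
    interpreted as pores filled by the deposited silica gel."""
    return pores_before & ~pores_after

# Toy stand-ins for two registered, binarized XTM scans of the same sample.
rng = np.random.default_rng(0)
before = rng.random((64, 64, 64)) < 0.40    # ~40 % porosity (hypothetical)
reached = rng.random((64, 64, 64)) < 0.85   # toy model: gel reaches ~85 % of voxels
after = before & ~reached                   # reached pores are now solid

print(f"porosity before treatment: {porosity(before):.3f}")
print(f"porosity after treatment:  {porosity(after):.3f}")
print(f"volume fraction filled by consolidant: {porosity(consolidant_map(before, after)):.3f}")
```

In a real analysis the two volumes would first have to be registered voxel-to-voxel and segmented with a consistent threshold; the penetration depth can then be read off as the deepest layer in which the consolidant map is non-empty.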
Abstract:
The beta decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables. Some of these observables are sensitive to physics beyond the SM; among them are the correlation coefficients of the particles involved. The spectrometer aSPECT was designed to measure the shape of the proton energy spectrum precisely and to extract from it the electron-antineutrino angular correlation coefficient "a". A first test period (2005/2006) served as a proof of principle, but the limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (Baessler et al., 2008, Europhys. Journ. A, 38, pp. 17-26). A second measurement cycle (2007/2008) aimed to surpass the relative accuracy of previous experiments (Stratowa et al. (1978), Byrne et al. (2002)), da/a = 5%. I performed the analysis of the data taken there, which is the emphasis of this doctoral thesis. A central point is the study of background. The systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%. The statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, observed at an early stage, were investigated; these turned out not to be correctable to a sufficient level. A practicable idea for avoiding the saturation effects is discussed in the last chapter.
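For context, the coefficient "a" parametrizes the electron-antineutrino angular correlation in the decay rate of unpolarized neutrons. A standard form (the Jackson-Treiman-Wyld parametrization, quoted here as general background rather than from the thesis) is:

```latex
% Differential decay rate for unpolarized neutron beta decay;
% E_0 is the electron endpoint energy, b the Fierz interference term:
\frac{d\Gamma}{dE_e\, d\Omega_e\, d\Omega_\nu}
  \propto p_e E_e \,(E_0 - E_e)^2
  \left( 1 + a\,\frac{\vec{p}_e \cdot \vec{p}_\nu}{E_e E_\nu} + b\,\frac{m_e}{E_e} \right)
```

Since the antineutrino escapes undetected, aSPECT accesses "a" indirectly: through momentum conservation, the shape of the recoil-proton energy spectrum depends on the electron-antineutrino correlation.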
Abstract:
In most real-life environments, mechanical or electronic components are subjected to vibrations. Some of these components may have to pass qualification tests to verify that they can withstand the fatigue damage they will encounter during their operational life. In order to conduct a reliable test, the environmental excitations can be taken as a reference to synthesize the test profile: this procedure is referred to as “test tailoring”. For cost and feasibility reasons, accelerated qualification tests are usually performed: the duration of the original excitation, which acts on the component for its entire life-cycle, typically hundreds or thousands of hours, is reduced. In particular, the “Mission Synthesis” procedure quantifies the damage induced by the environmental vibration through two functions: the Fatigue Damage Spectrum (FDS), which quantifies the fatigue damage, and the Maximum Response Spectrum (MRS), which quantifies the maximum stress. A new random Power Spectral Density (PSD) can then be synthesized, with the same amount of induced damage but a specified, shorter duration, in order to conduct accelerated tests. In this work, the Mission Synthesis procedure is applied to so-called Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic contributions in the form of sine tones, typically due to rotating parts of the system (e.g. helicopters, engine-mounted components, …). A proper test tailoring should preserve not only the accumulated fatigue damage but also the “nature” of the excitation (in this case, the sinusoidal components superimposed on the random process) in order to obtain reliable results. The classic time-domain approach is taken as a reference for the comparison of different methods for the FDS calculation in the presence of Sine-on-Random vibrations. A methodology to compute a Sine-on-Random specification based on a mission FDS is then presented.
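As a point of reference, for a purely Gaussian random excitation the FDS is often evaluated with a narrow-band formula in the style of Lalanne; the notation below (Miles' approximation for the single-degree-of-freedom response, Basquin fatigue law) is a common textbook choice and is given here only as background, since the thesis compares several calculation methods.

```latex
% RMS relative displacement of a linear SDOF system (natural frequency f_n,
% quality factor Q) under a base-acceleration PSD W(f), Miles' approximation:
\sigma_z(f_n) \approx \sqrt{\frac{Q\, W(f_n)}{64\,\pi^3 f_n^3}}

% Narrow-band FDS for exposure duration T, Basquin law N S^b = C,
% stress-displacement factor K, with the Gamma function arising from the
% Rayleigh distribution of response peaks:
\mathrm{FDS}(f_n) = \frac{f_n\, T\, K^b}{C}\,\bigl(\sqrt{2}\,\sigma_z(f_n)\bigr)^{b}\,\Gamma\!\left(1 + \frac{b}{2}\right)
```

For Sine-on-Random excitations the response peaks are no longer Rayleigh-distributed, which is precisely why dedicated FDS calculation methods are needed and compared against the time-domain reference.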
Abstract:
In this article we propose a bootstrap test for the probability of ruin in the compound Poisson risk process. We adopt the P-value approach, which leads to a more complete assessment of the underlying risk than the probability of ruin alone. We provide second-order accurate P-values for this testing problem and consider both parametric and nonparametric estimators of the individual claim amount distribution. Simulation studies show that the suggested bootstrap P-values are very accurate and outperform their analogues based on the asymptotic normal approximation.
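For reference, in the classical compound Poisson (Cramér-Lundberg) risk model the quantity under test is the ruin probability; the notation below is a standard choice and not necessarily the article's.

```latex
% Surplus process with initial capital u, premium rate c, Poisson claim
% arrivals N(t) with intensity \lambda, and i.i.d. claim amounts X_i:
U(t) = u + ct - \sum_{i=1}^{N(t)} X_i

% Probability of ruin in infinite time:
\psi(u) = \Pr\Bigl(\inf_{t \ge 0} U(t) < 0\Bigr)
```

Loosely speaking, a bootstrap P-value for a null hypothesis such as H0: ψ(u) ≥ ψ0 is then obtained by re-estimating the ruin probability on data resampled from the fitted (parametric case) or empirical (nonparametric case) claim amount distribution.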
Abstract:
Development of novel implants in orthopaedic trauma surgery is based on limited datasets from cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner, founded on a large anatomical database consisting of more than 2,000 bone datasets extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this database. 48 Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position on the fibula. A biomechanical comparison of the designed implant with the current gold standard in the treatment of distal fibular fractures (locking 1/3 tubular plate) was then conducted. Finally, a clinical surveillance study was performed to evaluate the grade of implant fit achieved. The results showed that with a virtual anatomical database it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to 1/3 tubular plates in 4-point bending tests. The clinical application showed a very high degree of primary implant fit; further intra-operative implant bending was necessary in only a small minority of cases. The goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database was therefore attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high grade of anatomical fit.
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time-consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at-fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research exploits that capability to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method for organizing the most efficient test inputs: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
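The mux-insertion idea can be illustrated on a toy gate-level netlist. Everything below (the netlist encoding, the names, the `insert_fault_mux` helper) is hypothetical and only sketches how a 2:1 multiplexer lets a test controller force a stuck-at value onto an internal node; it is not the tool described in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    kind: str        # e.g. "AND", "OR", "MUX"
    inputs: list     # names of the driving nets
    output: str      # name of the driven net

@dataclass
class Netlist:
    gates: list = field(default_factory=list)

def insert_fault_mux(netlist: Netlist, node: str) -> None:
    """Rename the net driven at `node` and insert a 2:1 mux so that asserting
    <node>_sel forces the stuck-at value <node>_sa onto the original net."""
    healthy = node + "_healthy"
    for g in netlist.gates:
        if g.output == node:
            g.output = healthy  # the original driver now drives the renamed net
    # MUX semantics: node = <node>_sa if <node>_sel else healthy
    netlist.gates.append(Gate("MUX", [healthy, node + "_sa", node + "_sel"], node))

# Toy netlist: y = (a AND b) OR c, with a fault site on internal net n1.
nl = Netlist([Gate("AND", ["a", "b"], "n1"), Gate("OR", ["n1", "c"], "y")])
insert_fault_mux(nl, "n1")
for g in nl.gates:
    print(g.kind, g.inputs, "->", g.output)
```

On an FPGA, many instrumented copies of the circuit can be emulated side by side, each with a different fault site selected, which is where the parallel speed-up over software simulation comes from.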
Abstract:
ASTM A529 carbon-manganese steel angle specimens were joined by flash butt welding and the effects of varying process parameter settings on the resulting welds were investigated. The weld metal and heat affected zones were examined and tested using tensile testing, ultrasonic scanning, Rockwell hardness testing, optical microscopy, and scanning electron microscopy with energy dispersive spectroscopy in order to quantify the effect of process variables on weld quality. Statistical analysis of experimental tensile and ultrasonic scanning data highlighted the sensitivity of weld strength and the presence of weld zone inclusions and interfacial defects to the process factors of upset current, flashing time duration, and upset dimension. Subsequent microstructural analysis revealed various phases within the weld and heat affected zone, including acicular ferrite, Widmanstätten or side-plate ferrite, and grain boundary ferrite. Inspection of the fracture surfaces of multiple tensile specimens, with scanning electron microscopy, displayed evidence of brittle cleavage fracture within the weld zone for certain factor combinations. Test results also indicated that hardness was increased in the weld zone for all specimens, which can be attributed to the extensive deformation of the upset operation. The significance of weld process factor levels on microstructure, fracture characteristics, and weld zone strength was analyzed. The relationships between significant flash welding process variables and weld quality metrics as applied to ASTM A529-Grade 50 steel angle were formalized in empirical process models.
Abstract:
The transition in Central and Eastern Europe since the late 1980s has provided a testing ground for classic propositions. This project looked at the impact of privatisation on private consumption, using the Czech experiment of voucher privatisation to test the permanent income hypothesis. This form of privatisation moved state assets to individuals and represented an unexpected windfall gain for participants in the scheme. Whether the windfall was consumed or saved offers a clear test of the permanent income hypothesis. Of a total population of 10 million, 6 million Czechs, i.e. virtually every household, participated in the scheme. In a January 1996 survey, 1,263 individuals were interviewed, 75% of whom had taken part. The data obtained suggest that only a small proportion of the transferred assets was cashed in and spent on consumption, providing support for the permanent income hypothesis. The fraction of the windfall consumed grows with age, as would be predicted from the lower life expectancy of older consumers. The most interesting deviation was for people aged 26 to 35, who apparently consumed more than they would have if the windfall were annuitised. As these people are at the stage in their lives when they would otherwise be borrowing to cover consumption related to establishing a family, etc., this is nevertheless consistent with the permanent income hypothesis, which predicts that individuals who would otherwise borrow money would use the windfall to avoid doing so.
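To make the annuitisation benchmark concrete: under the permanent income hypothesis, a one-off windfall W raises consumption in each period only by its annuity value over the consumer's remaining horizon. The formula below is a textbook illustration with an assumed constant interest rate r and T remaining periods, not a calculation from the study.

```latex
% Per-period consumption response to a windfall W annuitised over T periods:
\Delta c = \frac{r}{1 - (1+r)^{-T}}\, W
```

Because the fraction Δc/W rises as the horizon T shortens, older consumers should consume a larger share of the windfall, which is the age pattern the survey data display.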
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely owing to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) using TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by a factor of (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
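A minimal sketch of how the rule could be applied is given below. The (k - 1) scaling is the manuscript's result, but the implementation details (Welch-type two-sample TOST, the equivalence margin, the simulated data) are illustrative assumptions, not the paper's code.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def tost_two_sample(x, y, delta, alpha):
    """Two one-sided tests for equivalence of two means within +/- delta,
    using Welch's t statistics. Returns (equivalent?, TOST p-value)."""
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    nx, ny = len(x), len(y)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    d = np.mean(x) - np.mean(y)
    p_lower = stats.t.sf((d + delta) / se, df)   # H0: mean difference <= -delta
    p_upper = stats.t.cdf((d - delta) / se, df)  # H0: mean difference >= +delta
    p = max(p_lower, p_upper)                    # TOST rejects iff both reject
    return p <= alpha, p

# All pairwise comparisons of k groups: run each TOST at alpha / (k - 1).
rng = np.random.default_rng(1)
groups = [rng.normal(0.0, 1.0, 30) for _ in range(4)]  # k = 4 hypothetical groups
k, alpha, delta = len(groups), 0.05, 0.8
for (i, xi), (j, xj) in combinations(enumerate(groups), 2):
    eq, p = tost_two_sample(xi, xj, delta, alpha / (k - 1))
    print(f"groups {i} vs {j}: TOST p = {p:.4f}, equivalent at FWER {alpha}: {eq}")
```

Compare this with Bonferroni, which would divide alpha by the number of comparisons k(k - 1)/2 rather than by (k - 1), so the paper's rule becomes markedly less conservative as k grows.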
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on material response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project being conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project dealt with the development of a library of values pertaining to the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. The report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin and three Wisconsin mixtures, and the main results of the research were as follows. The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response of either dynamic modulus or flow number, but does increase the variability in the flow number test results. The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate- and high-temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number; this result was based on the testing that was conducted, contradicts previous research and the hypothesis put forth for this thesis, should be used with caution, and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need to be further improved prior to implementation by owners/agencies.
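In the Witczak/MEPDG framework, the temperature-viscosity relationship referred to above is commonly expressed by the ASTM A-VTS regression; the form below is the standard one, quoted as background, and the thesis's exact symbols and units may differ.

```latex
% ASTM A-VTS temperature-viscosity regression used in the Witczak model
% (eta: binder viscosity in centipoise; T_R: temperature in degrees Rankine;
% A and VTS: regression intercept and slope):
\log \log \eta = A + \mathrm{VTS} \cdot \log T_R
```

Because A and VTS come from fitting a straight line in this doubly logarithmic space, the fitted slope, and hence the viscosity extrapolated to other temperatures, shifts with the particular test temperatures used, which is the sensitivity the thesis reports.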
Abstract:
As the demand for miniature products and components continues to increase, the need for manufacturing processes to provide these products and components has also increased. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, many challenges have been encountered when directly scaling down macro processes. Initially, frictional effects were believed to be the largest challenge; in recent studies, however, the greatest challenge encountered has been size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and how this thickness directly affects the product dimensions and manufacturability. At the microscale, the thickness becomes critical due to the reduced number of grains. When surface contact between the forming tools and the material blank occurs at the macroscale, there is enough material (hundreds of layers of grains) across the blank thickness to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With a decreased number of grains across the thickness, the influence of grain size, shape and orientation is significant. Any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing equipment setups have been constructed at the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup to potentially build micro sensor encapsulation housings. The research focus then shifted to micro metal materials testing equipment setups, including the construction and testing of the following: a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (with the use of optical lithography and chemical etching) and a micro sheet hydroforming bulge test. Recently, the focus has shifted to the study of a micro tube hydroforming process, targeting fuel cell, medical, and sensor encapsulation applications. While the tube hydroforming process is widely understood at the macroscale, the microscale process offers significant challenges in terms of size effects. Current work is being conducted on applying direct current to enhance micro tube hydroforming formability. Adding direct current to various metal forming operations has initially shown some remarkable results, and the focus of current research is to determine the validity of this process.
Abstract:
This report shares my efforts in developing a solid unit of instruction that has a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of this was developed without the benefit of current research on how students learn and did not focus on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete worksheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets grounded in “big ideas” in science. For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology for how I developed a new unit of instruction based on the Understanding by Design process. I present several lessons and learning plans I've developed for the unit that follow the 5E Learning Cycle as appendices at the end of this report. I also include the results of pilot testing of one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful because it led me to identify ways in which I could improve the lesson in the future.
Abstract:
Li-Fraumeni Syndrome (LFS) is a hereditary cancer syndrome which predisposes individuals to cancer beginning in childhood. These risks are spread across a lifetime, from early childhood to adulthood. Mutations in the p53 tumor suppressor gene are known to cause the majority of cases of LFS. The risk for early-onset cancer in individuals with Li-Fraumeni Syndrome is high: studies have shown that individuals with LFS have a 90% lifetime cancer risk, and children under 18 have up to a 15% chance of cancer development. The effectiveness of cancer screening and management in individuals with Li-Fraumeni Syndrome is unclear, and screening for LFS-associated cancers has not been shown to reduce mortality. Due to the lack of effective screening techniques for childhood cancers, institutions vary with regard to their policies on testing children for LFS. There are currently no national guidelines regarding predictive testing of children who are at risk of inheriting LFS, and no studies have looked at parental attitudes towards predictive p53 genetic testing in their children. This was a cross-sectional pilot study aimed at describing these attitudes. We identified individuals whose children were at risk of inheriting p53 mutations. These individuals were provided with surveys which included validated measures addressing attitudes and beliefs towards genetic testing. The questionnaire included qualitative and quantitative measures. Six individuals completed and returned the questionnaire, a response rate of 28.57%. In general, respondents agreed that parents should have the opportunity to obtain p53 genetic testing for their child. Parents varied in their attitudes towards who should be involved in the decision-making process and at what time and under what considerations testing should occur. The testing motivations cited as most important by respondents included family history, planning for the future and health management. Concern about genetic discrimination by insurers was cited as the most important “con” of genetic testing. Although limited by a poor response rate, this study can give health care practitioners insight into the testing attitudes and beliefs of families considering pediatric genetic testing.
Abstract:
Diagnosis of primary ciliary dyskinesia (PCD) lacks a "gold standard" test and is therefore based on combinations of tests including nasal nitric oxide (nNO), high-speed video microscopy analysis (HSVMA), genotyping and transmission electron microscopy (TEM). There are few published data on the accuracy of this approach. Using prospectively collected data from 654 consecutive patients referred for PCD diagnostics, we calculated sensitivity and specificity for individual and combination testing strategies. Not all patients underwent all tests. HSVMA had excellent sensitivity and specificity (100% and 93%, respectively). TEM was 100% specific, but 21% of PCD patients had normal ultrastructure. nNO (30 nL·min⁻¹ cut-off) had good sensitivity and specificity (91% and 96%, respectively). Simultaneous testing using HSVMA and TEM was 100% sensitive and 92% specific. In conclusion, combination testing was found to be a highly accurate approach for diagnosing PCD. HSVMA alone has excellent accuracy, but requires significant expertise, and repeated sampling or cell culture is often needed. TEM alone is specific but misses 21% of cases. nNO (≤30 nL·min⁻¹) contributes well to the diagnostic process. In isolation, nNO screening at this cut-off would miss ∼10% of cases, but in combination with HSVMA it could reduce unnecessary further testing. Standardisation of testing between centres is a future priority.
Abstract:
Microvariant alleles, defined as alleles that contain an incomplete repeat unit, often complicate the process of DNA analysis. Understanding the molecular basis of microvariants would help to catalogue results and improve the analytical process involved in DNA testing. The first step is to determine the sequence, and thereby the cause, of a microvariant. This was done by sequencing samples determined to have a microvariant at the FGA or D21S11 locus. The results indicate that a .2 microvariant at the D21S11 locus is caused by a -TA- dinucleotide partial repeat before the last full TCTA repeat. The .2 microvariant at the FGA locus is caused by a -TT- dinucleotide partial repeat after the fifth full repeat and before the variable CTTT repeat motif. There are several possible reasons why the .2 microvariants are all the same at a given locus, each of which carries implications for the forensic community. The first possibility is that the microvariants are identical by descent, meaning that the microvariant is an old allele that has been passed down through the generations. The second possibility is that the microvariants are identical by state, which would mean that there is a mechanism selecting for these microvariants. Future research studying the flanking regions of these microvariants is proposed to determine which of these possibilities is the actual cause and to learn more about the molecular basis of microvariants.