20 results for Non-destructive testing
in University of Queensland eSpace - Australia
Abstract:
The technique of permanently attaching interdigital transducers (IDTs) to either flat or curved structural surfaces to excite a single Lamb wave mode has demonstrated great potential for quantitative non-destructive evaluation and smart materials design. In this paper, the acoustic wave field in a composite laminated plate excited by an IDT is investigated. On the basis of discrete layer theory and a multiple integral transform method, an analytical-numerical approach is developed to evaluate the surface velocity response of the plate due to the IDT excitation. In this approach, the frequency spectrum and wave number spectrum of the output of the IDT are obtained directly. The corresponding time domain results are calculated by applying a standard inverse fast Fourier transform technique. Numerical examples are presented to validate the developed method and show its capability for mode selection and isolation. A new, effective way of transfer function estimation and interpretation is presented by considering the input wave number spectrum in addition to the commonly used input frequency spectrum. The new approach enables a simple physical evaluation of the influence of IDT geometrical features, such as electrode finger widths and overall dimensions, and excitation signal properties on the input-output characteristics of the IDT. Finally, considering the convenience of Mindlin plate wave theory in numerical computations as well as theoretical analysis, the validity of using this approximate theory to design IDTs for the excitation of the first and second anti-symmetric Lamb modes is examined. (C) 2002 Elsevier Science Ltd. All rights reserved.
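The final computational step the abstract describes — recovering time-domain signals from the computed frequency spectrum with a standard inverse fast Fourier transform — can be sketched generically. The toneburst, sampling rate, and frequency below are invented for illustration; this is not the authors' code.

```python
import numpy as np

# Sampling parameters (assumed for illustration).
fs = 1.0e6            # sampling rate, Hz
n = 1024              # number of samples
t = np.arange(n) / fs

# Synthetic excitation: a 5-cycle Hanning-windowed toneburst at 100 kHz,
# standing in for the IDT drive signal.
f0 = 100e3
cycles = 5
nb = int(cycles * fs / f0)          # samples in the burst
burst = np.zeros(n)
burst[:nb] = np.sin(2 * np.pi * f0 * t[:nb]) * np.hanning(nb)

# Forward transform gives the frequency spectrum; the inverse FFT
# recovers the time-domain signal, as described in the abstract.
spectrum = np.fft.fft(burst)
recovered = np.fft.ifft(spectrum).real

assert np.allclose(recovered, burst)  # round trip is exact to numerical precision
```

In the paper's setting the spectrum would come from the analytical-numerical model rather than a forward FFT, but the inverse-transform step is the same.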
Abstract:
The Green Fluorescent Protein (GFP) from Aequorea victoria has begun to be used as a reporter protein in plants. It is particularly useful because GFP fluorescence can be detected in a non-destructive manner, whereas detection of enzyme-based reporters often requires destruction of the plant tissue. The use of GFP as a reporter enables transgenic plant tissues to be screened in vivo at any growth stage. Quantification of GFP in transgenic plant extracts will increase the utility of GFP as a reporter protein. We report herein the quantification of an mGFP5-ER variant in tobacco leaf extracts by UV excitation and an sGFP(S65T) variant in sugarcane leaf and callus extracts by blue light excitation, using the BioRad VersaFluor(TM) Fluorometer System or the Labsystems Fluoroskan Ascent FL equipped with a narrow band emission filter (510 ± 5 nm). The GFP concentration in transgenic plant extracts was determined from a GFP standard series prepared in untransformed plant extract, with concentrations ranging from 0.1 to 4 µg/ml of purified rGFP. Levels of sgfp(S65T) expression, driven by the maize ubiquitin promoter, in sugarcane calli and leaves ranged up to 0.525 µg and 2.11 µg sGFP(S65T) per mg of extractable protein, respectively. In tobacco leaves the expression of mgfp5-ER, driven by the cauliflower mosaic virus (CaMV) 35S promoter, ranged up to 7.05 µg mGFP5-ER per mg extractable protein.
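The quantification step described above — reading sample concentrations off a standard curve prepared in untransformed extract — amounts to a linear calibration. A minimal sketch follows; the fluorescence readings are invented, and only the 0.1-4 µg/ml standard range is taken from the abstract.

```python
import numpy as np

# Standard series: known rGFP concentrations (ug/ml) spiked into
# untransformed extract, with their measured fluorescence readings
# (relative fluorescence units; values invented for illustration).
std_conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
std_fluor = np.array([12.0, 55.0, 108.0, 213.0, 422.0])

# Fit a straight line through the standard series.
slope, intercept = np.polyfit(std_conc, std_fluor, 1)

def gfp_concentration(fluorescence):
    """Read a GFP concentration (ug/ml) off the standard curve."""
    return (fluorescence - intercept) / slope

# An invented sample reading interpolated against the curve.
sample = gfp_concentration(160.0)
```

In practice one would also subtract the background fluorescence of untransformed extract and check that sample readings fall within the calibrated range.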
Abstract:
Rectangular piezoceramic transducers are widely used in ultrasonic evaluation and health monitoring techniques and structural vibration control applications. In this paper the flexural waves excited by rectangular transducers adhesively attached to isotropic plates are investigated. In view of the difficulties in developing accurate analytical models describing the transfer characteristics of the transducer due to the complex electromechanical transduction processes and transducer-structure interactions involved, a combined theoretical-experimental approach is developed. A multiple integral transform method is used to describe the propagation behaviour of the waves in the plates, while a heterodyne Doppler laser vibrometer is employed as a non-contact receiver device. This combined theoretical-experimental approach enables the efficient characterization of the electromechanical transfer properties of the piezoelectric transducer which is essential for the development of optimized non-destructive evaluation systems. The results show that the assumption of a uniform contact pressure distribution between the transducer and the plate can accurately predict the frequency spectrum and time domain response signals of the propagating waves along the main axes of the rectangular transmitter element.
Abstract:
The technique of permanently attaching piezoelectric transducers to structural surfaces has demonstrated great potential for quantitative non-destructive evaluation and smart materials design. For thin structural members such as composite laminated plates, it has been well recognized that guided Lamb wave techniques can provide a very sensitive and effective means for large area interrogation. However, since in these applications multiple wave modes are generally generated and the individual modes are usually dispersive, the received signals are very complex and difficult to interpret. An attractive way to deal with this problem has recently been introduced by applying piezoceramic transducer arrays or interdigital transducer (IDT) technologies. In this paper, the acoustic wave field in composite laminated plates excited by piezoceramic transducer arrays or IDTs is investigated. Based on dynamic piezoelectricity theory, a discrete layer theory and a multiple integral transform method, an analytical-numerical approach is developed to evaluate the input impedance characteristics of the transducer and the surface velocity response of the plate. The method enables the quantitative evaluation of the influence of the electrical characteristics of the excitation circuit, the geometric and piezoelectric properties of the transducer array, and the mechanical and geometrical features of the laminate. Numerical results are presented to validate the developed method and show the capability for single-wave-mode selection and isolation. The results show that the interaction between individual elements of the piezoelectric array has a significant influence on the performance of the IDT, and these effects cannot be neglected even in the case of low frequency excitation. It is also demonstrated that adding backing materials to the transducer elements can improve the excitability of specific wave modes. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This study investigated the influence of harvest residue management practices on soil organic matter (SOM) composition and quality from two second-rotation Eucalyptus globulus plantations in southwestern Australia, using solid-state 13C nuclear magnetic resonance (NMR) spectroscopy with cross-polarisation and magic-angle-spinning (CPMAS) and dipolar dephasing (DD). Soil samples (0–5 cm) were collected every 12 months for 5 years from two sites that had contrasting soil types and fertility. Harvest residue management treatments established at both sites were (a) no harvest residues; and (b) double harvest residues. The use of 13C CPMAS and DD NMR spectroscopy enabled the successful non-destructive detection of SOM quality changes in the two E. globulus plantations. Relative intensities of 13C CPMAS NMR spectral regions were similar at both sites, and for both harvest residue treatments, indicating that SOM composition was also similar. DD NMR spectra revealed resonances in SOM assigned to lignin and tannin structures, with larger resonances in the carbonyl and alkyl C regions that were indicative of cuticular material, enabling detection of changes in SOM quality. Retention of double harvest residues on the soil surface increased SOM quality compared with removal of all harvest residues at both sites, as indicated by the NMR aromaticities, but this was most noticeable at Manjimup, which had greater initial soil fertility.
Abstract:
The development of TDR for measurement of soil water content and electrical conductivity has resulted in a large shift in measurement methods for a breadth of soil and hydrological characterization efforts. TDR has also opened new possibilities for soil and plant research. Five examples show how TDR has enhanced our ability to conduct our soil- and plant-water research. (i) Oxygen is necessary for healthy root growth and plant development but quantitative evaluation of the factors controlling oxygen supply in soil depends on knowledge of the soil water content by TDR. With water content information we have modeled successfully some impact of tillage methods on oxygen supply to roots and their growth response. (ii) For field assessment of soil mechanical properties influencing crop growth, water content capability was added to two portable soil strength measuring devices: (a) A TDT (Time Domain Transmittivity)-equipped soil cone penetrometer was used to evaluate seasonal soil strength-water content relationships. In conventional tillage systems the relationships are dynamic and achieve the more stable no-tillage relationships only relatively late in each growing season; (b) A small TDR transmission line was added to a modified sheargraph that allowed shear strength and water content to be measured simultaneously on the same sample. In addition, the conventional graphing procedure for data acquisition was converted to datalogging using strain gauges. Data acquisition rate was improved by more than a factor of three with improved data quality. (iii) How do drought tolerant plants maintain leaf water content? Non-destructive measurement of TDR water content using a flat serpentine triple wire transmission line replaces more lengthy procedures of measuring relative water content. Two challenges remain: drought-stressed leaves alter salt content, changing electrical conductivity, and drought induced changes in leaf morphology affect TDR measurements.
(iv) Remote radar signals are reflected from within the first 2 cm of soil. Appropriate calibration of radar imaging for soil water content can be achieved by a parallel pair of blades separated by 8 cm, reaching 1.7 cm into soil and forming a 20 cm TDR transmission line. The correlation between apparent relative permittivity from TDR and synthetic aperture radar (SAR) backscatter coefficient was 0.57 from an airborne flyover. These five examples highlight the diversity in the application of TDR in soil and plant research.
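The agreement figure quoted in example (iv) — a correlation of 0.57 between TDR apparent relative permittivity and SAR backscatter — is a Pearson correlation coefficient. A minimal sketch follows; the paired readings below are invented for illustration only.

```python
import numpy as np

# Invented paired measurements: TDR apparent relative permittivity and
# the corresponding SAR backscatter coefficient (dB) at the same plots.
tdr_permittivity = np.array([5.2, 8.1, 11.4, 15.0, 9.3, 12.7])
sar_backscatter = np.array([-14.0, -12.5, -10.1, -9.0, -12.9, -11.2])

# Pearson correlation coefficient between the two measurement series.
r = np.corrcoef(tdr_permittivity, sar_backscatter)[0, 1]
```

A positive r here means wetter soil (higher permittivity) coincides with stronger backscatter, which is the relationship the airborne flyover comparison tests.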
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
Abstract:
The testing of concurrent software components can be difficult due to the inherent non-determinism present in these components. For example, if the same test case is run multiple times, it may produce different results. This non-determinism may lead to problems with determining expected outputs. In this paper, we present and discuss several possible solutions to this problem in the context of testing concurrent Java components using the ConAn testing tool. We then present a recent extension to the tool that provides a general solution to this problem that is sufficient to deal with the level of non-determinism that we have encountered in testing over 20 components with ConAn. © 2005 IEEE
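The problem the abstract describes — the same concurrent test producing different results on different runs — can be illustrated with a minimal sketch. This is a generic Python illustration of the non-determinism and of one common remedy (accepting any outcome from an enumerated set of valid results), not the ConAn tool or its Java components.

```python
import threading

def run_once():
    """One execution of a 'test': two threads append to a shared list,
    so the observed order depends on thread scheduling."""
    result = []
    t1 = threading.Thread(target=result.append, args=("a",))
    t2 = threading.Thread(target=result.append, args=("b",))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return tuple(result)

# Both interleavings are correct behaviour, so the expected output is a
# set of valid outcomes rather than a single value.
VALID = {("a", "b"), ("b", "a")}

# Repeated runs may yield different members of VALID, but never anything else.
outcomes = {run_once() for _ in range(100)}
assert outcomes <= VALID
```

The difficulty the paper addresses is exactly that enumerating such valid-outcome sets by hand becomes impractical as component complexity grows.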
Abstract:
To simulate cropping systems, crop models must not only give reliable predictions of yield across a wide range of environmental conditions, they must also quantify water and nutrient use well, so that the status of the soil at maturity is a good representation of the starting conditions for the next cropping sequence. To assess their suitability for this task, a range of crop models currently used in Australia was tested. The models differed in their design objectives, complexity and structure and were (i) tested on diverse, independent data sets from a wide range of environments and (ii) further evaluated at the component level with one detailed data set from a semi-arid environment. All models were coded into the cropping systems shell APSIM, which provides a common soil water and nitrogen balance. Crop development was input, thus differences between simulations were caused entirely by differences in simulating crop growth. Under nitrogen non-limiting conditions between 73 and 85% of the observed kernel yield variation across environments was explained by the models. This ranged from 51 to 77% under varying nitrogen supply. Water and nitrogen effects on leaf area index were predicted poorly by all models, resulting in erroneous predictions of dry matter accumulation and water use. When measured light interception was used as input, most models improved in their prediction of dry matter and yield. This test highlighted a range of compensating errors in all modelling approaches. The time course and final amount of water extraction were simulated well by two models, while others left up to 25% of potentially available soil water in the profile. Kernel nitrogen percentage was predicted poorly by all models due to its sensitivity to small dry matter changes. Yield and dry matter could be estimated adequately for a range of environmental conditions using the general concepts of radiation use efficiency and transpiration efficiency. However, leaf area and kernel nitrogen dynamics need to be improved to achieve better estimates of water and nitrogen use if such models are to be used to evaluate cropping systems. (C) 1998 Elsevier Science B.V.
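The "percentage of observed yield variation explained" statistic quoted above is a coefficient of determination (R²). A minimal sketch of how such a figure is computed from paired observed and simulated yields; the data below are invented for illustration.

```python
import numpy as np

# Invented paired values: observed vs model-simulated kernel yield (t/ha).
observed = np.array([2.1, 3.4, 4.8, 1.9, 5.2, 3.0])
predicted = np.array([2.4, 3.1, 4.5, 2.2, 4.9, 3.3])

# Coefficient of determination: share of observed variance explained
# by the model's predictions.
ss_res = np.sum((observed - predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot
```

A model explaining "73-85% of variation" corresponds to R² between 0.73 and 0.85 on such paired data.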
Abstract:
Over half a million heroin misusers receive oral methadone maintenance treatment world-wide[1], but the maintenance prescription of injectable opioid drugs, like heroin, remains controversial. In 1992 Switzerland began a large scale evaluation of heroin and other injectable opiate prescribing that eventually involved 1035 misusers.[2,3] The results of the evaluation have recently been reported.[4] These show that it was feasible to provide heroin by intravenous injection at a clinic, up to three times a day, for seven days a week. This was done while maintaining good drug control, good order, client safety, and staff morale. Patients were stabilised on 500 to 600 mg heroin daily without evidence of increasing tolerance. Retention in treatment was 89% at six months and 69% at 18 months.[4] The self-reported use of non-prescribed heroin fell significantly, but other drug use was minimally affected. The death rate was 1% per year, and there were no deaths from overdose among participants . . .
Abstract:
Previous work on generating state machines for the purpose of class testing has not been formally based. There has also been work on deriving state machines from formal specifications for testing non-object-oriented software. We build on this work by presenting a method for deriving a state machine for testing purposes from a formal specification of the class under test. We also show how the resulting state machine can be used as the basis for a test suite developed and executed using an existing framework for class testing. To derive the state machine, we identify the states and possible interactions of the operations of the class under test. The Test Template Framework is used to formally derive the states from the Object-Z specification of the class under test. The transitions of the finite state machine are calculated from the derived states and the class's operations. The formally derived finite state machine is transformed to a ClassBench testgraph, which is used as input to the ClassBench framework to test a C++ implementation of the class. The method is illustrated using a simple bounded queue example.
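The abstract's approach — deriving a finite state machine from a class specification and exercising each transition against an implementation — can be sketched for its own bounded queue example. This is a generic Python illustration, not the paper's Object-Z, Test Template Framework, or ClassBench tooling; the queue class stands in for the C++ class under test.

```python
class BoundedQueue:
    """A simple bounded FIFO queue standing in for the class under test."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def enqueue(self, x):
        assert len(self.items) < self.capacity, "enqueue on full queue"
        self.items.append(x)

    def dequeue(self):
        assert self.items, "dequeue on empty queue"
        return self.items.pop(0)

    def state(self):
        # Abstract state, as a state-machine derivation would identify it.
        n = len(self.items)
        if n == 0:
            return "empty"
        return "full" if n == self.capacity else "partial"

# Transitions of the derived finite state machine:
# (state before, operation, state after), for capacity 2.
transitions = [
    ("empty", "enqueue", "partial"),
    ("partial", "enqueue", "full"),
    ("full", "dequeue", "partial"),
    ("partial", "dequeue", "empty"),
]

# Walk the machine, checking the abstract state before and after each operation.
q = BoundedQueue(capacity=2)
for before, op, after in transitions:
    assert q.state() == before
    if op == "enqueue":
        q.enqueue(1)
    else:
        q.dequeue()
    assert q.state() == after
```

In the paper the states come formally from the Object-Z specification and the walk is driven by a ClassBench testgraph, but the underlying check is of this shape.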
Abstract:
Recent empirical studies have found significant evidence of departures from competition in the input side of the Australian bread, breakfast cereal and margarine end-product markets. For example, Griffith (2000) found that firms in some parts of the processing and marketing sector exerted market power when purchasing grains and oilseeds from farmers. As noted at the time, this result accorded well with the views of previous regulatory authorities (p.358). In the mid-1990s, the Prices Surveillance Authority (PSA 1994) determined that the markets for products contained in the Breakfast Cereals and Cooking Oils and Fats indexes were "not effectively competitive" (p.14). The PSA consequently maintained price surveillance on the major firms in this product group. The Griffith result is also consistent with the large number of legal judgements against firms in this sector over the past decade for price fixing or other types of non-competitive behaviour. For example, bread manufacturer George Weston was fined twice during 2000 for non-competitive conduct, and the ACCC has also recently pursued and won cases against retailer Safeway in grains and oilseeds product lines.
Abstract:
Objectives: (1) To establish test performance measures for transient evoked otoacoustic emission (TEOAE) testing of 6-year-old children in a school setting; (2) to investigate whether TEOAE testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry and TEOAE data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic (ROC) curves, along with specificity, sensitivity, accuracy and efficiency values, were determined for a variety of TEOAE/pure tone screening/tympanometry comparisons. Results: The TEOAE failure rate for the group was 20.3%. The failure rate for pure tone screening was 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. In essence, findings from the comparison of overall TEOAE pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the TEOAE SNR versus pure tone screening analysis resulted in improved performance measures.
Conclusions: The present study poses several implications for the clinical implementation of TEOAE screening for entry-level school children. TEOAE pass/fail criteria will require revision. The findings of the current investigation offer support to the possible replacement of pure tone screening with TEOAE testing for 6-year-old children. However, they do not suggest the replacement of the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
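The test performance measures reported above (sensitivity or hit rate, specificity, accuracy, positive predictive value) all derive from a 2x2 confusion matrix comparing the screening result against the reference standard. A minimal sketch follows; the counts are invented and only the 1880-ear total echoes the abstract.

```python
# Invented 2x2 confusion-matrix counts for 1880 ears:
# rows = reference standard, columns = screening outcome.
tp, fn = 150, 40     # reference-positive ears: screen fail / screen pass
fp, tn = 230, 1460   # reference-negative ears: screen fail / screen pass

sensitivity = tp / (tp + fn)              # hit rate: impaired ears correctly failed
specificity = tn / (tn + fp)              # true-negative rate: normal ears correctly passed
accuracy = (tp + tn) / (tp + fn + fp + tn)
ppv = tp / (tp + fp)                      # positive predictive value
```

Sweeping a criterion (such as the SNR cut-off) over its range and plotting sensitivity against the false alarm rate (1 - specificity) at each setting yields the ROC curves the study reports.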