Abstract:
Several competing hypotheses attempt to explain how environmental conditions affect mass-independent basal metabolic rate (BMR) in mammals. One of the most inclusive and yet debatable hypotheses is the one that associates BMR with food habits, including habitat productivity. These effects have been widely investigated at the interspecific level under the assumption that for any given species all traits are fixed. Consequently, the variation among individuals is largely ignored. Intraspecific analysis of physiological traits has the potential to compensate for many of the pitfalls associated with interspecific analyses and, thus, to be a useful approach for evaluating hypotheses regarding metabolic adaptation. In this study, we investigated the effects of food quality, availability, and predictability on the BMR of the leaf-eared mouse Phyllotis darwini. BMR was measured on freshly caught animals from the field, since they experience natural seasonal variations in environmental factors (and, hence, variations in habitat productivity) and diet quality. BMR was significantly correlated with the proportion of dietary plants and seeds. In addition, BMR was significantly correlated with monthly habitat productivity. Path analysis indicated that, in our study, habitat productivity was responsible for the observed changes in BMR, while diet per se had no effect on this variable.
Abstract:
We describe the design, manufacturing, and testing results of a Nb3Sn superconducting coil in which TiAlV alloys were used instead of stainless steel to reduce the magnetization contribution caused by the heat treatment for the A15 Nb3Sn phase formation, which affects the magnetic field homogeneity. Prior to coil manufacturing, several structural materials were studied and evaluated in terms of their mechanical and magnetic properties in as-worked, welded, and heat-treated conditions. The manufacturing process employed the wind-and-react technique followed by vacuum-pressure impregnation (VPI) at 1 MPa. The critical steps of the manufacturing process, besides the heat treatment and impregnation, are the wire splicing and joint manufacturing, in which copper posts supported by Si3N4 ceramic were used. The coil was tested with and without a background NbTi coil, and the results showed performance exceeding the design quench current, confirming the successful coil construction.
Abstract:
This paper presents a new non-destructive testing (NDT) technique for reinforced concrete structures, intended to identify the components of their reinforcement. A time-varying electromagnetic field is generated close to the structure by electromagnetic devices specially designed for this purpose. The presence of ferromagnetic materials (the steel bars of the reinforcement) embedded in the concrete disturbs the magnetic field at the surface of the structure. These field alterations are detected by sensor coils placed on the concrete surface. Variations in the position and cross section (size) of the steel bars embedded in the concrete produce slightly different values for the voltages induced in the coils. The induced voltages were obtained in laboratory tests, and multilayer perceptron artificial neural networks trained with the Levenberg-Marquardt algorithm were used to identify the location and size of the bars. Preliminary results can be considered very good.
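As a rough illustration of the inverse-mapping step described in this abstract, the sketch below trains a multilayer perceptron to estimate bar position and size from induced-voltage readings. It is only a sketch: the data are synthetic placeholders rather than the laboratory measurements, and scikit-learn's MLPRegressor with the lbfgs solver is used as a stand-in, since scikit-learn does not provide Levenberg-Marquardt training.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: each sample is a vector of induced voltages
# read from the sensor coils; targets are bar depth (mm) and diameter (mm).
n_samples, n_coils = 500, 8
X = rng.normal(size=(n_samples, n_coils))        # placeholder voltage readings
y = np.column_stack([
    30 + 20 * rng.random(n_samples),             # placeholder bar depth
    6 + 14 * rng.random(n_samples),              # placeholder bar diameter
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# lbfgs is used here because scikit-learn has no Levenberg-Marquardt solver.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                 max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", model.score(X_test, y_test))
```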
Abstract:
The negative-dimensional integration method (NDIM) is a technique for dealing with D-dimensional Feynman loop integrals. Since most physical quantities in perturbative quantum field theory (pQFT) require the ability to solve such integrals, the quicker and easier the method for evaluating them, the better. NDIM is a novel and promising technique, and it therefore needs to be put to the test in different contexts and situations, comparing the results it yields with those already known from other well-established methods. It is in this perspective that we consider here the calculation of an on-shell two-loop three-point function in a massless theory. Surprisingly, this approach provides twelve non-trivial results in terms of double power series. More astonishing is the fact that we can show these twelve solutions to be different representations of the same well-known single result obtained via other methods. It comes as a surprise that the solution for the particular integral we are dealing with is twelvefold degenerate.
Abstract:
This paper presents experimental research on the use of eddy current testing (ECT) and artificial neural networks (ANNs) to identify the gauge and position of steel bars embedded in concrete structures. The paper presents details of the ECT probe and the concrete specimens constructed for the tests, together with a study of the influence of the concrete on the measured voltages. New measurements were then made with a larger number of specimens, simulating a field condition, and the results were used to generate training and validation vectors for multilayer perceptron ANNs. The results show a high percentage of correct identification with respect to both the gauge of the bar and the thickness of the concrete cover.
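A minimal sketch of the classification step described here is given below, assuming synthetic placeholder data in place of the ECT measurements; each class label stands for one (bar gauge, cover thickness) combination, and scikit-learn's MLPClassifier is used as a generic multilayer perceptron rather than the authors' exact network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder ECT measurements: one feature vector of probe voltages per scan.
n_samples, n_features = 400, 10
X = rng.normal(size=(n_samples, n_features))

# Placeholder class labels: each class is one (bar gauge, cover thickness) pair.
y = rng.integers(0, 6, size=n_samples)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=1),
)

# Cross-validated identification rate, analogous to the "percentage of
# correct identification" reported in the abstract.
scores = cross_val_score(clf, X, y, cv=5)
print("Mean identification accuracy:", scores.mean())
```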
Abstract:
In this paper, we report on a field experiment carried out in a Typic Eutrorthox. The experiment was initiated in the 1997-98 agricultural season as a randomized block design with four treatments (0, 5, 10, and 20 t ha-1) of sewage sludge and five replicates. Compound soil samples were obtained from 20 subsamples collected at depths of 0-0.1 and 0.1-0.2 m. Cu, Fe, Mn, and Zn concentrations were extracted with DTPA pH 7.3, 0.1 mol L-1 HCl, Mehlich-I, Mehlich-III, and 0.01 mol L-1 CaCl2. Metal concentrations were determined via atomic absorption spectrometry. Diagnostic leaves and the whole above-ground portion of plants were collected to determine Cu, Fe, Mn, and Zn concentrations, extracted by nitric-perchloric digestion and later determined via atomic absorption spectrometry. Sewage sludge application caused increases in the concentrations of soil Cu, Fe, and Mn in samples taken from the 0-0.1 m depth, as evaluated by the extractants Mehlich-I, Mehlich-III, 0.01 mol L-1 HCl, and DTPA pH 7.3. None of the extractants provided efficient estimates of changes in Mn concentrations. The acid extractants extracted more Cu, Fe, Mn, and Zn than the saline and chelating solutions. The highest concentrations of Cu, Fe, and Zn were obtained with Mehlich-III, while the highest concentrations of Mn were obtained with HCl. We did not observe a correlation between the extractants and the concentrations of the elements either in the diagnostic leaves or in the tissues of the whole maize plant (Zea mays L.).
Abstract:
The rainforests of Mexico have been degraded and severely fragmented, and urgently require restoration. However, the practice of restoration has been limited by the lack of species-specific data on survival and growth responses to local environmental variation. This study explores the differential performance of 14 wet tropical early-, mid-, or late-successional tree species grown in two abandoned pastures with contrasting land-use histories. After 18 months, seedling survival and growth of at least 7 of the 14 tree species studied were significantly higher in the site with a much longer history of land use (site 2). Saplings of the three early-successional species showed exceptional growth rates; however, differences in performance were noted in relation to the differing soil properties between the experimental sites. Mid-successional species generally showed slow growth rates but high seedling survival, whereas late-successional species exhibited poor seedling survival at both study sites. Stepwise linear regressions revealed that the species' integrated response index, which combines survivorship and growth measurements, was influenced mostly by differences in soil pH between the two abandoned pastures. Our results suggest that local environmental variation among abandoned pastures with contrasting land-use histories influences sapling survival and growth. Furthermore, the similarity of responses among species with the same successional status allowed us to make some preliminary site- and species-specific silvicultural recommendations. Future field experiments should extend the number of species and the range of environmental conditions in order to identify site generalists or more narrowly adapted species, which we would call sensitive.
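As an illustration of the stepwise-regression step mentioned above, the following sketch runs a simple forward selection of soil predictors for a hypothetical integrated response index. All variable names and values are placeholders, and statsmodels OLS with adjusted R^2 is used as a generic stand-in for the stepwise procedure actually employed in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Placeholder data: soil variables per planting plot and an integrated
# response index (IRI) combining survival and growth (hypothetical values).
n = 60
soil = pd.DataFrame({
    "pH": rng.normal(5.5, 0.6, n),
    "organic_matter": rng.normal(4.0, 1.0, n),
    "bulk_density": rng.normal(1.2, 0.1, n),
})
iri = 0.8 * soil["pH"] + rng.normal(0, 0.5, n)   # synthetic response

# Simple forward stepwise selection: add the predictor that most improves
# adjusted R^2 until no remaining candidate improves the fit.
selected, remaining = [], list(soil.columns)
best_adj_r2 = -np.inf
while remaining:
    scores = {}
    for cand in remaining:
        X = sm.add_constant(soil[selected + [cand]])
        scores[cand] = sm.OLS(iri, X).fit().rsquared_adj
    cand = max(scores, key=scores.get)
    if scores[cand] <= best_adj_r2:
        break
    best_adj_r2 = scores[cand]
    selected.append(cand)
    remaining.remove(cand)

print("Selected predictors:", selected, "adjusted R^2:", round(best_adj_r2, 3))
```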
Abstract:
The interactions between outdoor bronzes and the environment, which lead to bronze corrosion, require a better understanding in order to design effective conservation strategies in the Cultural Heritage field. In the present work, investigations of real patinas on the outdoor monument to Vittorio Bottego (Parma, Italy) and laboratory studies on accelerated corrosion testing of inhibited (by silane-based films, with and without ceria nanoparticles) and non-inhibited quaternary bronzes are reported and discussed. In particular, a wet & dry ageing method was used both for testing the efficiency of the inhibitor and for patinating bronze coupons before applying the inhibitor. A wide range of spectroscopic techniques was used to characterize the core metal (SEM+EDS, XRF, AAS), the corroded surfaces (SEM+EDS, portable XRF, micro-Raman, ATR-IR, Py-GC-MS), and the ageing solutions (AAS). The main conclusions were: 1. The investigations on the Bottego monument confirmed the differentiation of the corrosion products as a function of exposure geometry, already observed in previous works, further highlighting the need to take into account the different surface features when selecting conservation procedures such as the application of inhibitors (i.e. the relative Sn enrichment in unsheltered areas requires inhibitors which effectively interact not only with Cu but also with Sn). 2. The ageing (pre-patination) cycle on coupons was able to reproduce the relative Sn enrichment that actually occurs on real patinated surfaces, making the bronze specimens representative of the real substrate for bronze inhibitors. 3. The non-toxic silane-based inhibitors display a good protective efficiency towards pre-patinated surfaces, unlike other widely used inhibitors such as benzotriazole (BTA) and its derivatives. 4. 3-mercapto-propyl-trimethoxy-silane (PropS-SH) with added CeO2 nanoparticles generally offered better corrosion protection than PropS-SH alone.
Abstract:
The research interest of this study is to investigate surface immobilization strategies for proteins and other biomolecules by the surface plasmon field-enhanced fluorescence spectroscopy (SPFS) technique. The recrystallization features of the S-layer proteins and the possibility of combining the S-layer lattice arrays with other functional molecules make this protein a prime candidate for supramolecular architectures. The recrystallization behavior on gold or on the secondary cell wall polymer (SCWP) was recorded by SPR. The optical thicknesses and surface densities for different protein layers were calculated. In DNA hybridization tests performed in order to discriminate different mismatches, recombinant S-layer-streptavidin fusion protein matrices showed their potential for new microarrays. Moreover, SCWP-coated gold chips, covered with a controlled and oriented assembly of S-layer fusion proteins, represent an even more sensitive fluorescence testing platform. Additionally, S-layer fusion proteins as the matrix for LHCII immobilization strongly demonstrate superiority over routine approaches, proving the possibility of utilizing them as a new strategy for biomolecular coupling. In the study of the SPFS hCG immunoassay, the biophysical and immunological characteristics of this glycoprotein hormone were presented first. After investigating the effect of the biotin thiol dilution on the coupling efficiency, an interfacial binding model comprising an appropriate binary SAM structure and the versatile streptavidin-biotin interaction was chosen as the basic supramolecular architecture for the fabrication of an SPFS-based immunoassay. Next, the affinity characteristics between different antibodies and hCG were measured via an equilibrium binding analysis, which is the first example of titrating such a high-affinity interaction by SPFS. The results agree very well with the constants reported in the literature. Finally, a sandwich assay and a competitive assay were selected as templates for SPFS-based hCG detection, and an excellent LOD of 0.15 mIU/ml was attained via the one-step sandwich method. Such high sensitivity not only fulfills clinical requirements, but is also better than that of most other biosensors. Fully understanding how LHCII complexes transfer sunlight energy directionally and efficiently to the reaction center is potentially useful for constructing biomimetic devices such as solar cells. After an introduction to the structural and spectroscopic features of LHCII, different surface immobilization strategies for LHCII were summarized. Among them, the strategy based on the His-tag and the immobilized metal (ion) affinity chromatography (IMAC) technique was of particular interest and resulted in different kinds of home-fabricated His-tag chelating chips. Their substantial protein coupling capacity, maintenance of high biological activity, and remarkably repeatable binding ability on the same chip after regeneration were demonstrated. Moreover, different parameters related to the stability of surface-coupled reconstituted complexes, including sucrose, detergent, lipid, oligomerization, temperature, and circulation rate, were evaluated in order to establish the most effective immobilization conditions. In addition, partial lipid bilayers obtained from the fusion of LHCII-containing proteoliposomes on the surface were observed by the QCM technique.
Finally, the inter-complex energy transfer between neighboring LHCIIs on a gold-protected silver surface under excitation with a blue laser (λ = 473 nm) was recorded for the first time, and the factors influencing the energy transfer efficiency were evaluated.
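As a hedged illustration of the equilibrium binding analysis mentioned above, the following sketch fits a simple 1:1 Langmuir binding isotherm to hypothetical titration data in order to extract an affinity constant; the concentrations, signal values, and the scipy-based fit are illustrative stand-ins, not the actual SPFS analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder titration data: antibody concentration (M) vs. normalized
# equilibrium SPFS fluorescence signal (hypothetical values).
conc = np.array([1e-11, 3e-11, 1e-10, 3e-10, 1e-9, 3e-9, 1e-8])
signal = np.array([0.08, 0.21, 0.45, 0.68, 0.86, 0.95, 0.99])

def langmuir(c, s_max, kd):
    """Simple 1:1 Langmuir binding isotherm."""
    return s_max * c / (kd + c)

# Fit the isotherm to estimate the saturation signal and the dissociation
# constant K_D (the affinity constant is its reciprocal).
popt, pcov = curve_fit(langmuir, conc, signal, p0=[1.0, 1e-10])
s_max, kd = popt
print(f"Fitted S_max = {s_max:.2f}, K_D = {kd:.2e} M")
```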
Abstract:
The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies for evaluating the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and an understanding of how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities useful for comparing them and assessing their quality. The dissertation then proposes a new enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it can really be applied in practical cases. Most of the dissertation discusses and demonstrates how the presented testing methodology can be applied to such different systems, and even be used to evade security measures by inverting goals and scopes. Real cases are often hard to find in methodology documents; in contrast, this dissertation aims to show real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, and Pvote and Scantegrity are the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems are demonstrated through these field-case analyses. Furthermore, reputation and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain many of the sources of noise present in the real data. In this Thesis, we present a method for performing simulations of observations, producing realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for performing cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, with different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters through photometric catalogues of galaxies, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
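One simple way to phrase the photometric-redshift requirement quoted above is as a per-galaxy cut |z_phot - z_true| < 0.05(1+z). The sketch below applies such a cut to a synthetic mock catalogue to estimate what fraction of the sample would have to be discarded; the redshift distribution and scatter are invented placeholders, not the Thesis' actual simulation results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder mock catalogue: true redshifts and noisy photometric estimates.
z_true = rng.uniform(0.2, 3.0, 100_000)
z_phot = z_true + rng.normal(0.0, 0.04, z_true.size) * (1 + z_true)

# Euclid-like requirement quoted in the abstract: error below 0.05 (1 + z).
ok = np.abs(z_phot - z_true) < 0.05 * (1 + z_true)
discard_fraction = 1.0 - ok.mean()
print(f"Fraction of the sample to discard: {discard_fraction:.1%}")
```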
Abstract:
Natural stones have been widely used in the construction field since antiquity. Building materials undergo decay processes due to mechanical, chemical, physical, and biological causes that can act together. Therefore, an interdisciplinary approach is required in order to understand the interaction between the stone and the surrounding environment. Utilization of buildings, inadequate restoration activities, and anthropogenic weathering factors in general may contribute to this degradation process. For these reasons, in the last few decades new technologies and techniques have been developed and introduced in the restoration field. Consolidants are widely used in the restoration and conservation of cultural heritage in order to improve internal cohesion and to reduce the weathering rate of building materials. It is important to define the penetration depth of a consolidant in order to determine its efficacy. Impregnation mainly depends on the microstructure of the stone (i.e. porosity) and on the properties of the product itself. Throughout this study, tetraethoxysilane (TEOS) applied to Globigerina limestone samples has been chosen as the object of investigation. After hydrolysis and condensation, TEOS deposits silica gel inside the pores, improving the cohesion of the grains. X-ray computed tomography has been used to characterize the internal structure of the limestone samples, treated and untreated with a TEOS-based consolidant. The aim of this work is to investigate the penetration depth and the distribution of the TEOS inside the porosity, using both traditional approaches and advanced X-ray tomographic techniques, the latter allowing the internal visualization of the materials in three dimensions. Fluid transport properties and porosity have been studied both at the macroscopic scale, by means of capillary uptake tests and radiography, and at the microscopic scale, investigated with X-ray Tomographic Microscopy (XTM). This makes it possible to identify changes in porosity, by comparing images taken before and after the treatment, and to locate the consolidant inside the stone. Tests were initially run at the University of Bologna, where characterization of the stone was carried out. The research then continued in Switzerland: X-ray tomography and radiography were performed at Empa, the Swiss Federal Laboratories for Materials Science and Technology, while XTM measurements with synchrotron radiation were run at the Paul Scherrer Institute in Villigen.
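A minimal sketch of the before/after porosity comparison described above is given below. It assumes already-reconstructed greyscale CT volumes and a fixed grey-value threshold separating pore space from solid; the volumes generated here are random placeholders, and a real analysis would rely on proper segmentation of the measured tomograms.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder reconstructed CT volumes (normalized grey values),
# before and after consolidant treatment.
before = rng.normal(0.6, 0.2, size=(64, 64, 64))
after = np.clip(before + rng.normal(0.05, 0.02, size=before.shape), 0, 1)

def porosity(volume, threshold=0.4):
    """Fraction of voxels below the grey-value threshold, taken as pore space."""
    return float((volume < threshold).mean())

# A drop in porosity after treatment indicates pores filled by silica gel.
print(f"Porosity before treatment: {porosity(before):.3f}")
print(f"Porosity after treatment:  {porosity(after):.3f}")
```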
Abstract:
Despite the many proposed advantages of nanotechnology, there are increasing concerns as to the potential adverse human health and environmental effects that the production of, and subsequent exposure to, nanoparticles (NPs) might pose. In regard to human health, these concerns are founded upon the wealth of knowledge gained from research into the effects observed following exposure to environmental air pollution. It is known that increased exposure to environmental air pollution can cause reduced respiratory health, as well as exacerbate pre-existing conditions such as cardiovascular disease and chronic obstructive pulmonary disease. Such disease states have also been associated with exposure to the NP component contained within environmental air pollution, raising concerns as to the effects of NP exposure. It is not only exposure to accidentally produced NPs, however, that should be approached with caution. Over the past decades, NPs have been specifically engineered for a wide range of consumer, industrial, and technological applications. Due to the inevitable exposure of humans to NPs, owing to their use in such applications, it is imperative that an understanding of how NPs interact with the human body is gained. In vivo research offers a beneficial model for gaining immediate and direct knowledge of human exposure to such xenobiotics. This research approach, however, has numerous limitations. Increased research using in vitro models has therefore been performed, as these models provide an inexpensive and high-throughput alternative to in vivo research strategies. Despite such advantages, there are also various restrictions on in vitro research. Therefore, the aim of this review, in addition to providing a short perspective on the field of nanotoxicology, is to discuss (1) the advantages and disadvantages of in vitro research and (2) how in vitro research may provide essential information pertaining to the human health risks posed by NP exposure.
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end users. The creation of these tests is time-consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection: (1) A new method for inserting faults into a circuit netlist. Given any circuit netlist, our tool can insert multiplexers into the circuit at the appropriate internal nodes to aid in fault emulation on reconfigurable hardware. (2) A parallel method of fault emulation. The benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel. This research exploits this to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. (3) A new method for organizing the most efficient faults. Most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics. By utilizing hardware, this research is able to process data faster and use a simpler method for efficiently minimizing the number of test inputs.
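To make the stuck-at fault model and the test-pattern reduction idea concrete, the sketch below simulates stuck-at faults on a tiny hypothetical two-input circuit in software and then greedily picks a reduced set of input patterns that still detects every detectable fault. This is only an illustrative software analogue; the work described above performs the fault injection and evaluation in hardware on an FPGA, not in Python.

```python
from itertools import product

# Toy gate-level circuit with inputs a, b and internal nodes c, d:
#   c = a AND b,  d = a OR b,  out = c XOR d
# A stuck-at fault forces one node to a constant 0 or 1.
NODES = ["a", "b", "c", "d"]

def evaluate(a, b, fault=None):
    """Return the circuit output, optionally forcing one node to a stuck value."""
    vals = {"a": a, "b": b}
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    vals["c"] = vals["a"] & vals["b"]
    vals["d"] = vals["a"] | vals["b"]
    if fault and fault[0] in ("c", "d"):
        vals[fault[0]] = fault[1]
    return vals["c"] ^ vals["d"]

faults = [(node, stuck) for node in NODES for stuck in (0, 1)]
patterns = list(product((0, 1), repeat=2))

# A pattern detects a fault if the faulty output differs from the fault-free one.
detects = {p: {f for f in faults if evaluate(*p, f) != evaluate(*p)}
           for p in patterns}

# Greedy test-pattern reduction: repeatedly pick the pattern covering the most
# still-undetected faults (a simple stand-in for the heuristics mentioned above).
remaining = {f for covered in detects.values() for f in covered}
chosen = []
while remaining:
    best = max(patterns, key=lambda p: len(detects[p] & remaining))
    chosen.append(best)
    remaining -= detects[best]

print("Reduced test set:", chosen)
```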