904 results for "Experimental measurements"
Abstract:
Collateral circulation, defined as the supplementary vascular network that maintains cerebral blood flow (CBF) when the main vessels fail, constitutes an important defense mechanism of the brain against ischemic stroke. In the present study, continuous arterial spin labeling (CASL) was used to quantify CBF and obtain perfusion territory maps of the major cerebral arteries in spontaneously hypertensive rats (SHRs) and their normotensive Wistar-Kyoto (WKY) controls. Results show that both WKY and SHR have complementary, yet significantly asymmetric, perfusion territories. Right or left dominances were observed in the territories of the anterior (ACA), middle and posterior cerebral arteries, and the thalamic artery. Magnetic resonance angiography showed that some of the asymmetries were correlated with variations of the ACA. The leptomeningeal circulation perfusing the outer layers of the cortex was observed as well. Significant and permanent changes in perfusion territories were obtained after temporary occlusion of the right middle cerebral artery in both SHR and WKY, regardless of their particular dominance. However, animals with right dominance presented a larger volume change of the left perfusion territory (23 ± 9%) than animals with left dominance (7 ± 5%, P < 0.002). The data suggest that animals with contralesional dominance primarily safeguard local CBF values with small changes in contralesional perfusion territory, while animals with ipsilesional dominance show a reversal of dominance and a substantial increase in contralesional perfusion territory. These findings show the usefulness of CASL to probe the collateral circulation.
Abstract:
The aim of this work is to analyse the stability of the X-ray spectrum emitted by a worn tube used for cardiovascular examinations, in order to verify its behaviour. This type of analysis will subsequently be carried out on CT tubes. To this end, a dedicated set-up was assembled with a cryogenic germanium detector, so as to achieve the best possible energy resolution, together with purpose-made collimators to reduce the photon flux and avoid pile-up effects. The set-up was built to achieve the best possible alignment as quickly as possible, with the goal of making the whole system portable. The tube used is an SRM Philips tube for cardiovascular examinations; this choice was made to minimise external factors (electromagnetic optics, emitters) and to focus attention solely on the effects of the successive exposures on the anode (roughness and bending), and on their behaviour during tube heating and subsequent cooling. The results show how, during an exposure, certain tube wear factors can substantially affect the acquired spectrum and thus alter the result. Subsequently, using the Philips spectrum reconstruction and simulation software, the experimentally observed difference was reproduced by varying selected parameters, so that the instability could be simulated and the factors causing it corrected. The results are of interest not only for this experiment but also for the future development of applications such as spectral CT. The next step will be to shift attention to a CT tube and verify whether the instability observed in this work persists in a more complex analysis such as CT.
Abstract:
The façade is the visible part of a building, and generally consists of various different constructive systems. The sound reduction index of the closing elements for the openings on a room’s façade is a determining factor in the sound insulation from airborne noise inside the space. Windows are the transparent part of the façade, and to improve their thermal behaviour and control solar radiation, they are often fitted with a series of external and internal protections such as shutters, slats and blinds. This work contains a summary of studies carried out using field measurements of airborne sound insulation on façades in rooms, in application of the standard UNE-EN ISO 140-5:1999. In all the rooms the windows were fitted with shutter boxes and rolling shutters, and the acoustic tests were made with the shutter in two positions (extended and fully retracted). The results were analysed considering the window opening system (openable or sliding) and the type of glass pane (monolithic or insulating glass unit, IGU). In the case of sliding windows, the airborne sound insulation of façades is greater when the shutter is extended than when it is retracted, and this should be taken into account when applying the aforementioned standard.
Abstract:
Ultrasonic sound velocity measurements with hand-held equipment remain, owing to their simplicity, among the most widely used methods for non-destructive grading of sawn wood, yet a dedicated normalization effort with respect to strength classes for Spanish species is still required. As part of an ongoing project aimed at defining standard testing methods, the effects of the dimensions of commonly tested Scots pine (Pinus sylvestris L.) timbers and of the equipment testing frequency on ultrasonic velocity were investigated. Dedicated full-wave finite-difference time-domain software allowed simulation of pulse propagation through timbers of representative length and section combinations. Sound velocity measurements vL were performed along the grain with the indirect method at 22 kHz and 45 kHz for grids of measurement points at specific distances. For sample sections larger than the cross-sectional wavelength λRT, the simulated sound velocity vL converges to vL = (CL/ρ)^0.5. For smaller square sections the sound velocity drops to vL = (EL/ρ)^0.5, where CL, EL and ρ are the stiffness, elastic modulus and density, respectively. The experiments confirm a linear regression between time of flight and measurement distance even at distances of less than two wavelengths (2λL); the fitted sound speed values increased by 15% between the two tested frequencies.
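The two limiting velocities quoted above follow directly from the stiffness, elastic modulus and density. A minimal sketch (the numerical values are assumed, illustrative figures for Scots pine, not data from the study):

```python
import math

def rod_wave_speed(E_L, rho):
    """Thin-rod limit: section much smaller than the wavelength, v = (E_L/rho)^0.5."""
    return math.sqrt(E_L / rho)

def bulk_wave_speed(C_L, rho):
    """Bulk limit: section larger than the wavelength, v = (C_L/rho)^0.5."""
    return math.sqrt(C_L / rho)

# Illustrative values (assumed, not taken from the study):
rho = 500.0    # density, kg/m^3
E_L = 12.0e9   # longitudinal elastic modulus, Pa
C_L = 13.5e9   # longitudinal stiffness, Pa (C_L >= E_L, so the bulk speed is higher)

v_rod  = rod_wave_speed(E_L, rho)   # lower limit, small square sections
v_bulk = bulk_wave_speed(C_L, rho)  # upper limit, sections larger than the wavelength
```

Because CL is always at least EL, the measured velocity for a given timber should fall between these two limits depending on its section size relative to the wavelength.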
Abstract:
Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e. its size, geometry and number of stations, has a great influence on the quality of the obtained results. Of these parameters, the number of available stations is usually the main limitation in field experiments, because of the economic and logistical constraints it involves. Sometimes, one or more stations of the initially planned array layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry. At other times, it is not possible to set up the desired array layout because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement in the array become a crucial point in the acquisition stage and subsequently in the dispersion curve estimation. In this paper we carry out an experimental study to determine the minimum number of stations that provides reliable dispersion curves for three prearranged array configurations (triangular, circular with central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained through the f-k method. For the f-k method, we compare the dispersion curves obtained for the original (prearranged) arrays with those obtained for the modified arrays, i.e. the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us to determine how robust the studied geometries are to station removal, and which station or combination of stations most degrades the array capability when unavailable.
All this information may be crucial for improving future array designs, establishing when the number of deployed stations can be reduced without losing the reliability of the obtained results.
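A misfit function of the kind described can be as simple as an RMS relative difference between the dispersion curve of the full array and that of a reduced array. A minimal sketch, with hypothetical phase-velocity values (the abstract does not specify the functional form, so this is only one plausible choice):

```python
import math

def dispersion_misfit(c_ref, c_mod):
    """RMS relative misfit between the dispersion curve of the full array
    (c_ref) and that of a modified array (c_mod), sampled at the same
    frequencies; smaller values mean the geometry tolerates the removal."""
    rel = [(mod - ref) / ref for ref, mod in zip(c_ref, c_mod)]
    return math.sqrt(sum(e * e for e in rel) / len(rel))

# Hypothetical Rayleigh-wave phase velocities (m/s) at five common frequencies:
full_array  = [420.0, 350.0, 300.0, 265.0, 240.0]  # original layout of X geophones
two_removed = [425.0, 346.0, 304.0, 262.0, 243.0]  # same site, n = 2 stations removed

m = dispersion_misfit(full_array, two_removed)  # ~1% here: the geometry is robust
```

Repeating this for every combination of removed stations identifies which stations the array can least afford to lose.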
Abstract:
Ocean acidification (OA) is the reduction in seawater pH due to the absorption of human-released CO2 by the world's oceans. The average surface oceanic pH is predicted to decline by 0.4 units by 2100. However, kelp metabolically modifies seawater pH via photosynthesis and respiration in some temperate coastal systems, resulting in daily pH fluctuations of up to ±0.45 units. It is unknown how these fluctuations in pH influence the growth and physiology of the kelp, or how this might change with OA. In laboratory experiments that mimicked the most extreme pH fluctuations measured within beds of the canopy-forming kelp Ecklonia radiata in Tasmania, the growth and photosynthetic rates of juvenile E. radiata were greater under fluctuating pH (8.4 in the day, 7.8 at night) than in static pH treatments (8.4, 8.1, 7.8). However, pH fluctuations had no effect on growth rates and a negative effect on photosynthesis when the mean pH of each treatment was reduced by 0.3 units. Currently, pH fluctuations have a positive effect on E. radiata but this effect could be reversed in the future under OA, which is likely to impact the future ecological dynamics and productivity of habitats dominated by E. radiata.
Abstract:
Wave measurement is of vital importance for assessing wave power resources and for developing wave energy devices, especially for wave energy production and the survivability of the devices. Wave buoys are among the most popular measuring technologies developed and used for long-term wave measurements. To determine whether wave characteristics can be recorded accurately by wave buoys, an experimental study was carried out on the performance of three wave buoy models, namely two WaveScan buoys and one ODAS buoy, in a wave tank using the European FP7 MARINET facilities. This paper presents the test results in both the time and frequency domains and the comparison between the wave buoy and wave gauge measurements. The analysis reveals that, for both regular and irregular waves, the WaveScan buoys perform better than the ODAS buoy in terms of accuracy, and the WaveScan buoy measurements correlate very well with those from the wave gauges.
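The buoy-versus-gauge correlation mentioned above can be quantified with a plain Pearson coefficient on synchronized records. A minimal sketch with hypothetical surface-elevation samples (not data from the tests):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two synchronized surface-elevation records."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical surface elevations (m) sampled from a buoy and a co-located gauge:
gauge = [0.00, 0.45, 0.70, 0.45, 0.00, -0.45, -0.70, -0.45]
buoy  = [0.02, 0.43, 0.72, 0.47, -0.01, -0.44, -0.72, -0.43]

r = pearson_r(gauge, buoy)   # close to 1.0 for a well-behaved buoy
```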
Abstract:
The viscosities of soybean and rapeseed biodiesels blended with mineral diesel fuel were measured at pressures of up to 200 MPa. Using a falling sinker-type viscometer, reproducible viscosity data were obtained from the time taken for a sinker to descend a fixed distance down an enclosed tube under the influence of gravity. Measurements were taken at pressures corresponding to those of interest in automotive common rail diesel engines, and at temperatures between 25 °C and 80 °C. In all cases, the viscosity of the biodiesel blends was found to increase exponentially with pressure, the blends being more viscous than pure mineral fuel. A pressure-freezing effect was not observed for the blends.
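An exponential pressure dependence of viscosity is commonly modelled with the Barus equation, μ(P) = μ0·exp(αP). The sketch below assumes that form; the parameter values are illustrative, not the measured ones:

```python
import math

def barus_viscosity(mu0, alpha, P):
    """Barus model: viscosity rising exponentially with pressure.
    mu0   -- dynamic viscosity at ambient pressure (mPa*s)
    alpha -- pressure-viscosity coefficient (1/MPa)
    P     -- gauge pressure (MPa)"""
    return mu0 * math.exp(alpha * P)

# Assumed illustrative parameters for a biodiesel blend (not measured values):
mu0, alpha = 4.0, 0.012

for P in (0.0, 100.0, 200.0):  # pressures spanning the common-rail range
    print(f"{P:5.0f} MPa -> {barus_viscosity(mu0, alpha, P):6.2f} mPa*s")
```

With these assumed parameters the viscosity roughly elevenfolds between ambient pressure and 200 MPa, which illustrates why rail-pressure viscosity data matter for injector modelling.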
Abstract:
In situ near-IR transmittance measurements have been used to characterize the density of trapped electrons in dye-sensitized solar cells (DSCs). Measurements have been made under a range of experimental conditions, including during open circuit photovoltage decay and during recording of the IV characteristic. The optical cross section of electrons at 940 nm was determined by relating the IR absorbance to the density of trapped electrons measured by charge extraction. The value, σn = 5.4 × 10^-18 cm^2, was used to compare the trapped electron densities in illuminated DSCs under open and short circuit conditions in order to quantify the difference in the quasi-Fermi level, nEF. It was found that nEF for the cells studied was 250 meV over a wide range of illumination intensities. IR transmittance measurements have also been used to quantify shifts in conduction band energy associated with dye adsorption.
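Relating absorbance to trapped electron density via the quoted cross section is a Beer-Lambert calculation. In the sketch below, only σn comes from the abstract; the film thickness and absorbance value are assumed for illustration:

```python
import math

def trapped_electron_density(A, sigma, d):
    """Beer-Lambert with an optical cross section: A = sigma * n * d / ln(10),
    so n = A * ln(10) / (sigma * d).  Units: sigma in cm^2, d in cm, n in cm^-3."""
    return A * math.log(10) / (sigma * d)

sigma = 5.4e-18   # cm^2 at 940 nm, the value quoted in the abstract
d     = 10e-4     # cm; assumed 10-um film thickness (illustrative)
A     = 0.02      # assumed near-IR absorbance change (illustrative)

n = trapped_electron_density(A, sigma, d)   # on the order of 1e18-1e19 cm^-3
```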
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of the turbulent diffusivity for the conserved scalar mean to that for the r.m.s. was found to be approximately 1.
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
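The role of the negative reactant covariance in the mean reaction rate can be illustrated with synthetic samples: for a second-order reaction the exact mean rate is k(⟨A⟩⟨B⟩ + ⟨a′b′⟩), so any closure that drops or underestimates the covariance overestimates the rate when ⟨a′b′⟩ < 0. (The Toor closure itself models this covariance from conserved-scalar statistics; the sketch below only demonstrates the sign effect, with made-up concentrations.)

```python
from statistics import mean

def exact_mean_rate(k, a, b):
    """Exact mean rate of A + B -> products: k * <AB> = k(<A><B> + <a'b'>)."""
    return k * mean(x * y for x, y in zip(a, b))

def no_covariance_rate(k, a, b):
    """Closure that drops the reactant covariance: k * <A><B>."""
    return k * mean(a) * mean(b)

# Synthetic anti-correlated samples (ppm): where NO is high, O3 is depleted,
# so the covariance <a'b'> is negative, as measured in the plume.
no = [2.0, 1.6, 1.2, 0.8, 0.4]
o3 = [0.2, 0.5, 0.9, 1.3, 1.7]
k = 1.0

exact  = exact_mean_rate(k, no, o3)     # 0.8
closed = no_covariance_rate(k, no, o3)  # 1.104 -> overestimates the true mean rate
```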
Abstract:
Two-stroke outboard boat engines using total loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller in order to provide information on its fundamental characteristics. The properties of the jet were examined far enough downstream to be relevant to the eventual modelling of the mixing problem. Measurements of the velocity and concentration field were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm in a weak co-flow of 0.04 m/s. The measurements were carried out in the Zone of Established Flow up to 50 propeller diameters downstream of the propeller, which was placed in a glass-walled flume 0.4 m wide with a free surface depth of 0.15 m. The jet and scalar plume development were compared to those of a classical free round jet. Further, results pertaining to radial distribution, self-similarity, standard deviation growth, maximum value decay and integral fluxes of velocity and concentration were presented and fitted with empirical correlations. Furthermore, propeller-induced mixing and the pollutant source concentration from a two-stroke engine were estimated.
Abstract:
Experimental results for a reactive non-buoyant plume of nitric oxide (NO) in a turbulent grid flow doped with ozone (O3) are presented. The Damköhler number (ND) for the experiment is of order unity, indicating that the turbulence and chemistry have similar timescales and both affect the chemical reaction rate. Continuous measurements of two components of velocity using hot-wire anemometry and of the two reactants using chemiluminescent analysers have been made. A spatial resolution for the reactants of four Kolmogorov scales has been possible because of the novel design of the experiment. Measurements at this resolution for a reactive plume are not found in the literature. The experiment has been conducted relatively close to the grid, in the region where self-similarity of the plume has not yet developed. Statistics of a conserved scalar, deduced from both reactive and non-reactive scalars by conserved scalar theory, are used to establish the mixing field of the plume, which is found to be consistent with theoretical considerations and with those found by other investigators in non-reactive flows. Where appropriate, the reactive species means and higher moments, probability density functions, joint statistics and spectra are compared with their respective frozen, equilibrium and reaction-dominated limits deduced from conserved scalar theory. The theoretical limits bracket the reactive scalar statistics where this should be so according to conserved scalar theory. Both reactants approach their equilibrium limits with greater distance downstream. In the region of measurement, the plume reactant behaves as the reactant not in excess and the ambient reactant behaves as the reactant in excess. The reactant covariance lies outside its frozen and equilibrium limits for this value of ND. The reaction rate closure of Toor (1969) is compared with the measured reaction rate. The gradient model is used to obtain turbulent diffusivities from turbulent fluxes.
Diffusivity of a non-reactive scalar is found to be close to that measured in non-reactive flows by others.
Abstract:
The Monte Carlo DICOM Tool-Kit (MCDTK) is a software suite designed for treatment plan dose verification, using the BEAMnrc and DOSXYZnrc Monte Carlo codes. MCDTK converts DICOM-format treatment plan information into Monte Carlo input files and compares the results of Monte Carlo treatment simulations with conventional treatment planning dose calculations. In this study, a treatment is planned using a commercial treatment planning system, delivered to a pelvis phantom containing ten thermoluminescent dosimeters and simulated using BEAMnrc and DOSXYZnrc with inputs derived from MCDTK. The dosimetric accuracy of the Monte Carlo data is then evaluated via comparisons with the dose distribution obtained from the treatment planning system as well as the in-phantom point dose measurements. The simulated beam arrangement produced by MCDTK is found to be in geometric agreement with the planned treatment. An isodose display generated from the Monte Carlo data by MCDTK shows general agreement with the isodose display obtained from the treatment planning system, except for small regions around density heterogeneities in the phantom, where the pencil-beam dose calculation performed by the treatment planning system is likely to be less accurate. All point dose measurements agree with the Monte Carlo data obtained using MCDTK, within confidence limits, and all except one of these point dose measurements show closer agreement with the Monte Carlo data than with the doses calculated by the treatment planning system. This study provides a simple demonstration of the geometric and dosimetric accuracy of Monte Carlo simulations based on information from MCDTK.
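The point-dose comparison described here reduces to per-point percent differences checked against a confidence band. A minimal sketch with hypothetical doses and an assumed 5% band (neither taken from the study):

```python
def percent_differences(measured, calculated):
    """Per-point percent difference of calculated dose relative to measurement."""
    return [100.0 * (c - m) / m for m, c in zip(measured, calculated)]

# Hypothetical point doses (Gy) at the ten TLD positions (not the study's data):
tld = [2.00, 1.95, 1.90, 1.85, 1.70, 1.60, 1.20, 0.90, 0.60, 0.30]
mc  = [2.02, 1.93, 1.92, 1.83, 1.71, 1.62, 1.19, 0.91, 0.61, 0.31]

diffs = percent_differences(tld, mc)
ok    = [abs(d) <= 5.0 for d in diffs]   # assumed 5% confidence band per point
```

The same routine run against the planning-system doses would reproduce the study's second comparison, showing which calculation sits closer to each measurement.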
Numerical and experimental studies of cold-formed steel floor systems under standard fire conditions
Abstract:
Light gauge cold-formed steel frame (LSF) structures are increasingly used in industrial, commercial and residential buildings because of their non-combustibility, dimensional stability, and ease of installation. A floor-ceiling system is an example of their applications. LSF floor-ceiling systems must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Fire rated floor-ceiling assemblies formed with new materials and construction methodologies have been increasingly used in buildings. However, limited research has been undertaken in the past, and hence a thorough understanding of their fire resistance behaviour is not available. Recently a new composite panel, in which an external insulation layer is used between two plasterboards, has been developed at QUT to provide a higher fire rating to LSF floors under standard fire conditions. However, its increased fire rating could not be determined using the currently available design methods. Research on LSF floor systems under fire conditions is relatively recent, and the behaviour of floor joists and other components in these systems is not fully understood. The present design methods thus require the use of expensive fire protection materials to protect them from excessive heat increase during a fire. This leads to uneconomical and conservative designs. Fire rating of these floor systems is provided simply by adding more plasterboard sheets to the steel joists, and such an approach is totally inefficient. Hence a detailed fire research study was undertaken into the structural and thermal performance of LSF floor systems, including those protected by the new composite panel system, using full scale fire tests and extensive numerical studies. The experimental study included both the conventional and the new steel floor-ceiling systems under structural and fire loads, using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005).
Fire tests included the behavioural and deflection characteristics of LSF floor joists until failure, as well as related time-temperature measurements across the section and along the length of all the specimens. Full scale fire tests have shown that the structural and thermal performance of the externally insulated LSF floor system was superior to that of traditional LSF floors with or without cavity insulation. Therefore this research recommends the use of the new composite panel system for cold-formed LSF floor-ceiling systems. The numerical analyses of LSF floor joists were undertaken using the finite element program ABAQUS, based on the measured time-temperature profiles obtained from fire tests under both steady state and transient state conditions. Mechanical properties at elevated temperatures were considered based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). Finite element models were calibrated using the full scale test results and used to provide a detailed understanding of the structural fire behaviour of the LSF floor-ceiling systems. The models also confirmed the superior performance of the new composite panel system. The validated model was then used in a detailed parametric study. Fire tests and the numerical studies showed that plasterboards provided sufficient lateral restraint to LSF floor joists until their failure. Hence only the section moment capacity of LSF floor joists subjected to local buckling effects was considered in this research. To predict the section moment capacity at elevated temperatures, the effective section modulus of joists at ambient temperature is generally considered adequate. However, this research has shown that it leads to considerable over-estimation of the local buckling capacity of joists subjected to non-uniform temperature distributions under fire conditions.
Therefore new simplified fire design rules were proposed for LSF floor joists to determine the section moment capacity at elevated temperature, based on AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The accuracy of the proposed fire design rules was verified with finite element analysis results. A spreadsheet-based design tool was also developed from these design rules to predict the failure load ratio versus time, and moment capacity versus time and temperature, for various LSF floor configurations. Idealised time-temperature profiles of LSF floor joists were developed based on fire test measurements. They were used in the detailed parametric study to fully understand the structural and fire behaviour of LSF floor panels. Simple design rules were also proposed to predict both critical average joist temperatures and failure times (fire rating) of LSF floor systems with various floor configurations and structural parameters under any given load ratio. Findings from this research have led to a comprehensive understanding of the structural and fire behaviour of LSF floor systems, including those protected by the new composite panel, and to simple design methods. These design rules were proposed within the guidelines of the Australian/New Zealand, American and European cold-formed steel standards and codes of practice. They may also lead to further improvements in fire resistance through suitable modifications to the current composite panel system.