196 results for field experiment
Abstract:
This chapter introduces Compass Points: The Landscapes, Locations and Coordinates of Identities in Contemporary Performance Making, a volume which collects papers from the Australasian Association for Theatre, Drama and Performance Studies (ADSA) Conference 2012.
Abstract:
We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min to acquire a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of (2.2 μm)³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample over more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow quantifying the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then the internal stresses are released and dehydration happens efficiently, producing new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
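The square-root fit of the front advance can be sketched numerically. The times and front positions below are invented purely for illustration (they are not the experiment's data), and the closed-form least-squares estimate assumes a zero-intercept model d = k·√t:

```python
import math

# Hypothetical front positions d (mm) at times t (min) -- illustrative values
# only, not the experiment's data.
t = [40, 80, 120, 160, 200, 240]
d = [0.30, 0.43, 0.52, 0.60, 0.67, 0.74]

# Zero-intercept least-squares fit of d = k * sqrt(t):
# minimising sum((d_i - k*sqrt(t_i))**2) over k gives
# k = sum(d_i * sqrt(t_i)) / sum(t_i).
k = sum(di * math.sqrt(ti) for di, ti in zip(d, t)) / sum(t)
pred = [k * math.sqrt(ti) for ti in t]
```

If the measured positions track the √t prediction closely, as here, the front advance is consistent with a diffusion-controlled process.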
Abstract:
An ongoing challenge in behavioral economics is to understand the variation observed in risk attitudes as a function of environmental context. Of particular interest is the effect of wealth on risk attitudes. Research in this area has, however, faced two constraints: the difficulty of studying the causal effects of large changes in wealth, and of studying the causal effects of losses on risk behavior. The present paper addresses this double limitation by providing evidence of the variation in risk attitudes after large losses, using a natural disaster (the Brisbane floods) as the setting for a natural experiment.
Abstract:
In this paper, we present the outcomes of a project exploring the use of Field Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
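The Thomas Algorithm that the FPGA pipeline implements can be sketched in software. This minimal Python version (variable names are our own; the hardware design itself is not reproduced here) performs one forward-elimination sweep followed by one back-substitution pass per system:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d with the Thomas Algorithm (TDMA).

    a: sub-diagonal   (a[0] is unused)
    b: main diagonal
    c: super-diagonal (c[-1] is unused)
    d: right-hand side
    Returns the solution vector x.
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each system depends only on its own coefficients, many such solves can proceed independently, which is what makes the algorithm a natural fit for the pipelined hardware design described above.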
Abstract:
Filtration using granular media such as quarried sand, anthracite and granular activated carbon is a well-known technique used in both water and wastewater treatment. A relatively new prefiltration method called pebble matrix filtration (PMF) technology has proved effective in treating high turbidity water during the heavy rain periods that occur in many parts of the world. Sand and pebbles are the principal filter media used in PMF laboratory and pilot field trials conducted in the UK, Papua New Guinea and Serbia. However, during the first full-scale trials at a water treatment plant in Sri Lanka in 2008, problems were encountered in sourcing the required uniform size and shape of pebbles due to cost, scarcity and Government regulations on pebble dredging. As an alternative to pebbles, hand-made clay pebbles (balls) were fired in a kiln and their performance evaluated for the sustainability of the PMF system. These clay balls within a filter bed are subjected to stresses due to self-weight and overburden; it is therefore important that the clay balls be able to withstand these stresses in water-saturated conditions. In this paper, experimentally determined physical properties, including compression failure load (uniaxial compressive strength) and tensile strength at failure (theoretical), of hand-made clay balls are described. Hand-made clay balls fired at kiln temperatures between 875 °C and 960 °C gave failure loads of between 3.0 kN and 7.1 kN. In another test, clay balls fired to 1250 °C gave a failure load of 35.0 kN, compared to natural Scottish cobbles with an average failure load of 29.5 kN. The uniaxial compressive strength of clay balls obtained by experiment is presented in terms of the tensile yield stress of the clay balls. Based on the effective stress principle in soil mechanics, a method for the estimation of the maximum theoretical load on clay balls used as filter media is proposed and compared with experimental failure loads.
Abstract:
This paper describes field research performed with a team of experts involved in the evaluation of Trippple, a system aimed at supporting the different phases of a tourist trip, in order to provide feedback and insights both on the functionalities already implemented (which at the time of evaluation were available only as early and very unstable prototypes) and on the functionalities still to be implemented. We show how the involvement of professionals helped to focus on challenging aspects, instead of less important, cosmetic issues, and proved profitable in terms of early feedback, issues spotted, and improvements suggested.
Abstract:
To feel another person's pulse is an intimate and physical interaction. In these prototypes we use near field communications to extend the tangible reach of our heartbeat, so another person can feel our heartbeat at a distance. The work is an initial experiment in near field haptic interaction, and is used to explore the quality of interactions that result from feeling another person's pulse. The work takes the form of two feathered white gauntlets, to be worn on the forearm. Each of the gauntlets contains a pulse sensor, radio transmitter and vibrator. The pulse of the wearer is transmitted to the other feathered gauntlet and transformed into haptic feedback. When there are two wearers, their heartbeats are exchanged, to be felt by each other without physical contact.
Abstract:
The problem of MHD natural convection boundary layer flow of an electrically conducting and optically dense gray viscous fluid along a heated vertical plate is analyzed in the presence of a strong cross magnetic field with radiative heat transfer. In the analysis, the radiative heat flux is treated in the optically thick radiation limit. Solutions valid for liquid metals are obtained by taking Pr≪1. The boundary layer equations are transformed into a convenient dimensionless form using a stream function formulation (SFF) and a primitive variable formulation (PVF). The non-similar equations obtained from the SFF are then simulated by the implicit finite difference (Keller-box) method, whereas the parabolic partial differential equations obtained from the PVF are integrated numerically by a direct finite difference method over the entire range of the local Hartmann parameter $\xi$. Further, asymptotic solutions are also obtained for large and small values of $\xi$. A favorable agreement is found between the asymptotic solutions and the results valid for all values of $\xi$. Numerical results are also presented graphically, showing the effect of various physical parameters on shear stress, rate of heat transfer, velocity and temperature.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the principle theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) into one model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available sources of asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators could be nought in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational and surviving. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly-developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
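EHM itself is specified in the thesis body rather than in this abstract, but the Proportional Hazard Model it generalises has the standard form h(t|z) = h0(t)·exp(β·z). A minimal sketch with an assumed Weibull baseline (the shape and scale values below are illustrative, not taken from the work):

```python
import math

def weibull_baseline_hazard(t, shape, scale):
    # Weibull baseline hazard: h0(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def phm_hazard(t, covariates, coeffs, shape=2.0, scale=1000.0):
    # Proportional Hazard Model: h(t|z) = h0(t) * exp(sum(beta_i * z_i)).
    # The covariates scale the baseline multiplicatively, which is exactly
    # the proportionality assumption that EHM relaxes.
    link = math.exp(sum(b * z for b, z in zip(coeffs, covariates)))
    return weibull_baseline_hazard(t, shape, scale) * link
```

In PHM the covariates can only rescale a fixed baseline; EHM, by contrast, lets condition indicators enter the baseline hazard itself while operating environment indicators act through the covariate function.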
Abstract:
As a novel sensing element, the fiber Bragg grating (FBG) is sensitive to both temperature and strain. Based on this characteristic, a high-sensitivity FBG temperature sensor can be made. However, as a result of the strain limit of the fiber, the temperature range such a sensor can endure is quite narrow. This drawback limits its application and complicates its storage and transport. We design and manufacture an FBG temperature sensor with tunable sensitivity. By tuning its sensitivity, its temperature range is changed, which enlarges its field of application, solves the problem of storage and transport, and brightens the prospects of FBG in temperature measurement. In experiments, by changing the fixing position of the bimetal we tuned the sensitivity of the high-sensitivity FBG sensor to different values (-47 pm/°C, -97.7 pm/°C, -153.3 pm/°C).
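Reading temperature from such a sensor reduces to dividing the measured Bragg-wavelength shift by the tuned sensitivity. The sketch below uses the -97.7 pm/°C setting quoted in the abstract (the function name is our own):

```python
def temperature_change(delta_lambda_pm, sensitivity_pm_per_C):
    """Convert a measured Bragg-wavelength shift (in pm) to a temperature
    change (in °C) for a sensor with the given tuned sensitivity (pm/°C)."""
    return delta_lambda_pm / sensitivity_pm_per_C

# With the bimetal tuned to -97.7 pm/°C, a -977 pm shift implies a 10 °C rise.
dT = temperature_change(-977.0, -97.7)
```

For comparison, a bare FBG near 1550 nm typically shifts by only about +10 pm/°C, which is why bimetal-amplified sensitivities of this magnitude count as high sensitivity.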
Abstract:
Background: Bicycle commuting in an urban environment of high air pollution is known as a potential health risk, especially for susceptible individuals. While risk management strategies aimed at reducing exposure to motorised traffic emissions have been suggested, few studies have assessed the utility of such strategies in real-world circumstances. Objectives: The potential of reducing exposure to ultrafine particles (UFP; < 0.1 µm) during bicycle commuting by lowering interaction with motorised traffic was investigated with real-time air pollution and acute inflammatory measurements in healthy individuals using their typical, and an alternative to their typical, bicycle commute route. Methods: Thirty-five healthy adults (mean ± SD: age = 39 ± 11 yr; 29% female) each completed two return trips of their typical route (HIGH) and a pre-determined altered route of lower interaction with motorised traffic (LOW; determined by the proportion of on-road cycle paths). Particle number concentration (PNC) and diameter (PD) were monitored in real time in-commute. Acute inflammatory indices of respiratory symptom incidence, lung function and spontaneous sputum (for inflammatory cell analyses) were collected immediately pre-commute, and one and three hours post-commute. Results: LOW resulted in a significant reduction in mean PNC (1.91 × 10⁴ ± 0.93 × 10⁴ ppcc vs. 2.95 × 10⁴ ± 1.50 × 10⁴ ppcc; p ≤ 0.001). Besides the incidence of in-commute offensive odour detection (42 vs. 56%; p = 0.019), the incidence of dust and soot observation (33 vs. 47%; p = 0.038) and nasopharyngeal irritation (31 vs. 41%; p = 0.007), acute inflammatory indices were not significantly associated with in-commute PNC, nor were these indices reduced with LOW compared to HIGH.
Conclusions: Exposure to PNC, and the incidence of offensive odour and nasopharyngeal irritation, can be significantly reduced when utilising a strategy of lowering interaction with motorised traffic whilst bicycle commuting, which may bring important benefits for both healthy and susceptible individuals.
Abstract:
Automatic pain monitoring has the potential to greatly improve patient diagnosis and outcomes by providing a continuous objective measure. One of the most promising methods is to detect facial expressions automatically. However, current approaches have failed due to their inability to: 1) integrate rigid and non-rigid head motion into a single feature representation, and 2) incorporate salient temporal patterns into the classification stage. In this paper, we tackle the first problem by developing a "histogram of facial action units" representation using Active Appearance Model (AAM) face features, and then utilize a Hidden Conditional Random Field (HCRF) to overcome the second issue. We show that both of these methods improve performance on the task of sequence-level pain detection compared to current state-of-the-art methods on the UNBC-McMaster Shoulder Pain Archive.