313 results for Experimental Cinema
Abstract:
The use of immobilised TiO2 for the purification of polluted water streams introduces the need to evaluate the effect of mechanisms such as the transport of pollutants from the bulk of the liquid to the catalyst surface and the transport phenomena inside the porous film. Experimental results of the effects of film thickness on the observed reaction rate, for both liquid-side and support-side illumination, are compared here with the predictions of a one-dimensional mathematical model of the porous photocatalytic slab. Good agreement was observed between the experimentally obtained photodegradation of phenol and its by-products and the corresponding model predictions. The results confirm that an optimal catalyst thickness exists and, for the films employed here, is 5 μm. Furthermore, the modelling results highlight that porosity, together with the intrinsic reaction kinetics, controls the photocatalytic activity of the film: the former by influencing transport phenomena and light absorption, the latter by dictating the rate of reaction.
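The existence of an optimal thickness can be illustrated with a minimal one-dimensional sketch (not the paper's actual model): first-order kinetics proportional to a Beer-Lambert light profile, support-side illumination, and a finite-difference solve of the steady diffusion-reaction equation. All parameter values (effective diffusivity `D`, rate constant `k`, absorption coefficient `alpha`) are hypothetical.

```python
import numpy as np

def observed_rate(L, D=1e-11, k=10.0, alpha=2e5, C_bulk=1.0, n=200):
    """Steady-state 1D diffusion-reaction in a porous film of thickness L (m),
    support-side illumination: light enters at x = L, pollutant at x = 0.
    Returns the pollutant flux into the film (the observed areal rate)."""
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    I = np.exp(-alpha * (L - x))           # Beer-Lambert attenuation from the support side
    # Assemble the tridiagonal system for D*C'' = k*I(x)*C
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = C_bulk           # fixed concentration at the liquid interface
    for i in range(1, n - 1):
        A[i, i - 1] = D / h**2
        A[i, i] = -2 * D / h**2 - k * I[i]
        A[i, i + 1] = D / h**2
    A[-1, -1] = 1.0; A[-1, -2] = -1.0      # zero-flux condition at the impermeable support
    C = np.linalg.solve(A, b)
    return D * (C[0] - C[1]) / h           # inward flux at the liquid interface

thicknesses = np.linspace(0.5e-6, 20e-6, 40)
rates = [observed_rate(L) for L in thicknesses]
L_opt = thicknesses[int(np.argmax(rates))]
print(f"optimal film thickness ~ {L_opt * 1e6:.1f} um (for these illustrative parameters)")
```

With light entering from the support side, thin films absorb little light while thick films force the pollutant to diffuse across an unlit region before reaching the photoactive zone, so the observed rate peaks at an intermediate thickness, qualitatively matching the optimum reported above.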
Abstract:
Electrostatic discharges have been identified as the most likely cause in a number of fire and explosion incidents with otherwise unexplained ignitions. The lack of data and of suitable models for this ignition mechanism leaves a void in any analysis seeking to quantify the importance of static electricity as a credible ignition source. Quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be fully carried out with our current understanding of the phenomenon. The study of electrostatics has a long history, but it was not until the widespread use of electronics that research turned to protecting electronic components from electrostatic discharges. Current experimental models of electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and are typically not supported by theoretical analysis. A preliminary low-voltage simulation and experiment was designed to investigate the characteristics of energy dissipation and provided a basis for a high-voltage investigation. At low voltage, the discharge energy represents about 10% of the initial capacitive energy available, and the energy dissipation occurs within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of energy is dissipated. Once the discharge pathway is established, minimal further energy is dissipated in the gap, as dissipation becomes dominated by other components and stray resistance in the discharge circuit. The initial low-voltage simulation work established the importance of the energy dissipation and the characteristics of the discharge. After the preliminary low-voltage work was completed, a high-voltage discharge experiment was designed and fabricated.
Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be captured and the energy dissipation in the circuit to be calculated. The discharge energy calculations are consistent with the low-voltage work, with about 30-40% of the total initial capacitive energy being dissipated in the resulting high-voltage arc. After the system was characterised and its operation validated, high-voltage ignition energy measurements were conducted on a solution of n-pentane evaporating in a 250 cm³ chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-pentane. The ignition data were analysed with standard statistical regression methods for tests that return binary (yes/no) outcomes and found to be in agreement with recent publications. The research demonstrates that energy dissipation depends heavily on the circuit configuration, most especially on the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology to further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials under various parameters (e.g., capacitance, inductance, and resistance). Systematic experiments on the discharge characteristics of the spark will also help clarify how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of the energy involved and its dissipation.
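The energy bookkeeping described above follows from the stored capacitive energy E = ½CV² and the way resistive dissipation splits between the arc and the rest of the discharge loop. A minimal sketch with hypothetical component values (none of these numbers come from the study):

```python
# Stored energy and its split between the spark gap and stray circuit resistance.
# All component values are illustrative, not from the experiments described above.
C = 100e-12                      # 100 pF capacitance (hypothetical)
V0 = 10e3                        # 10 kV charge voltage (hypothetical)
R_gap, R_stray = 50.0, 150.0     # arc and stray resistances in ohms (hypothetical)

E_stored = 0.5 * C * V0**2                # energy stored on the capacitor, E = 1/2 C V^2
frac_gap = R_gap / (R_gap + R_stray)      # series resistances dissipate in proportion to R
E_gap = frac_gap * E_stored               # portion of the energy delivered to the arc
tau = C * (R_gap + R_stray)               # RC time constant of the discharge loop

print(f"stored {E_stored*1e3:.2f} mJ, {frac_gap:.0%} ({E_gap*1e3:.2f} mJ) in the arc, "
      f"tau = {tau*1e9:.0f} ns")
```

The resistive-divider view makes the paper's headline observation concrete: the fraction of the stored energy that actually reaches the gap is set by the ratio of the arc resistance to the total loop resistance, so circuit configuration, capacitance, and stray resistance dominate the result.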
Abstract:
An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software had previously been used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitated flow feature characterisation within the GPT. The flow features played a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. The IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
Abstract:
Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature to rapidly obtain samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling will tend to break down if there is a reasonable number of experimental observations and/or the model parameter is high dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution to obtain a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
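As a rough sketch of the approach (not the authors' implementation), the Laplace approximation is built by locating the posterior mode, taking a Gaussian whose variance is the negative inverse Hessian at the mode, and then using that Gaussian as the importance distribution. The conjugate-normal toy model below is chosen so the quality of the importance weights is easy to check; the data, prior, and grid search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=20)            # hypothetical "future" data set

def log_post(theta):
    """Unnormalised log posterior: N(0, 10^2) prior, N(theta, 1) likelihood."""
    return -0.5 * (theta / 10.0) ** 2 - 0.5 * np.sum((y - theta) ** 2)

# Laplace approximation: Gaussian centred at the posterior mode, with
# variance taken from the negative inverse Hessian at the mode.
grid = np.linspace(-5.0, 5.0, 2001)
mode = grid[np.argmax([log_post(t) for t in grid])]
h = 1e-3
hess = (log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2
sd = np.sqrt(-1.0 / hess)

# Use the Laplace approximation as the importance distribution.
draws = rng.normal(mode, sd, size=5000)
logw = np.array([log_post(t) for t in draws]) + 0.5 * ((draws - mode) / sd) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()                                  # self-normalised importance weights
ess = 1.0 / np.sum(w ** 2)                    # effective sample size
print(f"posterior mean ~ {np.sum(w * draws):.3f}, ESS = {ess:.0f} of 5000")
```

Because this toy posterior is itself Gaussian, the Laplace proposal is nearly exact and the effective sample size stays close to the nominal 5000; with the prior as the importance distribution and 20 observations, the weights would instead concentrate on a handful of draws, which is the breakdown the paper is addressing.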
Abstract:
This paper presents the details of an experimental study of a cold-formed steel hollow flange channel beam known as the LiteSteel Beam (LSB) subject to combined bending and shear actions. LSB sections are produced by a patented manufacturing process involving simultaneous cold-forming and electric resistance welding. Due to the geometry of the LSB, as well as its unique residual stress characteristics and initial geometric imperfections resulting from the manufacturing process, much of the existing research on common cold-formed steel sections is not directly applicable to the LSB. Experimental and numerical studies have been carried out to evaluate the behaviour and design of LSBs subject to pure bending and to predominantly shear actions. To date, however, no investigation has been conducted into the strength of LSB sections under combined bending and shear actions. Combined bending and shear is especially prevalent at the supports of continuous-span and cantilever beams, where the interaction of high shear force and bending moment can reduce the capacity of a section to well below that of the same section subject only to pure shear or pure bending. Hence, experimental studies were conducted to assess the combined bending and shear behaviour and strength of LSBs. Eighteen tests were conducted and the results were compared with the current AS/NZS 4600 and AS 4100 design rules. The AS/NZS 4600 design rules were shown to grossly underestimate the combined bending and shear capacities of LSBs, and hence two lower-bound design equations were proposed based on the experimental results. Use of these equations will significantly improve the confidence and cost-effectiveness of designing LSBs for combined bending and shear actions.
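For context on what an interaction rule looks like, cold-formed steel codes such as AS/NZS 4600 express combined moment-shear actions through a circular interaction equation for sections with unstiffened webs. The check below sketches that generic form with illustrative numbers; it is not one of the two lower-bound equations proposed in the paper, and the capacity values are hypothetical.

```python
def combined_capacity_ok(M, V, Ms, Vv):
    """Circular moment-shear interaction check of the general form used in
    AS/NZS 4600 for unstiffened webs: (M/Ms)^2 + (V/Vv)^2 <= 1.
    M, V are the design bending moment and shear force; Ms, Vv are the
    section moment and shear capacities (illustrative values only --
    not the lower-bound LSB design equations proposed in the paper)."""
    return (M / Ms) ** 2 + (V / Vv) ** 2 <= 1.0

# A point well inside the interaction curve passes; one outside fails.
print(combined_capacity_ok(M=20.0, V=15.0, Ms=30.0, Vv=40.0))
print(combined_capacity_ok(M=28.0, V=35.0, Ms=30.0, Vv=40.0))
```

The quadratic form captures the behaviour the abstract describes: near a support, moderate levels of moment and shear acting together can violate the check even though each action alone is well below its individual capacity.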
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work using inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by and influencing the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to record spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss selected creative works from the project, along with the process and considerations behind the work.
Abstract:
The complex [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']diiodidonickel(II), [NiI2(C18H40P2)] or (dtbpe-[kappa]2P)NiI2 [dtbpe is 1,2-bis(di-tert-butylphosphanyl)ethane], is bright blue-green in the solid state and in solution, but, contrary to the structure predicted for a blue or green nickel(II) bis(phosphine) complex, it is found to be close to square planar in the solid state. The solution structure is deduced to be similar, because the optical spectra measured in solution and in the solid state contain similar absorptions. In solution at room temperature, no 31P{1H} NMR resonance is observed, but the very small solid-state magnetic moment at temperatures down to 4 K indicates that the weak paramagnetism of this nickel(II) complex can be ascribed to temperature-independent paramagnetism (TIP), and that the complex has no unpaired electrons. The red [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']dichloridonickel(II), [NiCl2(C18H40P2)] or (dtbpe-[kappa]2P)NiCl2, is very close to square planar and very weakly paramagnetic in the solid state and in solution, while the maroon [1,2-bis(di-tert-butylphosphanyl)ethane-[kappa]2P,P']dibromidonickel(II), [NiBr2(C18H40P2)] or (dtbpe-[kappa]2P)NiBr2, is isostructural with the diiodide in the solid state and displays paramagnetism intermediate between that of the dichloride and the diiodide in the solid state and in solution. Density functional calculations demonstrate that distortion from an ideal square plane in these complexes occurs on a flat potential energy surface. The calculations reproduce the observed structures and colours, and explain the trends observed for these and similar complexes. Although the theoretical investigation identified magnetic-dipole-allowed excitations characteristic of TIP, theory predicts the molecules to be diamagnetic.
Abstract:
We compare the consistency of choices in two methods used to elicit risk preferences on an aggregate as well as on an individual level. We ask subjects to choose twice from a list of nine decisions between two lotteries, as introduced by Holt and Laury (2002, 2005) alternating with nine decisions using the budget approach introduced by Andreoni and Harbaugh (2009). We find that, while on an aggregate (subject pool) level the results are consistent, on an individual (within-subject) level, behaviour is far from consistent. Within each method as well as across methods we observe low (simple and rank) correlations.
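Within-subject consistency across two elicitation methods can be summarised by a rank correlation between each subject's risk measure under the two procedures. The sketch below uses entirely hypothetical, tie-free data (the simple rank formula used here does not average tied ranks, so real choice data would need a tie-aware variant):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation for tie-free data: Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)   # rank of each element of a
    rb = np.argsort(np.argsort(b)).astype(float)   # rank of each element of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical per-subject risk measures from the two elicitations
# (e.g., number of safe choices vs. a risk index from budget allocations).
holt_laury = np.array([4, 6, 5, 7, 3, 9, 2, 8])
budget     = np.array([5, 1, 6, 4, 8, 3, 2, 7])
print(f"within-subject rank correlation: {spearman(holt_laury, budget):.2f}")
```

With these made-up numbers the correlation is close to zero, illustrating the kind of low within-subject association the study reports even when aggregate distributions look consistent.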
Abstract:
Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behaviour during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
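The benefit of combining gradients can be illustrated with a toy sketch: two simulated organelle classes co-migrate on one gradient but separate on a second, so concatenating the per-protein profiles improves assignment. This is synthetic data with a simple leave-one-out nearest-neighbour classifier, not LOPIT data or the SVM/PLS-DA machinery used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def profiles(centre, n, noise=0.3):
    """Simulated abundance profiles of n proteins across 4 gradient fractions."""
    return np.asarray(centre, float) + noise * rng.standard_normal((n, 4))

# Two organelles that co-migrate on gradient 1 but separate on gradient 2.
g1_a, g1_b = profiles([1, 3, 2, 1], 50), profiles([1, 3, 2, 1], 50)
g2_a, g2_b = profiles([3, 1, 1, 2], 50), profiles([1, 1, 3, 2], 50)

def knn_accuracy(Xa, Xb, k=5):
    """Leave-one-out k-nearest-neighbour classification accuracy."""
    X = np.vstack([Xa, Xb])
    y = np.array([0] * len(Xa) + [1] * len(Xb))
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        votes = y[np.argsort(d)[:k]]
        correct += (votes.sum() > k / 2) == y[i]
    return correct / len(X)

acc_single = knn_accuracy(g1_a, g1_b)
acc_combined = knn_accuracy(np.hstack([g1_a, g2_a]), np.hstack([g1_b, g2_b]))
print(f"gradient 1 alone: {acc_single:.2f}; gradients combined: {acc_combined:.2f}")
```

On the first gradient alone the two classes are indistinguishable (accuracy near chance); once the second gradient's fractions are appended to each profile, assignment becomes essentially perfect, mirroring the improvement the study reports when single-gradient data are combined.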
Abstract:
Building on and bringing up to date the material presented in the first installment of Directory of World Cinema: Australia and New Zealand, this volume continues the exploration of the cinema produced in Australia and New Zealand since the beginning of the twentieth century. Among the additions to this volume are in-depth treatments of the locations that feature prominently in the countries' cinema. Essays by leading critics and film scholars consider the significance in films of the outback and the beach, which is evoked as a liminal space in Long Weekend and a symbol of death in Heaven's Burning, among other films. Other contributions turn the spotlight on previously unexplored genres and key filmmakers, including Jane Campion, Rolf de Heer, Charles Chauvel, and Gillian Armstrong.
Abstract:
To investigate the correlation between postmenopausal osteoporosis (PMO) and the pathogenesis of periodontitis, ovariectomized rats were prepared and experimental periodontitis was induced using a silk ligature. Inflammatory factors and bone metabolic markers were measured in the serum and periodontal tissues of the ovariectomized rats using an automatic chemistry analyzer, enzyme-linked immunosorbent assays, and immunohistochemistry. The bone mineral density of the whole body, pelvis, and spine was analyzed using dual-energy X-ray absorptiometry and image analysis. All data were analyzed using SPSS 13.0 statistical software. It was found that ovariectomy could upregulate the expression of interleukin-6 (IL-6), the receptor activator of nuclear factor-κB ligand (RANKL), and osteoprotegerin (OPG), and downregulate IL-10 expression in periodontal tissues, which resulted in progressive alveolar bone loss in experimental periodontitis. This study indicates that changes in cytokines and bone turnover markers in the periodontal tissues of ovariectomized rats contribute to the damage of periodontal tissues.
Abstract:
Independent filmmaking within the context of Australian cinema is a multifaceted subject. In comparison to the United States, where production can be characterised as bifurcated between major studio production and so-called “indie” or independent production without the backing of the majors, since the 1970s and until recently the vast majority of Australian feature film production has been independent filmmaking. Like most so-called national cinemas, most Australian movies are supported by both direct and indirect public subvention administered by state and federal government funding bodies, and it could be argued that filmmakers are, to a certain degree, “dependent” on official mandates. As this chapter demonstrates, national production slates are subject to budget restraints and cutbacks, official cultural policies (for example, the pursuit of international co-productions and local content quotas), and shifts in policy direction, among other factors. Therefore, within the context of Australian cinema, feature film production operating outside the public funding system could be understood as “independent”. However, as is the case for most English-language national cinemas, independence has long been defined in terms of autonomy from Hollywood, and, as alluded to above, as Australia becomes more dependent upon international inputs into production, higher-budget movies are becoming less independent from Hollywood. As such, this chapter argues that independence in Australian cinema can be viewed as having two poles: independence from direct government funding and independence from Hollywood studios. With a specific focus on industry and policy contexts, this chapter explores the key issues that constitute independence for Australian cinema.
In so doing it examines the production characteristics of four primary domains of contemporary independent filmmaking in Australia, namely: “Aussiewood” production; government-backed low-to-mid budget production; co-productions; and guerrilla filmmaking.
Abstract:
Load-bearing Light Gauge Steel Frame (LSF) walls are commonly made of conventional lipped channel sections and gypsum plasterboards. Recently, innovative steel sections such as hollow flange channel sections have been proposed as studs in LSF wall frames with a view to improving their fire resistance ratings. A series of full-scale fire tests was then undertaken to investigate the fire performance of the new LSF wall systems under standard fire conditions. Test wall frames made of hollow flange section studs were lined with fire-resistant gypsum plasterboards on both sides and were subjected on one side to increasing temperatures as given by the standard fire curve. Both uninsulated and cavity-insulated walls were tested with load ratios varying from 0.2 to 0.6. This paper presents the details and results of this experimental study on the fire performance of LSF walls. Test results showed that walls made of the new hollow flange channel section studs have superior fire performance compared with lipped channel section stud walls. They also showed that the fire performance of cavity-insulated walls was inferior to that of uninsulated walls. The reasons for this fire behaviour are described in this paper.
Abstract:
A fiber Bragg grating (FBG) accelerometer using transverse forces is more sensitive than one using axial forces with the same mass of the inertial object, because a barely stretched FBG fixed at its two ends is much more sensitive to transverse forces than to axial ones. The spring-mass theory, with the assumption that the axial force changes little during the vibration, cannot accurately predict its sensitivity and resonant frequency in the gravitational direction, because the FBG is barely prestretched and the assumption does not hold. The theory was modified but still required experimental verification, owing to limitations in the original experiments such as (1) friction between the inertial object and the shell; (2) errors involved in estimating the time-domain records; (3) limited data; and (4) the large interval (~5 Hz) between the tested frequencies in the frequency-response experiments. The experiments presented here verify the modified theory by overcoming those limitations. On the frequency responses, it is observed that the optimal condition for simultaneously achieving high sensitivity and high resonant frequency is at the infinitesimal prestretch. On the sensitivity at the same frequency, the experimental sensitivities of the FBG accelerometer with a 5.71 g inertial object at 6 Hz (1.29, 1.19, 0.88, 0.64, and 0.31 nm/g at the 0.03, 0.69, 1.41, 1.93, and 3.16 nm prestretches, respectively) agree with the predicted static sensitivities (1.25, 1.14, 0.83, 0.61, and 0.29 nm/g, correspondingly). On the resonant frequency, (1) the assumption that the resonant frequencies in the forced and free vibrations are similar is experimentally verified; (2) its dependence on the distance between the FBG's fixed ends is examined, showing the resonant frequency to be independent of that distance; and (3) the predictions of the spring-mass theory and the modified theory are compared with the experimental results, showing that the modified theory predicts more accurately.
The modified theory can thus be used with greater confidence to guide the design of such accelerometers by predicting their static sensitivity and resonant frequency, and may find application in other fields where the spring-mass theory fails.
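For reference, the baseline spring-mass relations that the modified theory builds on can be sketched numerically. Only the 5.71 g inertial mass comes from the abstract; the resonant frequency used below is a hypothetical placeholder, and this is the simple model the paper says must be modified for barely prestretched transverse-force FBGs.

```python
import math

m = 5.71e-3        # inertial mass from the abstract (5.71 g), in kg
f_res = 20.0       # hypothetical resonant frequency in Hz (not from the abstract)

# Simple spring-mass model (the baseline the paper modifies):
# resonant frequency f = (1 / 2*pi) * sqrt(k / m)  ->  effective stiffness k = m * (2*pi*f)^2
k = m * (2 * math.pi * f_res) ** 2

# Static deflection per 1 g of acceleration: x = m * g / k = g / (2*pi*f)^2
x_per_g = 9.81 / (2 * math.pi * f_res) ** 2

print(f"effective stiffness k ~ {k:.1f} N/m")
print(f"static deflection ~ {x_per_g * 1e3:.3f} mm per g")
```

The g/(2πf)² form makes the usual trade-off explicit: doubling the resonant frequency cuts the static deflection, and hence the sensitivity, by a factor of four, which is why simultaneously achieving high sensitivity and high resonant frequency requires a careful operating point such as the infinitesimal prestretch identified above.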