70 results for the scanning reference electrode technique

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Indium tin oxide (ITO) and polycrystalline boron-doped diamond (BDD) have been examined in detail using the scanning electrochemical microscopy technique in feedback mode. For the interrogation of electrodes made from these materials, the choice of mediator has been varied. Using Ru(CN)₆⁴⁻(aq), ferrocene methanol (FcMeOH), Fe(CN)₆³⁻(aq) and Ru(NH₃)₆³⁺(aq), approach curve experiments have been performed, and for purposes of comparison, apparent heterogeneous electron transfer rates (k_app) have been calculated from these data. In general, values of k_app appear to be affected mainly by the position of the mediator's reversible potential relative to the relevant semiconductor band edge (associated with majority carriers). For both the ITO (n-type) and BDD (p-type) electrodes, charge transfer is impeded and values are very low when using FcMeOH and Fe(CN)₆³⁻(aq) as mediators, while the use of Ru(NH₃)₆³⁺(aq) results in the largest value of k_app. With ITO, the surface is chemically homogeneous and no variation is observed for any given mediator. Data are also presented in which the potential of the ITO electrode is fixed using a ratio of the mediators Fe(CN)₆³⁻(aq) and Fe(CN)₆⁴⁻(aq). In stark contrast, the BDD electrode exhibits a range of k_app values for all mediators, depending on the position on the surface. Both electrode surfaces are very flat and very smooth; hence, for BDD, variations in feedback current imply a variation in electrochemical activity. A comparison of the feedback current with the substrate biased and unbiased shows a surprising degree of proportionality.
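Approach curves such as those described above are routinely compared against analytical approximations. As a hedged illustration (the standard literature fit for pure positive feedback over a conductor at RG ≈ 10, not the authors' own fitting procedure), the normalised tip current can be evaluated as a function of the normalised tip-substrate distance L = d/a:

```python
import math

def i_t_positive(L):
    """Normalised tip current for pure positive (conductive) feedback,
    RG ~ 10 (standard empirical fit); L = d/a is distance over tip radius."""
    return 0.68 + 0.78377 / L + 0.3315 * math.exp(-1.0672 / L)

# Feedback current rises steeply as the tip approaches a conducting substrate
for L in (0.1, 0.5, 1.0, 5.0):
    print(f"L = {L:4.1f}  I_T = {i_t_positive(L):6.3f}")
```

Fitting experimental approach curves to finite-kinetics versions of such expressions is what yields k_app values like those reported here.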

Relevance: 100.00%

Abstract:

The electrodeposition of copper onto copper, gold, palladium and glassy carbon (GC) electrodes via a hydrogen bubble templating method is reported. The composition of the underlying electrode material is found to significantly influence the morphology of the copper electrodeposit. Highly ordered porous structures are achieved on Cu and Au electrodes; on Pd, however, this order is disrupted and a rough, randomly oriented surface is formed, whereas on GC a bubble templating effect is not observed. Chronopotentiograms recorded during the electrodeposition process allow bubble formation and detachment from the surface to be monitored, with distinctly different potential-versus-time profiles observed at the different electrodes. The porous Cu surfaces are characterised by scanning electron microscopy, X-ray diffraction and cyclic voltammetric measurements recorded under alkaline conditions. The latter demonstrate that there are active sites present on electrodeposited copper whose coverage and reactivity depend on the underlying electrode material. The most active Cu surface is achieved on a Pd substrate for both the hydrogen evolution reaction and the catalytic reduction of ferricyanide ions with thiosulphate ions. This demonstrates that the highly ordered porous structure on the micron scale, which typifies the morphology achievable with the hydrogen bubble templating method, is not required to produce the most effective material.

Relevance: 100.00%

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG) and Mode Central Gap (MCG) methods, against MLE for their fidelity in recovering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulated mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulated mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and is computationally simpler and more efficient than MLE. The SWCG method performs robustly under high flows, but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE against those from MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
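The event-based setup can be sketched as follows. The distributions, parameter values and the naive midpoint estimator are illustrative assumptions, not the paper's exact ACG/SWCG/MCG definitions; note that the midpoint average is biased high when offered gaps are exponential, which is precisely why carefully constructed estimators such as MLE matter:

```python
import random

def simulate_driver(t_c, rate, rng):
    """Offer exponentially distributed gaps until one exceeds the driver's
    critical gap t_c; return (maximum rejected gap, accepted gap)."""
    max_rejected = 0.0
    while True:
        gap = rng.expovariate(rate)
        if gap >= t_c:
            return max_rejected, gap
        max_rejected = max(max_rejected, gap)

def central_gap_estimate(mean_tc=4.0, sd_tc=0.8, rate=0.30,
                         n_drivers=300, seed=1):
    """Naive estimator: average the midpoint of (max rejected, accepted)
    over all drivers. Distribution choices here are illustrative only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_drivers):
        t_c = max(0.5, rng.gauss(mean_tc, sd_tc))   # this driver's critical gap
        rejected, accepted = simulate_driver(t_c, rate, rng)
        total += (rejected + accepted) / 2.0
    return total / n_drivers
```

Running this with a true mean critical gap of 4 s returns an estimate noticeably above 4 s, illustrating why the choice of estimator, and not just the simulation, must be validated.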

Relevance: 100.00%

Abstract:

The technique of femoral cement-in-cement revision is well established, but there are no previous series reporting its use on the acetabular side at the time of revision total hip arthroplasty. We describe the surgical technique and report the outcome of 60 consecutive cement-in-cement revisions of the acetabular component at a mean follow-up of 8.5 years (range 5-12 years). All had a radiologically and clinically well-fixed acetabular cement mantle at the time of revision. Twenty-nine patients died; no case was lost to follow-up. The two most common indications for acetabular revision were recurrent dislocation (77%) and to complement a femoral revision (20%). There were two cases of aseptic cup loosening (3.3%) requiring re-revision; no other hip was clinically or radiologically loose (96.7%) at latest follow-up. One case was re-revised for infection, four for recurrent dislocation and one for disarticulation of a constrained component. At 5 years, the Kaplan-Meier survival rate was 100% for aseptic loosening and 92.2% (95% CI 84.8-99.6%) with revision for all causes as the endpoint. These results support the use of the cement-in-cement revision technique in appropriate cases on the acetabular side. Theoretical advantages include preservation of bone stock, reduced operating time, reduced risk of complications and durable fixation.
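Survival figures such as those quoted are produced by the Kaplan-Meier product-limit estimator. A minimal sketch with hypothetical follow-up data (not this series' data, and ignoring tie-handling conventions):

```python
def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up in years;
    events: 1 = failure (e.g. revision), 0 = censored (death, end of study)."""
    s = 1.0
    curve = []
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:
            s *= (at_risk - 1) / at_risk  # one failure among those still at risk
        curve.append((t, s))
        at_risk -= 1
    return curve

# hypothetical: one revision at 2 years among five hips, the rest censored
curve = kaplan_meier([1.0, 2.0, 3.0, 4.0, 5.0], [0, 1, 0, 0, 0])
```

Censoring (here, deaths and ongoing follow-up) is what distinguishes this estimate from a raw failure percentage.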

Relevance: 100.00%

Abstract:

Course evaluations are now a serious matter for universities seeking to meet stakeholder needs and expectations, and to support quality assurance, improvement and strategic decision making. Typically, students are invited to participate in surveys on how well the design and delivery of a course meet predetermined learning objectives, on the quality of teaching, and on the types of improvements needed for future deliveries. We used the Most Significant Change technique to gather data on the impact of a leadership course on 18 Pacific Islanders who completed a Master of Education (Educational Leadership). Participants' views highlighted impacts that were significant to the students and their workplaces. The findings demonstrate that the Most Significant Change technique offers a more comprehensive understanding of the impact of leadership development courses.

Relevance: 100.00%

Abstract:

Despite the extent of work done on modelling port water collisions, little research effort has been devoted to modelling collisions at port anchorages. This paper aims to fill this important gap in the literature by applying the Navigation Traffic Conflict Technique (NTCT) to measure collision potentials in anchorages and to examine the factors contributing to collisions. Building on the principles of the NTCT, a collision potential measurement model and a collision potential prediction model were developed. These models were illustrated using vessel movement data from the anchorages in Singapore port waters. Results showed that the measured collision potentials are in close agreement with those perceived by harbour pilots. Higher collision potentials were found in anchorages attached to the shoreline and to international fairways, but not in those attached to confined waters. Higher operating speeds, larger numbers of isolated danger marks and daytime conditions were associated with reductions in collision potentials.

Relevance: 100.00%

Abstract:

A novel, highly selective resonance light scattering (RLS) method was developed for the analysis of phenol in different types of industrial water. An important aspect of the method was the use of graphene quantum dots (GQDs), initially obtained from the pyrolysis of citric acid dissolved in aqueous solution. In the presence of horseradish peroxidase (HRP) and H₂O₂, the GQDs were found to react quantitatively with phenol such that the RLS spectral band (310 nm) was quantitatively enhanced as a consequence of the interaction between the GQDs and the quinone formed in the above reaction. The novel analytical method was demonstrated to have better selectivity and sensitivity for the determination of phenol in water than other analytical methods found in the literature. Trace amounts of phenol were detected over the linear ranges of 6.00×10⁻⁸–2.16×10⁻⁶ M and 2.40×10⁻⁶–2.88×10⁻⁵ M with a detection limit of 2.20×10⁻⁸ M. In addition, three different spiked waste water samples and two untreated lake water samples were analysed for phenol, with satisfactory results obtained using the novel, sensitive and rapid RLS method.
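Detection limits like the one reported are conventionally derived from a linear calibration as 3σ(blank)/slope. The sketch below uses synthetic calibration points spanning the lower linear range; the intensities and blank noise are hypothetical, not the paper's measurements:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# synthetic points spanning the reported lower linear range (mol/L)
concs   = [6.0e-8, 5.0e-7, 1.0e-6, 1.5e-6, 2.16e-6]
signals = [12.0, 100.0, 200.0, 300.0, 432.0]   # hypothetical RLS intensities

slope, intercept = linear_fit(concs, signals)
sd_blank = 1.5                                  # hypothetical blank noise
lod = 3 * sd_blank / slope                      # 3-sigma detection limit
```

With real data the slope and blank standard deviation come from replicate measurements, and the quoted 2.20×10⁻⁸ M limit would follow from the same formula.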

Relevance: 100.00%

Abstract:

A simple one-step electrodeposition method was used to modify a glassy carbon electrode (GCE) with Cu-doped gold nanoparticles (AuNPs), i.e. to form a Cu@AuNPs/GCE. This electrode was characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD) techniques. Eugenol was electrocatalytically oxidized at the Cu@AuNPs/GCE; in comparison with the behavior at the bare GCE, the corresponding oxidation peak current was enhanced and the oxidation potential was shifted to lower values. The electrochemical behavior of eugenol at the Cu@AuNPs/GCE was investigated using the cyclic voltammetry (CV) technique and, in order to confirm the electrochemical reaction mechanism for o-methoxy phenols, CVs for catechol, guaiacol and vanillin were investigated in turn. Based on this work, an electrochemical reaction mechanism for o-methoxy phenols was suggested, and the Cu@AuNPs/GCE was successfully employed for the analysis of eugenol in food samples.

Relevance: 100.00%

Abstract:

For decades, marketing and marketing research have been based on a concept of consumer behaviour that is deeply embedded in a linear notion of marketing activities. With increasing regularity, key organising frameworks for marketing and marketing activities are being challenged by academics and practitioners alike. In turn, this has led to the search for new approaches and tools to help marketers understand the interaction among attitudes, emotions and product/brand choice. More recently, the approach developed by Harvard professor Gerald Zaltman, the Zaltman Metaphor Elicitation Technique (ZMET), has gained considerable interest. This paper seeks to demonstrate the effectiveness of this alternative qualitative method, using a non-conventional approach, thus providing a useful contribution to the qualitative research area.

Relevance: 100.00%

Abstract:

One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly war injuries, has been increasing. Typically, the trauma amputee population, including war-related amputees, is otherwise healthy and active, and desires to return to employment and a usual lifestyle. Consequently, there is a growing need to restore long-term mobility and limb function to this population. Traditionally, transfemoral amputees are provided with an artificial or prosthetic leg consisting of a fabricated socket, a knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb, including pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur, and a second component connects the implant to the prosthesis. A period of time is required for the implant to become fully attached to the bone, a process called osseointegration (OI), before it can withstand applied load; the prosthesis can then be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention, and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and for the rehabilitation exercises to be completed, it can take up to twelve months after implant insertion for an amputee to be able to bear load and walk unaided.

The long rehabilitation time is a significant disadvantage of TFOI and may be impeding wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant could withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure, and changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently, vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI; however, further work is needed to assess the potential of the technique and fully characterise the femur-implant system.

The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis for detecting the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models. The femur-implant models, representing stages one to four of OI development, were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated that the technique had limited capability in distinguishing between different interface conditions: the fundamental bending mode did not alter with interfacial changes. However, higher modes were able to track chronological changes in interface condition through changes in natural frequency, although no single modal parameter could uniquely distinguish between interface conditions. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. The boundary conditions therefore need to be held constant between tests in order for detected modal parameter changes to be attributed to interface condition changes.

A three-dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model matched the experimental frequencies within 20%, and the FE and experimental mode shapes were similar, so the FE model successfully captured the dynamic response of the physical system. As with the experimental modal analysis, the fundamental bending mode of the FE model did not alter with changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). The FE model therefore provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated that the method may only be able to distinguish between early and late OI progression.

The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore, the FE model, like the experimental modal analysis, indicated that the boundary conditions must be held constant between tests for detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time; consequently, any longitudinal change in frequency detected by the modal analysis technique is unlikely to be attributable exclusively to changes at the femur-implant interface. Further development of the modal analysis technique would therefore require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
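The underlying principle, that a stiffening interface shifts natural frequencies, can be illustrated with a deliberately crude two-degree-of-freedom lumped model (a stand-in for the thesis's beam and FE models; all masses and stiffnesses are illustrative assumptions). The femur mass is grounded through a boundary spring k_b, and the implant mass couples to it through an interface spring k_i representing OI maturity:

```python
import math

def natural_freqs_hz(m_femur, m_implant, k_boundary, k_interface):
    """Natural frequencies (Hz) of a 2-DOF chain:
    ground -k_b- femur mass -k_i- implant mass.
    det(K - w^2 M) = 0 reduces to a quadratic in w^2."""
    a = m_femur * m_implant
    b = -(m_femur * k_interface + m_implant * (k_boundary + k_interface))
    c = k_boundary * k_interface
    disc = math.sqrt(b * b - 4 * a * c)
    w2 = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(w) / (2 * math.pi) for w in w2]

# illustrative values only: frequencies rise as the interface stiffens
early = natural_freqs_hz(1.0, 0.2, 1.0e7, 1.0e6)   # poorly integrated
late  = natural_freqs_hz(1.0, 0.2, 1.0e7, 1.0e9)   # well integrated
```

In this toy model, changing the boundary spring k_b also moves both frequencies, mirroring the thesis's finding that boundary conditions can confound interface-driven frequency shifts.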

Relevance: 100.00%

Abstract:

Prevention and safety promotion programmes. Traditionally, in-depth investigations of crash risks are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group under the same set of explanatory variables. The combination of these two techniques therefore provides relative measures of crash risk under various influences of roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of results for different combinations of interactive factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the quasi-induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
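The quasi-induced exposure idea can be sketched directly: in two-unit crashes, the not-at-fault parties serve as a proxy for exposure, so a group's relative crash involvement ratio is its share of at-fault involvements divided by its share of not-at-fault involvements. The counts below are hypothetical, not the Brisbane data:

```python
def relative_involvement_ratios(at_fault, not_at_fault):
    """Quasi-induced exposure: at-fault share / not-at-fault share per group."""
    af_total = sum(at_fault.values())
    naf_total = sum(not_at_fault.values())
    return {group: (at_fault[group] / af_total) / (not_at_fault[group] / naf_total)
            for group in at_fault}

# hypothetical counts extracted from two-vehicle crash records
at_fault     = {"motorcycle": 120, "car": 880}
not_at_fault = {"motorcycle": 60,  "car": 940}
rir = relative_involvement_ratios(at_fault, not_at_fault)
# a ratio above 1 indicates over-involvement relative to induced exposure
```

Stratifying such counts by the roadway, environmental and traffic factors in the log-linear model is what yields the conditional risk measures described above.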

Relevance: 100.00%

Abstract:

Nutrition interventions, in the form of both self-management education and individualised diet therapy, are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes to typical intake in order to facilitate recording. This thesis investigated the use of information and communication technologies (ICT) to overcome the limitations of current approaches to the nutritional management of T2DM, in particular through the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education, was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age(±SD)=56.8±8.1 years; mean(±SD) BMI=34.2±7.0 kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% of energy intake, in body weight of 0.7 kg, and in waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in interpreting the results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges of measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age=64.7±3.8 years; BMI=33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently omitted from the Nutricam record; however, forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications were made to the method to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.

Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age=24.7±9.1 years; BMI=21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced the mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. The findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 male; age=23.3±5.1 years; BMI=20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day; p=0.331). For Record 2, all subjects used the DEAT, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake were evaluated in a sample of 10 adults (6 male; age=61.2±6.9 years; BMI=31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with EI:TEE ratios of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found between the two dietary measures across energy and macronutrients, except fat (r=0.24). High agreement was observed between dietitians for estimates of energy and macronutrients derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods.

This research program explored two novel approaches, utilising distinct technologies, to aid the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.

Relevance: 100.00%

Abstract:

The favourable scaffold for bone tissue engineering should have desired characteristic features, such as adequate mechanical strength and three-dimensional open porosity, which guarantee a suitable environment for tissue regeneration. The design of such complex structures as bone scaffolds is a challenge for investigators; one aim is to achieve the best possible ratio of mechanical strength to degradation rate. In this paper we attempt to use numerical modelling to evaluate material properties for designing a bone tissue engineering scaffold fabricated via the fused deposition modelling technique. For our studies the standard genetic algorithm, an efficient method of discrete optimization, was used. For the fused deposition modelling scaffold, each individual strut was scrutinized for its role in the architecture, the structural support it provides, and its contribution to the overall scaffold. The goal of the study was to create a numerical tool to help achieve the desired behaviour of tissue-engineered scaffolds, and our results showed that this could be achieved efficiently by using different materials for individual struts. To represent the many ways in which scaffold mechanical function loss could proceed, an exemplary set of different desirable scaffold stiffness loss functions was chosen. © 2012 John Wiley & Sons, Ltd.
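The discrete optimization described above can be sketched as follows, under an assumed (illustrative) exponential stiffness-decay model rather than the paper's FE-based evaluation: each gene assigns a material to one strut, and fitness penalises deviation of the scaffold's stiffness-loss trajectory from a target curve.

```python
import random

MATERIALS = [0.02, 0.10, 0.30]      # hypothetical stiffness loss per time step
N_STRUTS, STEPS = 12, 10
TARGET = [1.0 - 0.05 * t for t in range(STEPS)]   # desired linear stiffness loss

def stiffness_curve(genome):
    """Normalised scaffold stiffness over time; struts act in parallel."""
    return [sum((1.0 - MATERIALS[g]) ** t for g in genome) / len(genome)
            for t in range(STEPS)]

def fitness(genome):
    return -sum((k - kt) ** 2
                for k, kt in zip(stiffness_curve(genome), TARGET))

def evolve(pop_size=40, gens=60, p_mut=0.10, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(MATERIALS)) for _ in range(N_STRUTS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_STRUTS)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_STRUTS):            # point mutation
                if rng.random() < p_mut:
                    child[i] = rng.randrange(len(MATERIALS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting the fitness evaluation would come from structural analysis of each strut's contribution rather than this toy parallel-spring decay model, but the chromosome encoding and search loop are of this general form.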