924 results for PARTITION-COEFFICIENT
Abstract:
Introduction: Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm−1). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image. Methods: 18 excised human femora had previously been quantitative computed tomography scanned, from which 2D BMD-equivalent radiographic images were derived, and mechanically tested to failure in a stance-loading configuration. A 3D proximal femur shape was generated from each 2D radiographic image and used to construct 3D-FEA models. Results: The coefficient of determination (R²) to predict failure load was 54.5% for BMD and 80.4% for 3D-FEXI. Conclusions: This ex vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD. This approach may be readily extended to routine clinical BMD images derived by dual energy X-ray absorptiometry. Crown Copyright © 2009 Published by Elsevier Ltd on behalf of IPEM. All rights reserved.
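The headline result above is a coefficient of determination (R²) between a predictor (BMD or 3D-FEXI stiffness) and measured failure load. As a minimal sketch of how such a figure is computed, the following uses the identity that R² for a simple linear regression equals the squared Pearson correlation; the data values are invented for illustration only and are not the study's data.

```python
def r_squared(x, y):
    # Coefficient of determination for a simple linear regression of y on x,
    # equal here to the squared Pearson correlation between the two series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical BMD values (g/cm^2) and measured failure loads (N),
# purely illustrative numbers:
bmd = [0.71, 0.85, 0.62, 0.93, 0.78, 0.66]
failure_load = [3100.0, 4200.0, 2600.0, 4900.0, 3600.0, 2900.0]
r2_percent = 100.0 * r_squared(bmd, failure_load)
```

In the study's terms, a higher R² for the 3D-FEXI stiffness than for BMD means the FEA-derived measure explains more of the variance in experimentally measured failure load.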
Abstract:
Objective: To examine the reliability and validity of the Alcohol Use Disorders Identification Test (AUDIT) compared to a structured diagnostic interview, the Composite International Diagnostic Interview (CIDI; 12-month version) in psychiatric patients with a diagnosis of schizophrenia. Method: Patients (N = 71, 53 men) were interviewed using the CIDI (Alcohol Misuse Section; 12-month version) and then completed the AUDIT. Results: The CIDI identified 32.4% of the sample as having an alcohol use disorder. Of these, 5 (7.0%) met diagnostic criteria for harmful use of alcohol, 1 (1.4%) met diagnostic criteria for alcohol abuse and 17 (23.9%) met diagnostic criteria for alcohol dependence. The AUDIT was found to have good internal reliability (coefficient = 0.85). An AUDIT cutoff of greater than or equal to 8 had a sensitivity of 87% and specificity of 90% in detecting CIDI-diagnosed alcohol disorders. All items except Item 9 contributed significantly to discriminant validity. Conclusions: The findings replicate and extend previous findings of high rates of alcohol use disorders in people with severe mental illness. The AUDIT was found to be reliable and valid in this sample and can be used with confidence as a screening instrument for alcohol use disorders in people with schizophrenia.
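The sensitivity and specificity figures quoted above come from cross-tabulating screen results at the cutoff against the reference diagnosis. A minimal sketch of that calculation, with invented toy data rather than the study's records:

```python
def screening_metrics(scores, disorder, cutoff=8):
    # Classify each case as screen-positive when the AUDIT score >= cutoff,
    # then compare against the reference (here, a CIDI-style) diagnosis.
    tp = sum(s >= cutoff and d for s, d in zip(scores, disorder))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, disorder))
    tn = sum(s < cutoff and not d for s, d in zip(scores, disorder))
    fn = sum(s < cutoff and d for s, d in zip(scores, disorder))
    sensitivity = tp / (tp + fn)  # proportion of true disorders detected
    specificity = tn / (tn + fp)  # proportion of non-cases screened out
    return sensitivity, specificity
```

Raising the cutoff trades sensitivity for specificity, which is why validation studies like this one report both at the recommended threshold.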
Abstract:
Purpose: To determine the subbasal nerve density and tortuosity at 5 corneal locations and to investigate whether these microstructural observations correlate with corneal sensitivity. Method: Sixty eyes of 60 normal human subjects were recruited into 1 of 3 age groups, group 1: aged <35 years, group 2: aged 35–50 years, and group 3: aged >50 years. All eyes were examined using slit-lamp biomicroscopy, noncontact corneal esthesiometry, and slit scanning in vivo confocal microscopy. Results: The mean subbasal nerve density and the mean corneal sensitivity were greatest centrally (14,731 ± 6056 mm/mm² and 0.38 ± 0.21 millibars, respectively) and lowest in the nasal mid periphery (7850 ± 4947 mm/mm² and 0.49 ± 0.25 millibars, respectively). The mean subbasal nerve tortuosity coefficient was greatest in the temporal mid periphery (27.3 ± 6.4) and lowest in the superior mid periphery (19.3 ± 14.1). There was no significant difference in mean total subbasal nerve density between age groups. However, corneal sensation (P = 0.001) and subbasal nerve tortuosity (P = 0.004) demonstrated significant differences between age groups. Subbasal nerve density only showed significant correlations with corneal sensitivity threshold in the temporal cornea and with subbasal nerve tortuosity in the inferior and nasal cornea. However, these correlations were weak. Conclusions: This study quantitatively analyzes living human corneal nerve structure and an aspect of nerve function. There is no strong correlation between subbasal nerve density and corneal sensation. This study provides useful baseline data for the normal living human cornea at central and mid-peripheral locations.
Abstract:
Purpose To assess the repeatability and validity of lens densitometry derived from the Pentacam Scheimpflug imaging system. Setting Eye Clinic, Queensland University of Technology, Brisbane, Australia. Methods This prospective cross-sectional study evaluated 1 eye of subjects with or without cataract. Scheimpflug measurements and slitlamp and retroillumination photographs were taken through a dilated pupil. Lenses were graded with the Lens Opacities Classification System III. Intraobserver and interobserver reliability of 3 observers performing 3 repeated Scheimpflug lens densitometry measurements each was assessed. Three lens densitometry metrics were evaluated: linear, for which a line was drawn through the visual axis and a mean lens densitometry value given; peak, which is the point at which lens densitometry is greatest on the densitogram; 3-dimensional (3D), in which a fixed, circular 3.0 mm area of the lens is selected and a mean lens densitometry value given. Bland and Altman analysis of repeatability for multiple measures was applied; results were reported as the repeatability coefficient and relative repeatability (RR). Results Twenty eyes were evaluated. Repeatability was high. Overall, interobserver repeatability was marginally lower than intraobserver repeatability. The peak was the least reliable metric (RR 37.31%) and 3D, the most reliable (RR 5.88%). Intraobserver and interobserver lens densitometry values in the cataract group were slightly less repeatable than in the noncataract group. Conclusion The intraobserver and interobserver repeatability of Scheimpflug lens densitometry was high in eyes with cataract and eyes without cataract, which supports the use of automated lens density scoring using the Scheimpflug system evaluated in the study.
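The repeatability coefficient reported above follows Bland and Altman's approach for repeated measures. One common formulation pools the within-subject variance across subjects and takes RC = 1.96·√2·Sw; relative repeatability then expresses RC as a percentage of the overall mean. This is a hedged sketch of that formulation (the paper's exact computation may differ in detail), with invented numbers:

```python
import math

def repeatability(measurements):
    # measurements: one list of repeated readings per subject.
    # Pool the within-subject sum of squares to get Sw, then
    # RC = 1.96 * sqrt(2) * Sw (a common Bland-Altman formulation).
    ss, df, all_vals = 0.0, 0, []
    for reps in measurements:
        m = sum(reps) / len(reps)
        ss += sum((r - m) ** 2 for r in reps)
        df += len(reps) - 1
        all_vals.extend(reps)
    sw = math.sqrt(ss / df)
    rc = 1.96 * math.sqrt(2) * sw
    rr = 100.0 * rc / (sum(all_vals) / len(all_vals))  # relative repeatability, %
    return rc, rr
```

A lower RR means repeated densitometry readings cluster more tightly, which is why the 3D metric (RR 5.88%) was judged more reliable than the peak metric (RR 37.31%).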
Abstract:
Braking or traction torque is regarded as an important source of wheelset skid and a potential source of derailment risk that adversely affects the safety of train operations; therefore, this research examines the effect of braking/traction torque on the longitudinal and lateral dynamics of wagons. This paper reports how train operations safety could be adversely affected by various braking strategies. The sensitivity of wagon dynamics to braking severity is illustrated through numerical examples. The influence of the wheel/rail interface friction coefficient and the effects of two types of track geometry defects on wheel unloading ratio and wagon pitch are also discussed.
Abstract:
This paper presents the details of experimental and numerical studies on the shear behaviour of a recently developed, cold-formed steel beam known as LiteSteel Beam (LSB). The LSB sections are produced by a patented manufacturing process involving simultaneous cold-forming and electric resistance welding. It has a unique shape of a channel beam with two rectangular hollow flanges. Recent research has demonstrated the presence of increased shear capacity of LSBs due to the additional fixity along the web to flange juncture, but the current design rules ignore this effect. Therefore they were modified by including a higher elastic shear buckling coefficient. In the present study, the ultimate shear capacity results obtained from the experimental and numerical studies of 10 different LSB sections were compared with the modified shear capacity design rules. It was found that they are still conservative as they ignore the presence of post-buckling strength. Therefore the design rules were further modified to include the available post-buckling strength. Suitable design rules were also developed under the direct strength method format. This paper presents the details of this study and the results including the final design rules for the shear capacity of LSBs.
Abstract:
Objectives. To evaluate the performance of the dynamic-area high-speed videokeratoscopy technique in the assessment of tear film surface quality with and without the presence of soft contact lenses on eye. Methods. Retrospective data from a tear film study using basic high-speed videokeratoscopy, captured at 25 frames per second (Kopf et al., 2008, J Optom), were used. Eleven subjects had tear film analysis conducted in the morning, midday and evening on the first and seventh day of one week of no lens wear. Five of the eleven subjects then completed an extra week of hydrogel lens wear followed by a week of silicone hydrogel lens wear. Analysis was performed on a 6 second period of the inter-blink recording. The dynamic-area high-speed videokeratoscopy technique uses the maximum available area of Placido ring pattern reflected from the tear interface and eliminates regions of disturbance due to shadows from the eyelashes. A value of tear film surface quality was derived using image processing techniques, based on the quality of the reflected ring pattern orientation. Results. The group mean tear film surface quality and the standard deviations for each of the conditions (bare eye, hydrogel lens, and silicone hydrogel lens) showed a much lower coefficient of variation than previous methods (average reduction of about 92%). Bare eye measurements from the right and left eyes of eleven individuals showed high correlation values (Pearson’s correlation r = 0.73, p < 0.05). Repeated measures ANOVA across the 6 second period of measurement in the normal inter-blink period for the bare eye condition showed no statistically significant changes. However, across the 6 second inter-blink period with both contact lenses, statistically significant changes were observed (p < 0.001) for both types of contact lens material.
Overall, wearing hydrogel and silicone hydrogel lenses caused the tear film surface quality to worsen compared with the bare eye condition (repeated measures ANOVA, p < 0.0001 for both hydrogel and silicone hydrogel). Conclusions. The results suggest that the dynamic-area method of high-speed videokeratoscopy was able to distinguish and quantify the subtle, but systematic worsening of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions.
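The coefficient of variation used above to compare methods is the sample standard deviation normalised by the mean. A minimal, self-contained sketch (the input series here is illustrative, not study data):

```python
def coefficient_of_variation(values):
    # CoV = sample standard deviation / mean; a lower CoV indicates a
    # more stable estimate of tear film surface quality across frames.
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return sd / mean
```

An "average reduction of about 92%" in CoV means the dynamic-area method's frame-to-frame surface-quality values scatter far less around their mean than those of earlier methods.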
Abstract:
A major focus of research in nanotechnology is the development of novel, high throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited for directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed since these techniques can play an important role in nanotechnology. In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances surface diffusion of adparticles so that they rapidly diffuse away from the heated regions.
Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, where the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in approximately a sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface, and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically by numerical solution of the thermal conduction equation that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers.
Furthermore, we propose a simple extension to this technique where spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
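The physical mechanism behind thermal tweezers is that thermally activated surface diffusion depends exponentially on local temperature (an Arrhenius law), so a sinusoidal temperature modulation produces an enormous contrast in hop rates between hot and cold regions. The sketch below illustrates this with generic parameter values (activation energy, attempt frequency, modulation amplitude and wavelength are all illustrative assumptions, not the thesis's figures):

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def hop_rate(T, Ea=0.8, nu0=1e13):
    # Thermally activated (Arrhenius) hop rate: nu0 * exp(-Ea / kB T).
    # Ea (eV) and nu0 (1/s) are illustrative, generic values.
    return nu0 * math.exp(-Ea / (KB * T))

def temperature(x, T0=300.0, dT=100.0, wavelength=500e-9):
    # Sinusoidal surface-temperature modulation along x (metres),
    # as produced by an interference pattern of two laser beams.
    return T0 + dT * math.cos(2 * math.pi * x / wavelength)

# Hop-rate contrast between an intensity maximum (x = 0) and a minimum
# (half a period away): adparticles leave hot regions vastly faster,
# driving net redistribution into the cool regions.
ratio = hop_rate(temperature(0.0)) / hop_rate(temperature(250e-9))
```

Even a ~100 K modulation about room temperature yields a hop-rate ratio of many orders of magnitude, which is consistent with the thesis's prediction of practically complete redistribution out of the heated regions.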
Abstract:
Visiting a modern shopping center has become an integral part of contemporary life. The rapid growth of shopping centers, transportation systems, and modern vehicles has given consumers more choices in where to shop. Although consumers visit shopping centers for many reasons, travel time and the size of the shopping center are important influences on how frequently customers visit. A survey of the customers of three major shopping centers in Surabaya was conducted to evaluate Ellwood’s model and Huff’s model. New exponent values of N = 0.48 and n = 0.50 were found for Ellwood’s model, while a coefficient of 0.267 and an additive value of 0.245 were found for Huff’s model.
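Huff's model, referenced above, expresses the probability that a customer visits a given center as that center's attraction (size divided by a power of travel time) relative to the total attraction of all competing centers. A minimal generic sketch follows; the size, travel-time, and exponent values are illustrative assumptions, not the calibrated Surabaya parameters:

```python
def huff_probabilities(sizes, travel_times, lam=2.0):
    # Huff's model: P_j = A_j / sum_k A_k, with attraction
    # A_j = size_j / travel_time_j ** lam. The exponent lam weights
    # the deterrent effect of travel time (illustrative value here).
    attractions = [s / t ** lam for s, t in zip(sizes, travel_times)]
    total = sum(attractions)
    return [a / total for a in attractions]

# A larger, closer center captures a higher share of visits:
probs = huff_probabilities(sizes=[200.0, 100.0], travel_times=[5.0, 10.0])
```

Because the probabilities are normalised over all centers in the choice set, adding a new center redistributes shares among the existing ones.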
Abstract:
To investigate whether venous occlusion plethysmography (VOP) may be used to measure high rates of arterial inflow associated with exercise, venous occlusions were performed at rest, and following dynamic handgrip exercise at 15, 30, 45, and 60 % of maximum voluntary contraction (MVC) in seven healthy males. The effect of including more than one cardiac cycle in the calculation of blood flow was assessed by comparing the cumulative blood flow over one, two, three, or four cardiac cycles. The inclusion of more than one cardiac cycle at 30 and 60 % MVC, and more than two cardiac cycles at 15 and 45 % MVC resulted in a lower blood flow compared to using only the first cardiac cycle (P < 0.05). Despite the small time interval over which arterial inflow was measured (~1 second), this did not affect the reproducibility of the technique. Reproducibility (coefficient of variation for arterial inflow over three trials) tended to be poorer at the higher workloads, although this was not significant (12.7 ± 6.6 %, 16.2 ± 7.3 %, and 22.9 ± 9.9 % for the 15, 30, and 45 % MVC workloads; P = 0.102). There was also a tendency for greater reproducibility with the inclusion of more cardiac cycles at the highest workload, but this did not reach significance (P = 0.070). In conclusion, when calculated over the first cardiac cycle only during venous occlusion, high rates of forearm blood flow (FBF) can be measured using VOP, and this can be achieved without a significant decrease in the reproducibility of the measurement.
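In VOP, arterial inflow is conventionally derived from the initial slope of the limb-volume trace (percentage volume change per second) during occlusion, scaled to ml per 100 ml of tissue per minute. The sketch below fits a least-squares slope over a chosen cardiac-cycle window; the function name, window arguments, and data are illustrative assumptions, not the study's processing pipeline:

```python
def arterial_inflow(times, volume_pct, t_start, t_end):
    # Least-squares slope of the limb-volume trace (% change) over the
    # window [t_start, t_end] (e.g. the first cardiac cycle), converted
    # from %/s to ml per 100 ml tissue per minute by the factor 60.
    pts = [(t, v) for t, v in zip(times, volume_pct) if t_start <= t <= t_end]
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    mv = sum(v for _, v in pts) / n
    num = sum((t - mt) * (v - mv) for t, v in pts)
    den = sum((t - mt) ** 2 for t, _ in pts)
    return (num / den) * 60.0
```

Restricting the window to the first cardiac cycle matters because, as the abstract notes, averaging over later cycles (when venous filling flattens the trace) systematically lowers the computed flow.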
Abstract:
Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver’s cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel-frequency cepstral coefficient based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy in the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting—together with the acoustic analysis—that it is possible to monitor the level of driver distraction directly from their speech.
Abstract:
Two series of novel ruthenium bipyridyl dyes incorporating sulfur-donor bidentate ligands with general formula [Ru(R-bpy)2(C2N2S2)] and [Ru(R-bpy)2(S2COEt)][NO3] (where R = H, CO2Et, CO2H; C2N2S2 = cyanodithioimidocarbonate and S2COEt = ethyl xanthogenate) have been synthesized and characterized spectroscopically, electrochemically and computationally. The acid derivatives in both series (C2N2S2 3 and S2COEt 6) were used as a photosensitizer in a dye-sensitized solar cell (DSSC) and the incident photon-to-current conversion efficiency (IPCE), overall efficiency (η) and kinetics of the dye/TiO2 system were investigated. It was found that 6 gave a higher efficiency cell than 3 despite the latter dye’s more favorable electronic properties, such as greater absorption range, higher molar extinction coefficient and large degree of delocalization of the HOMO. The transient absorption spectroscopy studies revealed that the recombination kinetics of 3 were unexpectedly fast, which was attributed to the terminal CN on the ligand binding to the TiO2, as evidenced by an absorption study of R = H and CO2Et dyes sensitized on TiO2, and hence leading to a lower efficiency DSSC.
Abstract:
Dynamic load sharing can be defined as a measure of the ability of a heavy vehicle multi-axle group to equalise load across its wheels under typical travel conditions; i.e. in the dynamic sense at typical travel speeds and operating conditions of that vehicle. Various attempts have been made to quantify the ability of heavy vehicles to equalise the load across their wheels during travel. One of these was the concept of the load sharing coefficient (LSC). Other metrics such as the dynamic load coefficient (DLC) have been used to compare one heavy vehicle suspension with another for potential road damage. This paper compares these metrics and determines a relationship between DLC and LSC with sensitivity analysis of this relationship. The shortcomings of these presently available metrics are discussed, and a new metric, the dynamic load equalisation (DLE) measure, is proposed.
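As a rough sketch of how the two metrics compared in this paper are commonly defined: the DLC is the coefficient of variation of the instantaneous dynamic wheel force, while the LSC rates each wheel's mean force against a nominal equal share of the axle-group load. The exact formulations and relationship derived in the paper may differ; the code below is an illustrative baseline only.

```python
def dlc(forces):
    # Dynamic load coefficient: standard deviation of the instantaneous
    # wheel-force time series divided by its mean (population form here).
    m = sum(forces) / len(forces)
    var = sum((f - m) ** 2 for f in forces) / len(forces)
    return var ** 0.5 / m

def lsc(mean_forces_per_wheel):
    # Load sharing coefficient per wheel: mean wheel force relative to the
    # nominal equal share across the group (1.0 = perfect load sharing).
    nominal = sum(mean_forces_per_wheel) / len(mean_forces_per_wheel)
    return [f / nominal for f in mean_forces_per_wheel]
```

A suspension with perfect dynamic load sharing would give an LSC of 1.0 for every wheel, while a larger DLC signals greater dynamic force variation and hence more potential road damage.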
Abstract:
The literature identifies several models that describe inter-phase mass transfer, key to the emission process. While the emission process is complex and these models may be more or less successful at predicting mass transfer rates, they identify three key variables for a system involving a liquid and an air phase in contact with it:
• a concentration (or partial pressure) gradient driving force;
• the fluid dynamic characteristics within the liquid and air phases; and
• the chemical properties of the individual components within the system.
In three applied research projects conducted prior to this study, samples collected with two well-known sampling devices resulted in very different odour emission rates. It was not possible to adequately explain the differences observed. It appeared likely, however, that the sample collection device might have artefact effects on the emission of odorants, i.e. the sampling device appeared to have altered the mass transfer process. This raised the obvious question: Where two different emission rates are reported for a single source (differing only in the selection of sampling device), and a credible explanation for the difference in emission rate cannot be provided, which emission rate is correct? This research project aimed to identify the factors that determine odour emission rates, the impact that the characteristics of a sampling device may exert on the key mass transfer variables, and ultimately, the impact of the sampling device on the emission rate itself. To meet these objectives, a series of targeted reviews, and laboratory and field investigations, were conducted. Two widely-used, representative devices were chosen to investigate the influence of various parameters on the emission process. These investigations provided insight into the odour emission process generally, and the influence of the sampling device specifically.
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole plume behaviour can thus be characterized and parameters are selected for a second set of fixed physical location measurements where the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by frozen and equilibrium limits but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of turbulent diffusivities for the conserved scalar mean and that for the r.m.s. was found to be approximately 1.
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
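The covariance statements above rest on a standard decomposition: for a second-order reaction A + B → products, the time-mean reaction rate splits into a mean-concentration product plus a concentration covariance, and turbulent unmixedness makes that covariance negative. The sketch below illustrates the decomposition with synthetic, perfectly segregated reactant series (not chamber data); closures that neglect or mis-model the covariance overestimate the mean rate, as the abstract reports for this flow.

```python
def mean_reaction_rate(a, b, k=1.0):
    # Decompose the mean rate of A + B -> products as
    # k * (mean(a) * mean(b) + cov(a, b)).
    # Segregated (anti-correlated) reactants give a negative covariance,
    # so the true mean rate falls below the naive k * mean(a) * mean(b).
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    return k * (ma * mb + cov), cov

# Perfectly segregated reactants: wherever A is present, B is absent.
rate, cov = mean_reaction_rate([2.0, 0.0], [0.0, 2.0])
```

In this extreme case the covariance exactly cancels the mean-product term, so no reaction occurs on average despite both mean concentrations being nonzero; real plumes, like the one studied here, sit between this segregated limit and the well-mixed limit.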