139 results for Basophil Degranulation Test -- methods
Abstract:
The profiled steel roof and wall cladding systems in Australia are commonly made of very thin, high-tensile steels and are crest-fixed with screw fasteners. A review of current literature and design standards indicated the need to improve the understanding of the behaviour of crest-fixed steel cladding systems under wind uplift/suction loading, in particular their local failures. Therefore, a detailed experimental study using a series of small-scale tests and some two-span cladding tests was conducted to investigate the local pull-through and dimpling failures in commonly used steel cladding systems. The applicability of the current design formulae for the pull-through strength of crest-fixed steel cladding systems was investigated first. An improved design formula was then developed in terms of the thickness and ultimate tensile strength of the steel cladding material and the diameter of the screw head or washer. This paper presents the details of this investigation and its results. A review of current design and test methods is also included.
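To make the parameters concrete, the following is a minimal sketch (not the improved formula developed in the paper) of a generic pull-through capacity check of the form F = k · t · d_w · f_u, using the familiar AISI/AS 4600-style coefficient k = 1.5; the sheeting values in the example are assumptions for illustration only.

```python
# Illustrative sketch only: a generic pull-through capacity of the form
# F = k * t * d_w * f_u, mirroring the parameters named in the abstract
# (sheet thickness, ultimate tensile strength, screw-head/washer diameter).
# The coefficient k = 1.5 follows the classic pull-over expression and is
# NOT the paper's improved formula.

def pull_through_capacity(t_mm: float, d_w_mm: float, f_u_mpa: float,
                          k: float = 1.5) -> float:
    """Return a nominal pull-through capacity in newtons.

    t_mm    -- base metal thickness of the cladding sheet (mm)
    d_w_mm  -- screw head or washer diameter (mm)
    f_u_mpa -- ultimate tensile strength of the sheet (MPa = N/mm^2)
    k       -- empirical coefficient (1.5 in the classic pull-over formula)
    """
    return k * t_mm * d_w_mm * f_u_mpa


# Example (assumed values): 0.42 mm G550 sheeting with a 10 mm washer.
if __name__ == "__main__":
    print(f"{pull_through_capacity(0.42, 10.0, 550.0):.0f} N")
```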
Abstract:
This paper reviews current design standards and test methods for blast-resistant glazing design and compares a typical design outcome with that from comprehensive finite-element (FE) analysis. Design standards are conservative and are limited to the design of relatively small glazed panels. Standard test methods are expensive, create environmental pollution, and can classify the hazard ratings of only smaller glazed panels. Here the design of a laminated glass (LG) panel is carried out according to an existing design standard, and then its performance is examined using comprehensive FE modeling and analysis. Finite-element results indicate that both glass panes crack, the interlayer yields with little damage, and the sealant joints do not fail for the designed blast load. This failure pattern satisfies some of the requirements for minimal hazard rating in the design standard. It is evident that interlayer thickness and material properties are important during the post-crack stage of an LG panel, but they are not accounted for in the design standards. The new information generated in this paper will contribute toward an enhanced blast design of LG panels.
Abstract:
This report provides an evaluation of the implementation of the Polluter Pays Principle (PPP) – a principle of international environmental law – in the context of pollution from sugarcane farming affecting Australia’s Great Barrier Reef (GBR). The research was part of an experiment to test methods for evaluating the effectiveness of environmental laws. Overall, we found that whilst the PPP is reflected to a limited extent in Australian law (more so in Queensland law than at the national level), the behaviour one might expect in terms of implementing the principle was largely inadequate. Evidence of a longer term, explicit commitment to the PPP was particularly weak.
Abstract:
Rail track undergoes more complex loading patterns under moving traffic than roads do, owing to its multi-layered structure of continuous and discontinuous components, including the rail, sleepers, ballast layer, sub-ballast layer, and subgrade. The particle size distributions (PSDs) of the ballast, sub-ballast, and subgrade layers can be critical to the cyclic plastic deformation of rail track under moving traffic, and hence to frequent track degradation, especially at bridge transition zones. Conventional test approaches, namely static shear and cyclic single-point load tests, are however unable to replicate the actual loading patterns of a moving train. A multi-ring shear apparatus, a new type of torsional simple shear apparatus that can reproduce moving traffic conditions, was used in this study to investigate the influence of the particle size distribution of rail track layers on cyclic plastic deformation. Three particle size distributions, using glass beads, were examined under different loading patterns (cyclic single-point load and cyclic moving wheel load) to evaluate the cyclic plastic deformation of rail track under different loading methods. The results of these tests suggest that the particle size distributions of rail track structural layers have significant impacts on cyclic plastic deformation under moving train load. Further, the limitations of the conventional test methods used in laboratories to estimate the plastic deformation of rail track materials lead to underestimation of the plastic deformation of rail tracks.
Abstract:
These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and the algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie in traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing, and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is presented, providing readers with the knowledge to reproduce this hands-on training in their own academic or industrial environment.
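Two of the ingredients named above, an evolution-strategy style mutation loop and asynchronous evaluation of candidate solutions with simple Pareto selection, can be illustrated with a short, self-contained sketch. This is not the HAPEA code from the notes; the test problem, population sizes, and thread-pool based evaluation are illustrative assumptions.

```python
# A minimal sketch (not the HAPEA implementation) of an evolution-strategy
# loop with asynchronous evaluation and non-dominated (Pareto) selection.
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(x):
    # Placeholder bi-objective problem: sphere vs. shifted sphere.
    return (sum(v * v for v in x), sum((v - 1.0) ** 2 for v in x))

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def non_dominated(pop):
    return [p for p in pop if not any(dominates(q[1], p[1]) for q in pop if q is not p)]

def mutate(x, sigma=0.1):
    return [v + random.gauss(0.0, sigma) for v in x]

def es_sketch(dim=3, pop_size=8, generations=20, workers=4):
    parents = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        evaluated = [(x, fut.result()) for x, fut in
                     [(x, pool.submit(evaluate, x)) for x in parents]]
        for _ in range(generations):
            offspring = [mutate(x) for x, _ in evaluated]
            # Asynchronous evaluation: results are consumed as they complete.
            futures = {pool.submit(evaluate, x): x for x in offspring}
            for fut in as_completed(futures):
                evaluated.append((futures[fut], fut.result()))
            # Environmental selection: keep the non-dominated front, topped up
            # by the best remaining candidates on the first objective.
            front = non_dominated(evaluated)
            rest = sorted((p for p in evaluated if p not in front),
                          key=lambda p: p[1][0])
            evaluated = (front + rest)[:pop_size]
    return non_dominated(evaluated)

if __name__ == "__main__":
    for x, f in es_sketch():
        print([round(v, 3) for v in x], [round(v, 3) for v in f])
```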
Abstract:
The glass transition temperature of a spaghetti sample was measured by thermal and rheological methods as a function of water content.
Abstract:
One of the new challenges in aeronautics is combining and accounting for multiple disciplines while considering uncertainties or variability in the design parameters or operating conditions. This paper describes a methodology for robust multidisciplinary design optimisation when there is uncertainty in the operating conditions. The methodology, which is based on canonical evolutionary algorithms, is enhanced by coupling it with an uncertainty analysis technique. The paper illustrates the use of this methodology on two practical test cases related to Unmanned Aerial Systems (UAS). These are ideal candidates due to the multi-physics involved and the variability of the missions to be performed. Results obtained from the optimisation show that the method is effective in finding useful Pareto non-dominated solutions and demonstrates the use of robust design techniques.
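A hedged sketch of the robust-evaluation idea described above: each candidate design is scored over sampled operating conditions, and the mean and spread of performance become two objectives that a multi-objective optimiser can trade off. The performance model, sampling distribution, and designs below are placeholders, not the paper's UAS test cases.

```python
# A minimal sketch of robust evaluation under uncertain operating conditions
# (assumed details): score a design over sampled conditions and report the
# mean and standard deviation of performance as two objectives.
import random
import statistics

def performance(design, condition):
    # Placeholder model: penalty grows with distance from a condition-dependent optimum.
    return sum((d - condition) ** 2 for d in design)

def robust_objectives(design, n_samples=50, mean_condition=1.0, spread=0.2):
    # Uncertainty in the operating condition is modelled as a normal deviate.
    samples = [performance(design, random.gauss(mean_condition, spread))
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    nominal = [1.0, 1.0]       # tuned to the mean condition
    conservative = [0.9, 0.9]  # slightly detuned
    print("nominal     :", robust_objectives(nominal))
    print("conservative:", robust_objectives(conservative))
```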
Abstract:
Aims: This study investigated the effect of simulated visual impairment on the speed and accuracy of performance on a series of commonly used cognitive tests. Methods: Cognitive performance was assessed for 30 young, visually normal subjects (mean age 22.0 ± 3.1 years) using the Digit Symbol Substitution Test (DSST), the Trail Making Test (TMT) A and B, and the Stroop Colour Word Test under three visual conditions: normal vision and two levels of visually degrading filters (Vistech) administered in a random order. Distance visual acuity and contrast sensitivity were also assessed for each filter condition. Results: The visual filters, which degraded contrast sensitivity to a greater extent than visual acuity, significantly increased the time taken to complete the DSST and the TMT A and B (p < 0.05) but not the number of errors made, and affected only some components of the Stroop test. Conclusions: Reduced contrast sensitivity had a marked effect on the speed but not the accuracy of performance on commonly used cognitive tests, even in young individuals; the implications of these findings are discussed.
Abstract:
Purpose. To investigate the functional impact of amblyopia in children, the performance of amblyopic and age-matched control children on a clinical test of eye movements was compared. The influence of visual factors on test outcome measures was explored. Methods. Eye movements were assessed with the Developmental Eye Movement (DEM) test, in a group of children with amblyopia (n = 39; age, 9.1 ± 0.9 years) of different causes (infantile esotropia, n = 7; acquired strabismus, n = 10; anisometropia, n = 8; mixed, n = 8; deprivation, n = 6) and in an age-matched control group (n = 42; age, 9.3 ± 0.4 years). LogMAR visual acuity (VA), stereoacuity, and refractive error were also recorded in both groups. Results. No significant difference was found between the amblyopic and age-matched control group for any of the outcome measures of the DEM (vertical time, horizontal time, number of errors, and ratio of horizontal time to vertical time). The DEM measures were not significantly related to VA in either eye, level of binocular function (stereoacuity), history of strabismus, or refractive error. Conclusions. The performance of amblyopic children on the DEM, a commonly used clinical measure of eye movements, has not previously been reported. Under habitual binocular viewing conditions, amblyopia has no effect on DEM outcome scores despite significant impairment of binocular vision and decreased VA in both the better and worse eye.
Abstract:
The paper compares three different methods of inclusion of current phasor measurements by phasor measurement units (PMUs) in the conventional power system state estimator. For each of the three methods, comprehensive formulation of the hybrid state estimator in the presence of conventional and PMU measurements is presented. The performance of the state estimator in the presence of conventional measurements and optimally placed PMUs is evaluated in terms of convergence characteristics and estimator accuracy. Test results on the IEEE 14-bus and IEEE 300-bus systems are analyzed to determine the best possible method of inclusion of PMU current phasor measurements.
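As background, the conventional state estimator referred to above is typically a weighted least-squares (WLS) formulation, and hybrid schemes amount to stacking PMU measurements alongside conventional ones in the measurement vector, Jacobian, and weight matrix. A minimal sketch of one WLS iteration with a toy linear measurement model follows; the measurement functions and weights are illustrative assumptions, not the IEEE test-system data.

```python
# A minimal sketch of the WLS update that hybrid state estimators build on:
# dx = (H^T W H)^-1 H^T W (z - h(x)). Mixing conventional (SCADA) and PMU
# measurements means stacking both into z, h(x), H and W; the tiny
# two-measurement model below is purely illustrative.
import numpy as np

def wls_step(x, z, h_fun, H_fun, W):
    r = z - h_fun(x)                      # measurement residual
    H = H_fun(x)                          # measurement Jacobian
    G = H.T @ W @ H                       # gain matrix
    dx = np.linalg.solve(G, H.T @ W @ r)  # state correction
    return x + dx

# Illustrative linear "measurements" of a 2-element state.
h_fun = lambda x: np.array([x[0] + x[1], x[0] - 2.0 * x[1]])
H_fun = lambda x: np.array([[1.0, 1.0], [1.0, -2.0]])
W = np.diag([1.0 / 0.01**2, 1.0 / 0.02**2])   # weights = inverse error variances

x = np.zeros(2)
z = np.array([1.05, -0.95])
for _ in range(5):
    x = wls_step(x, z, h_fun, H_fun, W)
print(x)
```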
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
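As a pointer to the mechanics involved, here is a minimal sketch of split (product-code) vector quantization: the parameter vector is divided into sub-vectors, each encoded against its own small codebook, and the resulting index stream is what a lossless coder could further compress. The codebooks below are random stand-ins, not trained speech-spectrum codebooks.

```python
# A minimal sketch of split (product-code) VQ encoding and decoding.
import numpy as np

def nearest(codebook, v):
    # Index of the codevector with minimum squared error.
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

def split_vq_encode(x, codebooks):
    idx, start = [], 0
    for cb in codebooks:
        dim = cb.shape[1]
        idx.append(nearest(cb, x[start:start + dim]))
        start += dim
    return idx

def split_vq_decode(idx, codebooks):
    return np.concatenate([cb[i] for i, cb in zip(idx, codebooks)])

rng = np.random.default_rng(0)
# Two sub-codebooks covering a 10-dimensional spectral parameter vector (8 + 8 bits).
codebooks = [rng.normal(size=(256, 5)), rng.normal(size=(256, 5))]
x = rng.normal(size=10)
indices = split_vq_encode(x, codebooks)
x_hat = split_vq_decode(indices, codebooks)
print(indices, float(np.mean((x - x_hat) ** 2)))
```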
Abstract:
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
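The mean-deviation metric mentioned above is, in essence, an average discrepancy between the driven lateral position and a normative lane-change path. A simplified sketch follows, with made-up signals and without the standardised LCT scoring details.

```python
# A simplified sketch of a mean-deviation style lane-change metric: compare
# the driven lateral position sample by sample against a normative path and
# report the mean absolute difference. Signals below are illustrative only.
import numpy as np

def mean_deviation(lateral_position, reference_path):
    lateral_position = np.asarray(lateral_position, dtype=float)
    reference_path = np.asarray(reference_path, dtype=float)
    return float(np.mean(np.abs(lateral_position - reference_path)))

# Toy lane change: the reference moves from lane centre 0 m to 3.5 m; the
# driver responds late, which inflates the deviation score.
reference = np.concatenate([np.zeros(50), np.linspace(0.0, 3.5, 20), np.full(50, 3.5)])
driven = np.concatenate([np.zeros(60), np.linspace(0.0, 3.5, 20), np.full(40, 3.5)])
print(f"mean deviation = {mean_deviation(driven, reference):.2f} m")
```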
Abstract:
OBJECTIVE: The accurate quantification of human diabetic neuropathy is important to define at-risk patients, anticipate deterioration, and assess new therapies. RESEARCH DESIGN AND METHODS: A total of 101 diabetic patients and 17 age-matched control subjects underwent neurological evaluation, neurophysiology tests, quantitative sensory testing, and evaluation of corneal sensation and corneal nerve morphology using corneal confocal microscopy (CCM). RESULTS: Corneal sensation decreased significantly (P = 0.0001) with increasing neuropathic severity and correlated with the neuropathy disability score (NDS) (r = 0.441, P < 0.0001). Corneal nerve fiber density (NFD) (P < 0.0001), nerve fiber length (NFL) (P < 0.0001), and nerve branch density (NBD) (P < 0.0001) decreased significantly with increasing neuropathic severity and correlated with NDS (NFD r = −0.475, P < 0.0001; NBD r = −0.511, P < 0.0001; and NFL r = −0.581, P < 0.0001). NBD and NFL demonstrated a significant and progressive reduction with worsening heat pain thresholds (P = 0.01). Receiver operating characteristic curve analysis for the diagnosis of neuropathy (NDS >3) defined an NFD of <27.8/mm2 with a sensitivity of 0.82 (95% CI 0.68–0.92) and specificity of 0.52 (0.40–0.64), and for detecting patients at risk of foot ulceration (NDS >6) defined an NFD cutoff of <20.8/mm2 with a sensitivity of 0.71 (0.42–0.92) and specificity of 0.64 (0.54–0.74). CONCLUSIONS: CCM is a noninvasive clinical technique that may be used to detect early nerve damage and stratify diabetic patients with increasing neuropathic severity. Established diabetic neuropathy leads to pain and foot ulceration. Detecting neuropathy early may allow intervention with treatments to slow or reverse this condition (1). Recent studies suggested that small unmyelinated C-fibers are damaged early in diabetic neuropathy (2–4) but can only be detected using invasive procedures such as sural nerve biopsy (4,5) or skin-punch biopsy (6–8). Our studies have shown that corneal confocal microscopy (CCM) can identify early small nerve fiber damage and accurately quantify the severity of diabetic neuropathy (9–11). We have also shown that CCM relates to intraepidermal nerve fiber loss (12) and a reduction in corneal sensitivity (13) and detects early nerve fiber regeneration after pancreas transplantation (14). Recently we have also shown that CCM detects nerve fiber damage in patients with Fabry disease (15) and idiopathic small fiber neuropathy (16) when results of electrophysiology tests and quantitative sensory testing (QST) are normal. In this study we assessed corneal sensitivity and corneal nerve morphology using CCM in diabetic patients stratified for the severity of diabetic neuropathy using neurological evaluation, electrophysiology tests, and QST. This enabled us to compare CCM and corneal esthesiometry with established tests of diabetic neuropathy and define their sensitivity and specificity to detect diabetic patients with early neuropathy and those at risk of foot ulceration.
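To show how a cutoff such as NFD < 27.8/mm2 translates into the quoted sensitivity and specificity, a small sketch follows; the data are synthetic and the binary labels stand in for the NDS-based reference classification.

```python
# A minimal sketch of sensitivity/specificity at a diagnostic cutoff: cases
# with NFD below the cutoff are called test-positive and compared against a
# reference label (e.g. NDS > 3). Synthetic data, not the study's.
import numpy as np

def sens_spec(values, has_neuropathy, cutoff):
    values = np.asarray(values, dtype=float)
    has_neuropathy = np.asarray(has_neuropathy, dtype=bool)
    predicted_positive = values < cutoff          # lower NFD -> test positive
    tp = np.sum(predicted_positive & has_neuropathy)
    fn = np.sum(~predicted_positive & has_neuropathy)
    tn = np.sum(~predicted_positive & ~has_neuropathy)
    fp = np.sum(predicted_positive & ~has_neuropathy)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(1)
nfd_controls = rng.normal(35.0, 8.0, 60)   # synthetic NFD for NDS <= 3
nfd_cases = rng.normal(22.0, 8.0, 40)      # synthetic NFD for NDS > 3
values = np.concatenate([nfd_controls, nfd_cases])
labels = np.concatenate([np.zeros(60, bool), np.ones(40, bool)])
sensitivity, specificity = sens_spec(values, labels, cutoff=27.8)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```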
Abstract:
There has been much conjecture of late as to whether the patentable subject matter standard contains a physicality requirement. The issue came to a head when the Federal Circuit introduced the machine-or-transformation test in In re Bilski and declared it to be the sole test for determining subject matter eligibility. Many commentators criticized the test, arguing that it is inconsistent with Supreme Court precedent and the need for the patent system to respond appropriately to all new and useful innovation in whatever form it arises. Those criticisms were vindicated when, on appeal, the Supreme Court in Bilski v. Kappos dispensed with any suggestion that the patentable subject matter test involves a physicality requirement. In this article, the issue is addressed from a normative perspective: it asks whether the patentable subject matter test should contain a physicality requirement. The conclusion reached is that it should not, because such a limitation is not an appropriate means of encouraging much of the valuable innovation we are likely to witness during the Information Age. It is contended that it is not only traditionally-recognized mechanical, chemical and industrial manufacturing processes that are patent eligible, but that patent eligibility extends to include non-machine implemented and non-physical methods that do not have any connection with a physical device and do not cause a physical transformation of matter. Concerns raised that there is a trend of overreaching commoditization or propertization, where the boundaries of patent law have been expanded too far, are unfounded since the strictures of novelty, nonobviousness and sufficiency of description will exclude undeserving subject matter from patentability. The argument made is that introducing a physicality requirement will have unintended adverse effects in various fields of technology, particularly those emerging technologies that are likely to have a profound social effect in the future.