Abstract:
Purpose To develop a signal processing paradigm for extracting ERG responses to temporal sinusoidal modulation with contrasts ranging from below perceptual threshold to suprathreshold, and to estimate the magnitude of intrinsic noise in ERG signals at different stimulus contrasts. Methods Photopic test stimuli were generated using a 4-primary Maxwellian view optical system. The 4-primary lights were sinusoidally temporally modulated in phase (36 Hz; 2.5-50% Michelson). The stimuli were presented in 1 s epochs separated by a 1 ms blank interval and repeated 160 times (160.16 s total duration) during recording of the continuous flicker ERG from the right eye using DTL fiber electrodes. After artefact rejection, the ERG signal was extracted using Fourier methods in each of the 1 s epochs where a stimulus was presented. The signal processing allows for computation of the intrinsic noise distribution in addition to the signal-to-noise ratio (SNR). Results We provide the first report that the ERG intrinsic noise distribution is independent of stimulus contrast, whereas SNR decreases linearly with decreasing contrast until the noise limit at ~2.5%. The 1 ms blank intervals between epochs de-correlated the ERG signal at the line frequency (50 Hz) and thus increased the SNR of the averaged response. We confirm that response amplitude increases linearly with stimulus contrast. The phase response shows a shallow positive relationship with stimulus contrast. Conclusions This new technique will enable recording of intrinsic noise in ERG signals above and below perceptual visual threshold, and is suitable for measurement of continuous rod and cone ERGs across a range of temporal frequencies, and of post-receptoral processing in the primary retinogeniculate pathways at low stimulus contrasts. The intrinsic noise distribution may have application as a biomarker for detecting changes in disease progression or treatment efficacy.
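The per-epoch Fourier extraction and SNR computation can be illustrated with a short sketch. The sampling rate, the synthetic 36 Hz epoch, and the use of flanking frequency bins as a noise surrogate are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def epoch_signal_and_noise(epoch, fs, f_stim=36.0):
    """Fourier amplitude at the stimulus frequency, plus a noise estimate
    from neighbouring frequency bins (an assumed noise surrogate)."""
    n = len(epoch)
    spec = np.abs(np.fft.rfft(epoch)) * 2.0 / n        # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))         # stimulus bin (36 Hz)
    noise = np.mean(spec[[k - 2, k - 1, k + 1, k + 2]])  # flanking bins
    return spec[k], noise

# synthetic 1 s epoch: a 36 Hz response of amplitude 1 plus Gaussian noise
fs = 1024
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 36 * t) + 0.05 * rng.standard_normal(fs)

sig, noise = epoch_signal_and_noise(epoch, fs)
snr = sig / noise      # signal-to-noise ratio for this epoch
```

Averaging the per-epoch noise estimates over all 160 epochs would then give the intrinsic noise distribution described in the abstract.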
Abstract:
A novel and economical experimental technique has been developed to assess industrial aerosol deposition in various idealized porous channel configurations. This examination of aerosol penetration in porous channels will assist engineers in optimizing designs for various engineering applications. Deposition patterns differ with porosity owing to the geometric configuration of the channel and the superficial inlet velocity. Interestingly, two configurations of similar porosity are found to exhibit significantly higher deposition fractions. Inertial impaction is pronounced at the leading edge of all obstacles, whereas particle build-up is observed at the trailing edge of the obstructions. A qualitative analysis shows that the numerical results are in good agreement with the experimental results.
Abstract:
INTRODUCTION There is a large range in the reported prevalence of end plate lesions (EPLs), sometimes referred to as Schmorl's nodes, in the general population (3.8-76%). One possible reason for this large range is the differences in definitions used by authors. Previous research has suggested that EPLs may potentially be a primary disturbance of growth plates that leads to the onset of scoliosis. The aim of this study was to develop a technique to measure the size, prevalence and location of EPLs on Computed Tomography (CT) images of scoliosis patients in a consistent manner. METHODS A detection algorithm was developed and applied to measure EPLs for five adolescent females with idiopathic scoliosis (average age 15.1 years, average major Cobb 60°). In this algorithm, the EPL definition was based on the lesion depth, the distance from the edge of the vertebral body and the gradient of the lesion edge. Existing low-dose CT scans of the patients' spines were segmented semi-automatically to extract 3D vertebral endplate morphology. Manual sectioning of any attachments between posterior elements of adjacent vertebrae and, if necessary, endplates was carried out before the automatic algorithm was used to determine the presence and position of EPLs. RESULTS EPLs were identified in 15 of the 170 (8.8%) endplates analysed, with an average depth of 3.1 mm. 73% of the EPLs were seen in the lumbar spine (11/15). A sensitivity study demonstrated that the algorithm was most sensitive to changes in the minimum gradient required at the lesion edge. CONCLUSION An imaging analysis technique for consistent measurement of the prevalence, location and size of EPLs on CT images has been developed. Although the technique was tested on scoliosis patients, it can be used to analyse other populations without observer errors in EPL definitions.
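The thresholded detection criteria (lesion depth, distance from the vertebral edge, edge gradient) can be sketched on a 1-D endplate height profile. The threshold values and the synthetic profile below are placeholders, not the calibrated values from the study.

```python
import numpy as np

def detect_epl(profile, x, depth_min=1.0, edge_margin=5, grad_min=0.5):
    """Flag an end plate lesion on a 1-D endplate height profile (mm).
    Thresholds (minimum lesion depth, rim margin in samples, minimum
    edge gradient) are illustrative, not the study's values."""
    interior = profile[edge_margin:-edge_margin]           # ignore the rim
    i = int(np.argmin(interior)) + edge_margin             # deepest point
    depth = float(np.median(profile) - profile[i])         # depth below plate
    grad = float(np.max(np.abs(np.gradient(profile, x))))  # steepest edge slope
    return (depth >= depth_min and grad >= grad_min), depth

# synthetic 40 mm endplate with a ~3 mm-deep depression at mid-span
x = np.linspace(0.0, 40.0, 81)
profile = -3.0 * np.exp(-((x - 20.0) ** 2) / 8.0)

found, depth = detect_epl(profile, x)
```

The sensitivity result in the abstract corresponds to varying `grad_min`: small changes in the required edge gradient change which candidate depressions qualify as EPLs.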
Abstract:
Law is narration: it is narrative, narrator and the narrated. As a narrative, the law is constituted by a constellation of texts – from official sources such as statutes, treaties and cases, to private arrangements such as commercial contracts, deeds and parenting plans. All are a collection of stories: cases are narrative contests of facts and rights; statutes are recitations of the substantive and procedural bases for social, economic and political interactions; private agreements are plots for future relationships, whether personal or professional. As a narrator, law speaks in the language of modern liberalism. It describes its world in abstractions rather than in concrete experience, universal principles rather than individual subjectivity. It casts people into ‘parties’ to legal relationships; structures human interactions into ‘issues’ or ‘problems’; and tells individual stories within larger narrative arcs such as ‘the rule of law’ and ‘the interests of justice’. As the narrated, the law is a character in its own story. The scholarship of law, for example, is a type of story-telling with law as its central character. For positivists, still the dominant group in the legal genre, law is a closed system of formal rules with an “immanent rationality” and its own “structure, substantive content, procedure and tradition,” dedicated to finality of judgment. For scholars inspired by the interpretative tradition in the humanities, law is a more ambivalent character, susceptible to influences from outside its realm and masking a hidden ideological agenda under its cloak of universality and neutrality. For social scientists, law is a protagonist on a wider social stage, impacting on society, the economy and the polity in often surprising ways.
Abstract:
Law is saturated with stories. People tell their stories to lawyers; lawyers tell their clients' stories to courts; and legislators develop regulation to respond to their constituents' stories of injustice or inequality. My approach to first-year legal education respects this narrative tradition. Both my curriculum design and assessment scheme in the compulsory first-year subject Australian Legal System deploy narrative methodology as the central teaching and learning device. Throughout the course, students work on resolving the problems of four hypothetical clients. Like a murder mystery, pieces of the puzzle come together as students learn more about legal institutions and the texts they produce, the process of legal research, the analysis and interpretation of primary legal sources, the steps in legal problem-solving, the genre conventions of legal writing style, the practical skills and ethical dimensions of professional practice, and critical inquiry into the normative underpinnings and impacts of the law. The assessment scheme mirrors this design. In their portfolio-based assignment, for example, students devise their own client profile, research the client's legal position and prepare a memorandum of advice.
Abstract:
The ratcheting behaviour of high-strength rail steel (Australian Standard AS1085.1) is studied in this work for the purpose of predicting wear and damage to the rail surface. Historically, researchers have used circular test coupons obtained from the rail head to conduct cyclic load tests, but according to hardness profile data, considerable variation exists across the rail head section. For example, the induction-hardened rail (AS1085.1) shows high hardness (400-430 HV100) up to four millimeters into the rail head’s surface, but hardness drops considerably beyond that depth. Given that cyclic test coupons five millimeters in diameter at the gauge area are usually taken from the rail sample, there is a high probability that the original surface properties of the rail do not apply across the entire test coupon and, therefore, data representing only average material properties are obtained. In the literature, disks (47 mm in diameter) for a twin-disk rolling contact test machine have been obtained directly from the rail sample and used to validate rolling contact fatigue wear models. The question arises: how accurate are such predictions? In this research paper, the effect of rail sampling position on the ratcheting behaviour of AS1085.1 rail steel was investigated using rectangular specimens. Uniaxial stress-controlled tests were conducted with samples obtained at four different depths to observe the ratcheting behaviour of each. Micro-hardness measurements of the test coupons were carried out to obtain a constitutive relationship to predict the effect of depth on the ratcheting behaviour of the rail material. This work ultimately assists the selection of valid material parameters for constitutive models in the study of rail surface ratcheting.
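The depth dependence of hardness that motivates the sampling-position study can be sketched as a simple interpolation over a measured micro-hardness profile. The profile values below are illustrative inventions, consistent only with the 400-430 HV100 hardened layer quoted above, not measured data from this work.

```python
import numpy as np

def hardness_at_depth(depth_mm, profile_depths, profile_hv):
    """Interpolate a micro-hardness profile (HV100 vs depth below the
    running surface) to estimate hardness at an arbitrary depth."""
    return float(np.interp(depth_mm, profile_depths, profile_hv))

depths = [0.0, 2.0, 4.0, 6.0, 10.0]         # mm below the rail head surface
hv = [430.0, 420.0, 400.0, 360.0, 330.0]    # HV100 (illustrative values)

h5 = hardness_at_depth(5.0, depths, hv)     # hardness midway in the drop-off
```

A coupon spanning several of these depths averages over very different material states, which is the sampling problem the abstract describes.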
Abstract:
Purified proteins are mandatory for molecular, immunological and cellular studies. However, purification of proteins from complex mixtures requires specialised chromatography methods (i.e., gel filtration, ion exchange, etc.) using fast protein liquid chromatography (FPLC) or high-performance liquid chromatography (HPLC) systems. Such systems are expensive, certain proteins require two or more different steps to reach sufficient purity, and recovery is generally low. The aim of this study was to develop a rapid, inexpensive and efficient gel-electrophoresis-based protein purification method using basic and readily available laboratory equipment. We used crude rye grass pollen extract to purify the major allergens Lol p 1 and Lol p 5 as the model protein candidates. Total proteins were resolved on a large primary gel, and Coomassie Brilliant Blue (CBB)-stained Lol p 1/5 allergens were excised and purified on a secondary "mini"-gel. Purified proteins were extracted from unstained separating gels and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblot analyses. Silver-stained SDS-PAGE gels resolved pure proteins (i.e., 875 μg of Lol p 1 recovered from 8 mg of crude starting material), while immunoblot analysis confirmed immunological reactivity of the purified proteins. This purification method is rapid, inexpensive, and efficient in generating proteins of sufficient purity for use in monoclonal antibody (mAb) production, protein sequencing and general molecular, immunological, and cellular studies.
Abstract:
Background Genetic testing is recommended when the probability of a disease-associated germline mutation exceeds 10%. Germline mutations are found in approximately 25% of individuals with phaeochromocytoma (PCC) or paraganglioma (PGL); however, genetic heterogeneity in PCC/PGL means many genes may require sequencing. A phenotype-directed iterative approach may limit costs but may also delay diagnosis, and will not detect mutations in genes not previously associated with PCC/PGL. Objective To assess whether whole exome sequencing (WES) was efficient and sensitive for mutation detection in PCC/PGL. Methods Whole exome sequencing was performed on blinded samples from eleven individuals with PCC/PGL and known mutations. Illumina TruSeq™ (Illumina Inc, San Diego, CA, USA) was used for exome capture of seven samples, and NimbleGen SeqCap EZ v3.0 (Roche NimbleGen Inc, Basel, Switzerland) for five samples (one sample was repeated). Massively parallel sequencing was performed on multiplexed samples. Sequencing data were called using the Genome Analysis Toolkit and annotated using ANNOVAR. Data were assessed for coding variants in RET, NF1, VHL, SDHD, SDHB, SDHC, SDHA, SDHAF2, KIF1B, TMEM127, EGLN1 and MAX. Target capture of five exome capture platforms was compared. Results Six of seven mutations were detected using Illumina TruSeq™ exome capture. All five mutations were detected using the NimbleGen SeqCap EZ v3.0 platform, including the mutation missed using Illumina TruSeq™ capture. Target capture for exons in known PCC/PGL genes differs substantially between platforms. Exome sequencing was inexpensive (<$A800 per sample for reagents) and rapid (results <5 weeks from sample reception). Conclusion Whole exome sequencing is sensitive, rapid and efficient for detection of PCC/PGL germline mutations. However, capture platform selection is critical to maximize sensitivity.
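The final assessment step, restricting called variants to coding changes in the known PCC/PGL genes, can be sketched as a simple filter. The record layout and the example variants are stand-ins for ANNOVAR-style annotation output, not real patient data.

```python
# the 12-gene panel listed in the Methods section
PCC_PGL_GENES = {"RET", "NF1", "VHL", "SDHD", "SDHB", "SDHC", "SDHA",
                 "SDHAF2", "KIF1B", "TMEM127", "EGLN1", "MAX"}

def panel_variants(records):
    """Keep coding (exonic) variants that fall in known PCC/PGL genes."""
    return [r for r in records
            if r["gene"] in PCC_PGL_GENES and r["func"] == "exonic"]

# hypothetical annotated calls (field names are illustrative)
calls = [
    {"gene": "VHL",   "func": "exonic",   "change": "c.245G>A"},
    {"gene": "VHL",   "func": "intronic", "change": "c.340+5G>C"},
    {"gene": "BRCA1", "func": "exonic",   "change": "c.68_69delAG"},
]
hits = panel_variants(calls)    # only the exonic VHL variant survives
```

The same exome data could later be re-filtered against an expanded gene set, which is the advantage over phenotype-directed iterative testing noted in the abstract.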
Thinking like Disney: Supporting the Disney method using ambient feedback based on group performance
Abstract:
The Disney method is a collaborative creativity technique that uses three roles - dreamer, realist and critic - to facilitate the consideration of different perspectives on a topic. For novices especially, it is important to obtain guidance in applying this method. One way is to provide groups with a trained moderator; however, feedback about the group’s behavior might interrupt the flow of the idea-finding process. We built and evaluated a system that provides ambient feedback to a group about the distribution of their statements among the three roles. Our preliminary field study indicates that groups supported by the system contribute more and use the roles in a more balanced way, while the visualization does not disrupt the group work.
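One plausible way to quantify the "balanced use of roles" that an ambient display could feed back is the normalised entropy of the role distribution. This metric and the toy session below are assumptions for illustration, not the measure used in the study.

```python
from collections import Counter
from math import log

ROLES = ("dreamer", "realist", "critic")

def role_balance(statements):
    """Normalised entropy of the role distribution: 1.0 means perfectly
    balanced use of the three Disney roles, 0.0 means one role only."""
    counts = Counter(role for role, _ in statements)
    total = sum(counts.values())
    probs = [counts.get(r, 0) / total for r in ROLES]
    h = -sum(p * log(p) for p in probs if p > 0)
    return h / log(len(ROLES))

# a hypothetical session log of (role, statement) pairs
session = [("dreamer", "what if..."), ("dreamer", "imagine..."),
           ("realist", "we'd need..."), ("critic", "the risk is...")]
balance = role_balance(session)    # between 0 and 1; here dreamer dominates
```

An ambient visualization could map this single number to an unobtrusive cue (e.g. colour), avoiding the interruption a verbal moderator would cause.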
Abstract:
The sheep (Ovis aries) is favored by many musculoskeletal tissue engineering groups as a large animal model because of its docile temperament and ease of husbandry. The size and weight of sheep are comparable to those of humans, which allows for the use of implants and fixation devices used in human clinical practice. The construction of a complementary DNA (cDNA) library can capture the expression of genes in both a tissue- and time-specific manner. cDNA libraries have been a consistent source of gene discovery ever since the technology became commonplace more than three decades ago. Here, we describe the construction of a cDNA library using cells derived from sheep bones based on the pBluescript cDNA kit. Thirty clones were picked at random and sequenced. This led to the identification of a novel gene, C12orf29, which our initial experiments indicate is involved in skeletal biology. We also describe a polymerase chain reaction-based cDNA clone isolation method that allows the isolation of genes of interest from a cDNA library pool. The techniques outlined here can be applied in-house by smaller tissue engineering groups to generate tools for biomolecular research for large preclinical animal studies, and they highlight the power of standard cDNA library protocols to uncover novel genes.
Abstract:
Membrane proteins play important roles in many biochemical processes and are also attractive targets of drug discovery for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein sequences, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall accuracies of prediction for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
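The general form of pseudo-amino acid composition combines the 20 amino acid frequencies with sequence-order correlation factors derived from a physicochemical property. A minimal sketch follows, using Kyte-Doolittle hydrophobicity as the single property and simplified correlation factors; the paper's full feature set, property choices and weights differ.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle hydrophobicity, one example physicochemical property
HYDRO = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9,
                      3.8, 1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2,
                      -0.9, -1.3]))

def pseaac(seq, lam=3, w=0.05):
    """Simplified pseudo-amino acid composition: 20 composition terms
    plus `lam` sequence-order correlation factors, normalised to sum 1."""
    comp = np.array([seq.count(a) for a in AA], float) / len(seq)
    h = np.array([HYDRO[a] for a in seq])
    # k-th correlation factor: mean squared property difference at lag k
    theta = np.array([np.mean((h[:-k] - h[k:]) ** 2)
                      for k in range(1, lam + 1)])
    feats = np.concatenate([comp, w * theta])
    return feats / feats.sum()

v = pseaac("MKTLLILAVVAAALA")    # a (20 + lam)-dimensional feature vector
```

Vectors of this kind would then be fed to the two-stage multi-class SVM after the feature selection step.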
Abstract:
Objective: To illustrate a new method for simplifying patient recruitment for advanced prostate cancer clinical trials using natural language processing techniques. Background: The identification of eligible participants for clinical trials is a critical factor in increasing patient recruitment rates and an important issue for discovery of new treatment interventions. The current practice of identifying eligible participants is highly constrained by manual processing of disparate sources of unstructured patient data. Informatics-based approaches can simplify the complex task of evaluating patients’ eligibility for clinical trials. We show that an ontology-based approach can address the challenge of matching patients to suitable clinical trials. Methods: The free-text descriptions of clinical trial criteria as well as patient data were analysed. A set of common inclusion and exclusion criteria was identified through consultations with expert clinical trial coordinators. A research prototype was developed using the Unstructured Information Management Architecture (UIMA) that identified SNOMED CT concepts in the patient data and clinical trial descriptions. The SNOMED CT concepts model the standard clinical terminology that can be used to represent and evaluate a patient’s inclusion/exclusion criteria for the clinical trial. Results: Our experimental research prototype demonstrates a semi-automated method for filtering patient records using common clinical trial criteria. Our method simplified the patient recruitment process. Discussions with clinical trial coordinators indicated that the efficiency of the patient recruitment process, measured in terms of information processing time, could be improved by 25%. Conclusion: A UIMA-based approach can resolve complexities in patient recruitment for advanced prostate cancer clinical trials.
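Once concepts have been extracted from free text, the eligibility comparison reduces to set operations between the concepts found in a patient record and a trial's inclusion/exclusion criteria. A minimal sketch, with readable tokens standing in for actual SNOMED CT concept identifiers:

```python
def eligible(patient_concepts, inclusion, exclusion):
    """A patient passes the filter when every inclusion concept is
    present in the record and no exclusion concept is."""
    return inclusion <= patient_concepts and not (exclusion & patient_concepts)

# hypothetical criteria and patient record (tokens stand in for SNOMED CT codes)
trial_inclusion = {"prostate_cancer", "metastatic"}
trial_exclusion = {"prior_chemotherapy"}

patient = {"prostate_cancer", "metastatic", "hypertension"}
print(eligible(patient, trial_inclusion, trial_exclusion))
```

In the prototype, the concept extraction itself is the hard part handled by the UIMA pipeline; this matching step is what makes the filtering semi-automatic.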
Abstract:
A global framework for linear stability analysis of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach is able to analyse all aspects of the dynamics: long-wave and short-wave behaviours, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We choose to focus on the design of the coefficients of cooperation, which weight the information from downstream vehicles. The coefficient tuning is performed and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
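The dispersion-relation idea can be sketched for a generic linearised CF model: perturbations of the form exp(λt − inθ) turn the coupled car dynamics into a quadratic in λ for each wave number θ, and the platoon is linearly stable when every non-trivial root has negative real part. The model form and coefficients below are a textbook illustration, not the specific cooperative models analysed in the paper.

```python
import numpy as np

def max_growth(f_s, f_v, f_dv=0.0, n_theta=200):
    """Largest real part of the dispersion-relation roots over wave
    numbers theta in (0, pi] for the generic linearised CF model
        dv_n/dt = f_s*(x_{n-1} - x_n) + f_dv*(v_{n-1} - v_n) - f_v*v_n .
    A negative value indicates a linearly stable platoon."""
    worst = -np.inf
    for th in np.linspace(0.01, np.pi, n_theta):
        g = np.exp(1j * th) - 1.0                 # shift factor e^{i*theta} - 1
        # characteristic equation: lambda^2 + (f_v - f_dv*g)*lambda - f_s*g = 0
        roots = np.roots([1.0, f_v - f_dv * g, -f_s * g])
        worst = max(worst, roots.real.max())
    return worst

# for f_dv = 0 the long-wave stability boundary is f_s = f_v**2 / 2
stable = max_growth(f_s=0.4, f_v=1.0)     # below the boundary
unstable = max_growth(f_s=0.6, f_v=1.0)   # above the boundary
```

Increasing `f_dv`, the weight on the relative-velocity (cooperative) term, enlarges the stable region, which is the kind of coefficient tuning the paper studies.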
Abstract:
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach with high deterministic order, allowing larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that, as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited to are those with larger populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as users must typically run many thousands of simulations.
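The Euler τ-leap used as the comparison baseline is easy to sketch for a single decay reaction A → ∅ with propensity c·A: reaction counts in each fixed step of length τ are drawn from a Poisson distribution. This is the baseline method only, not the Stochastic Bulirsch-Stoer scheme itself; the rate and step size below are arbitrary illustration values.

```python
import numpy as np

def euler_tau_leap(x0, c, tau, t_end, rng):
    """Euler tau-leap for the decay reaction A -> 0 with propensity c*A.
    The fixed step `tau` trades accuracy for speed, the trade-off that
    higher-order schemes such as Stochastic Bulirsch-Stoer improve on."""
    x, t = x0, 0.0
    while t < t_end:
        k = rng.poisson(c * x * tau)   # reaction firings in this leap
        x = max(x - k, 0)              # population cannot go negative
        t += tau
    return x

rng = np.random.default_rng(1)
finals = [euler_tau_leap(1000, c=1.0, tau=0.01, t_end=1.0, rng=rng)
          for _ in range(200)]
mean_final = float(np.mean(finals))    # near the exact mean 1000*exp(-1)
```

For large populations like this, a τ-leap takes ~100 steps where Gillespie's exact algorithm would simulate ~630 individual reaction events, which is the speed motivation given in the abstract.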
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of machines, and the amount is so small that it is difficult to measure accurately. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and the method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The required test duration was long and the wear quantities achieved were small. To minimise the test duration, short tests of about 90 min duration were conducted and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is a simple but crude procedure. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. This method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings indicates that this equipment has high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurements in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that change with the progression of wear in circular-shaped components; thus, out-of-roundness is found to be an effective wear measuring parameter that relates to change in geometry. The method of increasing the sensitivity and enlarging the out-of-roundness traces is original work through which the area of the worn cross-section can be determined and weight loss derived for materials of known density with higher precision.
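The weight-loss derivation described above, worn cross-sectional area from the enlarged trace multiplied by bearing length and material density, is simple enough to sketch. The area, length and density figures below are hypothetical inputs for illustration, not measurements from the study.

```python
def wear_mass_loss(worn_area_mm2, bearing_length_mm, density_g_cm3):
    """Weight loss inferred from the worn cross-section measured on an
    enlarged out-of-roundness trace: mass = density * area * length."""
    volume_cm3 = worn_area_mm2 * bearing_length_mm / 1000.0  # mm^3 -> cm^3
    return density_g_cm3 * volume_cm3                        # grams

# e.g. a 0.8 mm^2 worn section on a 20 mm long bearing of a ~7.3 g/cm^3 alloy
loss = wear_mass_loss(0.8, 20.0, 7.3)   # grams
```

This conversion is what would need automating to replace the photocopier-and-transparency step the limitations paragraph describes.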