964 results for Probable Number Technique
Abstract:
Saliva contains a number of biochemical components that may be useful for diagnosing and monitoring metabolic disorders, and as markers of cancer or heart disease. Saliva collection is attractive as a non-invasive sampling method for infants and elderly patients. We present a method suitable for saliva collection from neonates. We have applied this technique to the determination of salivary nucleotide metabolites. Saliva was collected from 10 healthy neonates using washed cotton swabs, and directly from 10 adults. Two methods for saliva extraction from oral swabs were evaluated. The analytes were then separated using high-performance liquid chromatography (HPLC) with tandem mass spectrometry (MS/MS). The limits of detection for 14 purine/pyrimidine metabolites were variable, ranging from 0.01 to 1.0 µM. Recovery of hydrophobic purine/pyrimidine metabolites from cotton tips was consistently high using water/acetonitrile extraction (92.7–111%) compared with water extraction alone. The concentrations of these metabolites were significantly higher in neonatal saliva than in adult saliva. Preliminary ranges for nucleotide metabolites in neonatal and adult saliva are reported. Hypoxanthine and xanthine were grossly raised in neonates (49.3 ± 25.4 and 30.9 ± 19.5 µM, respectively) compared with adults (4.3 ± 3.3 and 4.6 ± 4.5 µM); nucleosides were also markedly raised in neonates. This study focuses on three essential details: contamination of oral swabs during manufacturing and how to overcome it; weighing swabs to accurately measure small saliva volumes; and methods for extracting saliva metabolites of interest from cotton swabs. A method is described for determining nucleotide metabolites using HPLC with photo-diode array detection or MS/MS. The advantages of utilising saliva are highlighted. Nucleotide metabolites were not simply in equilibrium with plasma, but may be actively secreted into saliva, and this process is more active in neonates than in adults. © 2013 Elsevier B.V. All rights reserved.
Abstract:
PURPOSE The purpose of this study was to demonstrate the potential of near infrared (NIR) spectroscopy for characterizing the health and degenerative state of articular cartilage based on the components of the Mankin score. METHODS Three models of osteoarthritic degeneration induced in laboratory rats by anterior cruciate ligament (ACL) transection, meniscectomy (MSX), and intra-articular injection of monoiodoacetate (MIA, 1 mg) were used in this study. Degeneration was induced in the right knee joint; each model group consisted of 12 rats (N = 36). After 8 weeks, the animals were euthanized and knee joints were collected. A custom-made diffuse reflectance NIR probe of 5-mm diameter was placed on the tibial and femoral surfaces, and spectral data were acquired from each specimen in the wavenumber range of 4,000 to 12,500 cm⁻¹. After spectral data acquisition, the specimens were fixed and safranin O staining (SOS) was performed to assess disease severity based on the Mankin scoring system. Using multivariate statistical analysis, with spectral preprocessing and a wavelength selection technique, the spectral data were then correlated to the structural integrity (SI), cellularity (CEL), and matrix staining (SOS) components of the Mankin score for all the samples tested. RESULTS ACL models showed mild cartilage degeneration, MSX models had moderate degeneration, and MIA models showed severe cartilage degenerative changes both morphologically and histologically. Our results reveal significant linear correlations between the NIR absorption spectra and the SI (R² = 94.78%), CEL (R² = 88.03%), and SOS (R² = 96.39%) parameters of all samples in the models. In addition, clustering of the samples according to their level of degeneration, with respect to the Mankin components, was also observed.
CONCLUSIONS NIR spectroscopic probing of articular cartilage can potentially provide critical information about the health of articular cartilage matrix in early and advanced stages of osteoarthritis (OA). CLINICAL RELEVANCE This rapid nondestructive method can facilitate clinical appraisal of articular cartilage integrity during arthroscopic surgery.
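The multivariate calibration step described above can be sketched in code. The abstract does not specify the regression family used, so the following uses principal component regression on synthetic spectra purely as an illustrative stand-in; all array sizes, variable names and noise levels are invented for the sketch.

```python
import numpy as np

# Regress a Mankin-like histology score on NIR spectra via principal
# component regression (PCR): mean-centre the spectra, project onto the
# leading principal components, then fit the score by least squares.

rng = np.random.default_rng(3)
n_samples, n_wavenumbers = 36, 200          # 36 joints, as in the study
# One synthetic absorption band whose strength tracks the score.
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 80) / 15.0) ** 2)
score = rng.uniform(0, 14, n_samples)       # Mankin-like score per joint
spectra = np.outer(score, band) + 0.05 * rng.standard_normal((n_samples, n_wavenumbers))

Xc = spectra - spectra.mean(axis=0)         # mean-centred spectra
yc = score - score.mean()
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                       # number of components retained
T = U[:, :k] * S[:k]                        # PC scores of each sample
beta = np.linalg.lstsq(T, yc, rcond=None)[0]
pred = T @ beta + score.mean()
r2 = 1 - np.sum((score - pred) ** 2) / np.sum((score - score.mean()) ** 2)
```

With a dominant score-linked band, the fitted R² is high, mirroring the strong linear correlations the study reports.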
Abstract:
Most surgeons cement the tibial component in total knee replacement surgery. Mid-term registry data from a number of countries, including the United Kingdom and Australia, support the excellent survivorship of cemented tibial components. In spite of this success, results can always be improved, and cementing technique can play a role. Cementing technique on the tibia is not standardized, and surgeons still differ about the best ways to deliver cement into the cancellous bone of the upper tibia. Questions remain regarding whether to use a gun or a syringe to inject the cement into the cancellous bone of the tibial plateau. The ideal cement penetration into the tibial plateau is debated, though most reports suggest that 4 mm to 10 mm is ideal. Thicker mantles are thought to be dangerous due to the risk of bone necrosis, but there is little in the literature to support this contention...
Abstract:
This thesis proposes three novel models that extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and largely untested model-choice statistic is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model-choice case study is the first of its kind to contain interesting results using so-called unit information prior distributions.
Abstract:
In an estuary, mixing and dispersion result from the combination of large-scale advection and small-scale turbulence, both of which are complex to estimate. A field study was conducted in a small sub-tropical estuary in which high-frequency (50 Hz) turbulence data were recorded continuously for about 48 hours. A triple decomposition technique was introduced to isolate the contributions of tides, resonance and turbulence in the flow field. A striking feature of the data set was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude under neap tide conditions. The triple decomposition technique allowed a characterisation of the broader temporal scales of the high-frequency fluctuation data sampled during a number of full tidal cycles.
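The triple decomposition idea can be sketched as a moving-average separation of scales (the study's actual filter design is not specified in the abstract, so the window lengths below are illustrative): a long window isolates the tidal component, an intermediate band captures the slow fluctuations, and the residual is the turbulence. The synthetic record is sampled at 10 Hz rather than 50 Hz to keep the example light.

```python
import numpy as np

# Build a synthetic velocity record u = tide + slow fluctuation + turbulence,
# then recover the three parts: u = u_tide + u_slow + u_turb.
fs = 10.0                        # Hz (illustrative; the field data were 50 Hz)
t = np.arange(0, 7200, 1 / fs)   # two hours of record
rng = np.random.default_rng(0)
tide = 0.5 * np.sin(2 * np.pi * t / (12.42 * 3600))   # semi-diurnal tide
slow = 0.1 * np.sin(2 * np.pi * t / 600)              # slow fluctuation
turb = 0.02 * rng.standard_normal(t.size)             # turbulence
u = tide + slow + turb

def moving_avg(x, win):
    """Centred moving average; windows are truncated at the record ends."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    out = np.empty_like(x)
    h = win // 2
    for i in range(x.size):
        lo, hi = max(0, i - h), min(x.size, i + h)
        out[i] = (c[hi] - c[lo]) / (hi - lo)
    return out

u_tide = moving_avg(u, int(3600 * fs))                # long window: tide
u_slow = moving_avg(u, int(10 * fs)) - u_tide         # band: slow fluctuations
u_turb = u - u_tide - u_slow                          # residual: turbulence
```

Because the turbulence is defined as the residual, the three components sum back to the original record exactly; away from the record ends, the middle band tracks the imposed slow fluctuation closely.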
Abstract:
The estimation of the critical gap has been an issue since the 1970s, when gap acceptance was introduced to evaluate the capacity of unsignalized intersections. The critical gap is the shortest gap that a driver is assumed to accept. A driver's critical gap cannot be measured directly, and a number of techniques have been developed to estimate the mean critical gap of a sample of drivers. This paper reviews the ability of the Maximum Likelihood technique and the Probability Equilibrium Method (PEM) to predict the mean and standard deviation of the critical gap, using a simulation of 100 drivers repeated 100 times for each flow condition. The Maximum Likelihood method gave consistent and unbiased estimates of the mean critical gap, whereas the PEM had a significant bias that depended on the flow in the priority stream. Both methods were reasonably consistent, although the Maximum Likelihood method was slightly better. If drivers are inconsistent, then again the Maximum Likelihood method is superior. A criticism levelled at the Maximum Likelihood method is that a distribution of the critical gap has to be assumed; it was shown that this does not significantly affect its ability to predict the mean and standard deviation of the critical gaps. Finally, the Maximum Likelihood method can produce reasonable estimates with observations from 25 to 30 drivers, and the PEM can be improved if the maximum rejected gap is used. A spreadsheet procedure for applying the Maximum Likelihood method is provided in this paper.
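The Maximum Likelihood approach can be viewed as interval-censored estimation: each driver's unobservable critical gap lies between the largest gap they rejected and the gap they accepted. The sketch below assumes a log-normal critical-gap distribution and maximises the resulting likelihood by a coarse, dependency-free grid search over synthetic drivers; all names and parameter values are illustrative, not the paper's implementation.

```python
import math
import numpy as np

# For driver i with largest rejected gap r_i and accepted gap a_i, the
# interval-censored log-normal likelihood is
#   L(mu, sigma) = prod_i [ Phi((ln a_i - mu)/sigma) - Phi((ln r_i - mu)/sigma) ]
# where Phi is the standard normal CDF.

def norm_cdf(z):
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])

def log_likelihood(mu, sigma, r, a):
    p = norm_cdf((np.log(a) - mu) / sigma) - norm_cdf((np.log(r) - mu) / sigma)
    return float(np.sum(np.log(np.maximum(p, 1e-300))))

# Synthetic drivers with a true mean critical gap of about 4 s.
rng = np.random.default_rng(42)
n = 200
crit = rng.lognormal(mean=np.log(4.0), sigma=0.25, size=n)  # unobservable
r = crit * rng.uniform(0.60, 0.99, n)   # largest rejected gap, below critical
a = crit * rng.uniform(1.01, 1.50, n)   # accepted gap, above critical

# Coarse grid search keeps the sketch free of optimisation libraries.
grid = [(mu, s) for mu in np.linspace(np.log(2), np.log(8), 61)
                for s in np.linspace(0.05, 0.8, 40)]
mu_hat, sigma_hat = max(grid, key=lambda g: log_likelihood(g[0], g[1], r, a))
mean_gap = math.exp(mu_hat + sigma_hat ** 2 / 2)  # mean critical gap (s)
```

With a couple of hundred synthetic drivers the recovered mean critical gap lands close to the 4 s used to generate the data, consistent with the paper's finding that the method is unbiased.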
Abstract:
The DVD, Jump into Number, was a joint project between Independent Schools Queensland, Queensland University of Technology and Catholic Education (Diocese of Cairns) aimed at improving mathematical practice in the early years. Independent Schools Queensland Executive Director Dr John Roulston said the invaluable teaching resource features a series of unscripted lessons which demonstrate the possibilities of learning among young Indigenous students. “Currently there is a lack of teaching resources for numeracy in younger students, especially from pre-Prep to Year 3, which is such an important stage of a child’s early education. Jump into Number is a benchmark for all teachers to learn more about the mathematical development of younger students,” Dr Roulston said.
Abstract:
This monograph provides an overview of recruitment learning approaches from a computational perspective. Recruitment learning is a unique machine learning technique that: (1) explains the physical or functional acquisition of new neurons in sparsely connected networks as a biologically plausible neural network method; (2) facilitates the acquisition of new knowledge to build and extend knowledge bases and ontologies as an artificial intelligence technique; (3) allows learning by use of background knowledge and a limited number of observations, consistent with psychological theory.
Abstract:
A monolithic stationary phase was prepared via free radical co-polymerization of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA) with pore diameter tailored specifically for plasmid binding, retention and elution. The polymer was functionalized with 2-chloro-N,N-diethylethylamine hydrochloride (DEAE-Cl) for anion-exchange purification of plasmid DNA (pDNA) from clarified lysate obtained from E. coli DH5α-pUC19 culture in a ribonuclease/protease-free environment. Characterization of the monolithic resin showed a porous material, with 68% of the pores in the matrix having diameters above 300 nm. The final product isolated from a single-stage 5 min anion-exchange purification was a pure and homogeneous supercoiled (SC) pDNA with no gDNA, RNA or protein contamination, as confirmed by ethidium bromide agarose gel electrophoresis (EtBr-AGE), enzyme restriction analysis and sodium dodecyl sulfate-polyacrylamide gel electrophoresis. This non-toxic technique is cGMP compatible and highly scalable for production of pDNA on a commercial level.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. 
We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
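The MTM technique can be sketched on a one-dimensional pore, assuming a symmetric nearest-neighbour walk with reflecting walls (the geometry and jump rules here are illustrative, not a specific system from the paper): the single-timestep transition matrix is raised to a matrix power to obtain the propagator at any diffusion time.

```python
import numpy as np

# Markov transition matrix (MTM) sketch: N lattice nodes in a closed pore.
# Each timestep a particle jumps left or right with probability 0.5;
# jumps into a wall are reflected back onto the same node.

def transition_matrix(n_nodes):
    """Single-timestep jump probabilities; column i is the distribution
    reached in one step from node i."""
    P = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        for step in (-1, 1):
            j = i + step
            if 0 <= j < n_nodes:
                P[j, i] += 0.5      # jump to the neighbouring node
            else:
                P[i, i] += 0.5      # reflected at the pore wall
    return P

def propagator(n_nodes, n_steps):
    """Restricted diffusion propagator after n_steps timesteps: column k
    is the distribution of a particle that started at node k."""
    P = transition_matrix(n_nodes)
    return np.linalg.matrix_power(P, n_steps)

G = propagator(5, 100)
# Probability is conserved (columns sum to 1), and at long diffusion times
# the distribution relaxes to uniform across the closed pore.
```

The matrix-power step is what makes the method attractive for arbitrary diffusion times: once the single-step matrix is built for the unit cell, no stochastic sampling is needed, so the propagator is noiseless.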
Abstract:
We show that the cluster ion concentration (CIC) in the atmosphere is significantly suppressed during events that involve rapid increases in particle number concentration (PNC). Using a neutral cluster and air ion spectrometer, we investigated changes in CIC during three types of particle enhancement processes – new particle formation, a bushfire episode and an intense pyrotechnic display. In all three cases, the total CIC decreased with increasing PNC, with the rate of decrease being greater for negative CIC than positive. We attribute this to the greater mobility, and hence the higher attachment coefficient, of negative ions over positive ions in the air. During the pyrotechnic display, the rapid increase in PNC was sufficient to reduce the CIC of both polarities to zero. At the height of the display, the negative CIC stayed at zero for a full 10 min. Although the PNCs were not significantly different, the CIC during new particle formation did not decrease as much as during the bushfire episode and the pyrotechnic display. We suggest that the rate of increase of PNC, together with particle size, also play important roles in suppressing CIC in the atmosphere.
Abstract:
Bearing faults are the most common cause of wind turbine failures. As wind turbines rapidly proliferate in electric networks, their unavailability and maintenance costs are becoming critically important. Early fault detection can reduce outage time and costs. This paper proposes Anomaly Detection (AD) machine learning algorithms for fault diagnosis of wind turbine bearings. The method was applied to a real data set, and the results are presented in this paper. For validation and comparison, a set of baseline results is produced using the popular one-class SVM method to examine the ability of the proposed technique to detect incipient faults.
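The core anomaly-detection idea is to train on healthy data only and flag deviations from it. As a minimal, dependency-free illustration, the sketch below uses a Mahalanobis-distance detector on synthetic vibration features as a stand-in for the paper's algorithms and its one-class SVM baseline; all feature values and thresholds are invented.

```python
import numpy as np

# Fit a Gaussian model to healthy-bearing feature vectors (e.g. RMS and
# kurtosis of a vibration signal), then flag test points whose Mahalanobis
# distance from the healthy cloud exceeds a training-derived threshold.

rng = np.random.default_rng(1)
healthy = rng.normal([1.0, 3.0], [0.1, 0.3], size=(500, 2))  # training set
mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def score(x):
    """Mahalanobis distance of a feature vector from the healthy cloud."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold at the 99th percentile of training scores (~1% false alarms).
thresh = np.percentile([score(x) for x in healthy], 99)

normal_point = np.array([1.02, 3.1])     # typical healthy features
faulty_point = np.array([2.5, 8.0])      # incipient-fault-like signature
is_fault = [bool(score(p) > thresh) for p in (normal_point, faulty_point)]
```

The same train-on-healthy, score-on-test pattern applies when the detector is replaced by a one-class SVM, as in the paper's baseline.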
Abstract:
This paper explores a gap in serious game design research: the ambiguity surrounding the process of aligning the instructional objectives of serious games with their core-gameplay, i.e. the moment-to-moment activity that is the core of player interaction. A core-gameplay focused design framework is proposed that can work alongside existing, more broadly focused serious game design frameworks. The framework utilises an inquiry-based approach that allows the serious game designer to use key questions as a means to align instructional objectives with the core-gameplay. The use of this design framework is considered in the context of a small section of gameplay from an educational game currently in development. This demonstration of the framework shows how instructional objectives can be embedded into a serious game's core-gameplay.