913 results for "Successive Overrelaxation method with 2 parameters"


Relevance: 100.00%

Abstract:

BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 μL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin ("housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol lysis reagent produced a high yield of total RNA (0.89–7.1 μg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. The commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at −80 °C for >2 years. CONCLUSIONS: Our in-house QIAzol method is robust and simple, provides RNA at high yields, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.

Relevance: 100.00%

Abstract:

This paper develops maximum likelihood (ML) estimation schemes for finite-state semi-Markov chains in white Gaussian noise. We assume that the semi-Markov chain is characterised by transition probabilities of known parametric form with unknown parameters. We reformulate this hidden semi-Markov model (HSM) problem in the scalar case as a two-vector homogeneous hidden Markov model (HMM) problem in which the state consists of the signal augmented by the time since the last transition. With this reformulation we apply the expectation-maximisation (EM) algorithm to obtain ML estimates of the transition probability parameters, Markov state levels and noise variance. To demonstrate our proposed schemes, motivated by neuro-biological applications, we use a damped sinusoidal parameterised function for the transition probabilities.
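
The reformulation above — pairing the signal level with the time since the last transition so that the pair evolves as an ordinary Markov chain — can be sketched as follows. The two-level signal, the noise level and the damped-sinusoid hazard parameters are all illustrative assumptions, not the authors' values:

```python
import numpy as np

# Illustrative dwell-time-dependent jump probability (a stand-in for the
# paper's damped sinusoidal parameterisation; a, b, c are hypothetical).
def hazard(d, a=0.3, b=0.5, c=0.9):
    return float(np.clip(a + b * np.exp(-0.1 * d) * np.sin(c * d) ** 2, 0.0, 1.0))

def simulate(n_steps, levels=(0.0, 1.0), noise_std=0.2, seed=0):
    """Simulate the augmented chain (level, dwell time) observed in white noise."""
    rng = np.random.default_rng(seed)
    state, dwell, ys = 0, 1, []
    for _ in range(n_steps):
        ys.append(levels[state] + noise_std * rng.standard_normal())
        if rng.random() < hazard(dwell):  # jump: switch level, reset dwell clock
            state, dwell = 1 - state, 1
        else:                             # stay: dwell clock ticks up
            dwell += 1
    return np.array(ys)

y = simulate(500)
```

On such simulated data, the EM iterations described in the abstract would alternate between smoothing over the augmented (level, dwell) state space and re-estimating the hazard parameters, state levels and noise variance.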

Relevance: 100.00%

Abstract:

A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation from the system of ordinary differential equations at each time step; this amounts to treating the fractional Riesz space nonlinear reaction-diffusion model as one whose nonlinear source term is only locally Lipschitz. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and the stability and convergence are discussed in detail in the context of the local Lipschitz property. Some numerical examples are given to show the consistency of our computational approach.
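
The shifted Grünwald–Letnikov approximation mentioned above is built from the weights g_k = (-1)^k C(α, k), which obey a simple recurrence. A minimal sketch (the recurrence is standard; the sanity check below uses the fact that for α = 2 the weights collapse to the classical second-difference stencil 1, −2, 1):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald–Letnikov weights g_k = (-1)^k * binom(alpha, k), k = 0..n,
    via the recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

# In the shifted scheme, the discrete fractional operator at grid point i
# combines these weights with a one-point shift: sum_k g_k * u[i - k + 1].
w = gl_weights(1.8, 6)  # a typical space-fractional order between 1 and 2
```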

Relevance: 100.00%

Abstract:

The finite element method adaptively divides a continuous domain with complex geometry into discrete, simple subdomains by using approximate element functions, and continuous element loads are converted into nodal loads by the traditional lumping and consistent load methods. These methods standardise a plethora of element loads into a typical numerical procedure, but the element load effect is then restricted to the nodal solution: accurate continuous element solutions that include element load effects are available only discretely at the element nodes, and are further limited to either the displacement or the force field, depending on which type of approximate function is derived. On the other hand, analytical stability functions can give accurate continuous element solutions due to element loads. Unfortunately, their expressions are so diverse and distinct for different element loads that they deter a unified numerical routine for practical applications. To this end, this paper presents a displacement-based finite element formulation (the generalised element load method) that accommodates a plethora of element load effects in a uniform fashion, which the stability functions cannot achieve, and that generates continuous first- and second-order elastic displacement and force solutions along an element without considerable loss of accuracy relative to the analytical approach, which neither the lumping nor the consistent load method can achieve. Hence, the salient and unique features of the generalised element load method are its robustness, versatility and accuracy in continuous element solutions under a great diversity of transverse element loads.
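
The consistent load method referred to above converts an element load into equivalent nodal forces by weighting it with the element's shape functions. A minimal sketch for a two-node Euler–Bernoulli beam element under a uniformly distributed load (the UDL case and the midpoint quadrature are illustrative choices, not the paper's generalised method):

```python
import numpy as np

def hermite_shape(xi, L):
    """Cubic Hermite shape functions of a 2-node beam element, xi in [0, 1]."""
    return np.array([
        1 - 3 * xi**2 + 2 * xi**3,
        L * (xi - 2 * xi**2 + xi**3),
        3 * xi**2 - 2 * xi**3,
        L * (xi**3 - xi**2),
    ])

def consistent_load(w, L, n_points=400):
    """Consistent nodal load vector f = integral of N(x)^T * w over the
    element, approximated here by a midpoint quadrature."""
    xis = (np.arange(n_points) + 0.5) / n_points
    return sum(hermite_shape(xi, L) for xi in xis) * (w * L / n_points)

# The exact consistent vector for a UDL is [wL/2, wL^2/12, wL/2, -wL^2/12];
# the lumped alternative would simply be [wL/2, 0, wL/2, 0].
f = consistent_load(w=10.0, L=2.0)
```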

Relevance: 100.00%

Abstract:

Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of the machines, and the amount is so small that it is difficult to measure with accuracy. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test duration was long and the wear quantities achieved were small. To minimise the test duration, short tests of about 90 min duration were conducted and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method, and it was further refined by enlarging the out-of-roundness traces on a photocopier. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method with some refinements was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable. 
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude; an automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to bearing wear shows high potential for use as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes as wear progresses in circular components, and it is thus an effective wear measuring parameter that relates to change in geometry. The method of increasing the sensitivity and enlarging the out-of-roundness traces is original work through which the area of the worn cross-section can be determined and weight loss derived, for materials of known density, with higher precision.
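
The weight-loss calculation described above reduces to simple geometry: the worn cross-sectional area measured from the enlarged out-of-roundness trace, multiplied by the bearing's axial length and the material density. A sketch with hypothetical numbers (the area, length and copper-alloy density are made up for illustration):

```python
def wear_mass(worn_area_mm2, axial_length_mm, density_g_cm3):
    """Weight loss in grams derived from the worn cross-section of a bearing."""
    volume_cm3 = worn_area_mm2 * axial_length_mm / 1000.0  # mm^3 -> cm^3
    return volume_cm3 * density_g_cm3

# e.g. a 0.05 mm^2 worn area over a 25 mm long bearing of density 8.9 g/cm^3
m = wear_mass(0.05, 25.0, 8.9)
```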

Relevance: 100.00%

Abstract:

The floating-zone method with different growth ambiences has been used to selectively obtain hexagonal or orthorhombic DyMnO3 single crystals. The crystals were characterized by x-ray powder diffraction of ground specimens and a structure refinement as well as electron diffraction. We report magnetic susceptibility, magnetization and specific heat studies of this multiferroic compound in both the hexagonal and the orthorhombic structure. The hexagonal DyMnO3 shows magnetic ordering of Mn3+ (S = 2) spins on a triangular Mn lattice at T_N(Mn) = 57 K, characterized by a cusp in the specific heat. This transition is not apparent in the magnetic susceptibility due to the frustration on the Mn triangular lattice and the dominating paramagnetic susceptibility of the Dy3+ (S = 9/2) spins. At T_N(Dy) = 3 K, a partial antiferromagnetic order of Dy moments has been observed. In comparison, the magnetic data for orthorhombic DyMnO3 display three transitions. The data broadly agree with results from earlier neutron diffraction experiments, which allows for the following assignment: a transition to an incommensurate antiferromagnetic ordering of Mn3+ spins at T_N(Mn) = 39 K, a lock-in transition at T_lock-in = 16 K and a second antiferromagnetic transition at T_N(Dy) = 5 K due to the ordering of Dy moments. Both the hexagonal and the orthorhombic crystals show magnetic anisotropy and complex magnetic properties due to 4f-4f and 4f-3d couplings.

Relevance: 100.00%

Abstract:

In the present investigation, two nozzle configurations are used for spray deposition: a convergent nozzle (nozzle-A), and a convergent nozzle with a 2 mm parallel portion attached at its end (nozzle-C) without changing the exit area. First, the conditions for subambient aspiration pressure, i.e., the pressure at the tip of the melt delivery tube, are established by varying the protrusion length of the melt delivery tube at different applied gas pressures for both nozzles. Using these conditions, spray deposits are successfully and reproducibly obtained for 7075 Al alloy. The effect of applied gas pressure, flight distance, and nozzle configuration on various characteristics of spray deposition, viz., yield, melt flow rate, and gas-to-metal ratio, is examined. The over-spray powder is also characterized with respect to powder size distribution, shape, and microstructure. Some of the results are explained with the help of numerical analysis presented in an earlier article.

Relevance: 100.00%

Abstract:

This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herbal plant Angelica archangelica. The effect of extraction parameters was studied, and sensory analyses of the extracts were performed by an expert panel; the results of the sensory analyses were compared to the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds, and very low (cryogenic) temperatures were applied to substantially enhance the separation efficiency of chiral SFC. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The process production rate, which is a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterified fish oil by using reversed-phase SFC. Chiral separation of racemic mixtures of guaifenesin and ferulic acid dimer ethyl ester was optimized by using the response surface method with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content) the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared to published results for a similar application. A novel pressure-compensated design of industrial high-pressure chromatographic column was introduced, using technology developed in building the deep-sea submersibles Mir 1 and 2. A demonstration SFC plant was built and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia. 
A smaller semi-pilot-size column of similar design was used for cryogenic chiral separation of the aromatase inhibitor Finrozole for use in its phase 2 development.

Relevance: 100.00%

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which we call Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
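
The Bayesian stopping quantity involved can be sketched as the posterior probability, under a Beta prior, that the response rate exceeds an uninteresting level p0. The prior, thresholds and grid integration below are illustrative assumptions, not the authors' design:

```python
import numpy as np

def posterior_prob_above(p0, n, x, a=1.0, b=1.0, grid=200000):
    """P(response rate > p0 | x responses in n patients) under a Beta(a, b)
    prior; the posterior is Beta(a + x, b + n - x), integrated numerically
    here to keep the sketch dependency-free."""
    p = np.linspace(0.0, 1.0, grid + 1)[1:-1]  # open interval (0, 1)
    logpost = (a + x - 1.0) * np.log(p) + (b + n - x - 1.0) * np.log(1.0 - p)
    dens = np.exp(logpost - logpost.max())
    dens /= dens.sum()                          # normalise on the grid
    return float(dens[p > p0].sum())

# e.g. a hypothetical futility look: stop if the posterior probability of
# exceeding p0 = 0.2 falls below a prespecified threshold of 0.05
stop_for_futility = posterior_prob_above(0.2, n=15, x=1) < 0.05
```

Choosing the two thresholds so that the resulting frequentist Type I and Type II error rates stay at their nominal levels is the calibration step the abstract describes.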

Relevance: 100.00%

Abstract:

The widespread and increasing resistance of internal parasites to anthelmintic control is a serious problem for the Australian sheep and wool industry. As part of control programmes, laboratories use the Faecal Egg Count Reduction Test (FECRT) to determine resistance to anthelmintics. It is important to have confidence in the measure of resistance, not only for the producer planning a drenching programme but also for companies investigating the efficacy of their products. The determination of resistance and corresponding confidence limits as given in anthelmintic efficacy guidelines of the Standing Committee on Agriculture (SCA) is based on a number of assumptions. This study evaluated the appropriateness of these assumptions for typical data and compared the effectiveness of the standard FECRT procedure with the effectiveness of alternative procedures. Several sets of historical experimental data from sheep and goats were analysed to determine that a negative binomial distribution was a more appropriate distribution to describe pre-treatment helminth egg counts in faeces than a normal distribution. Simulated egg counts for control animals were generated stochastically from negative binomial distributions and those for treated animals from negative binomial and binomial distributions. Three methods for determining resistance when percent reduction is based on arithmetic means were applied. The first was that advocated in the SCA guidelines, the second similar to the first but basing the variance estimates on negative binomial distributions, and the third using Wadley’s method with the distribution of the response variate assumed negative binomial and a logit link transformation. These were also compared with a fourth method recommended by the International Co-operation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) programme, in which percent reduction is based on the geometric means. 
A wide selection of parameters was investigated, and for each set 1000 simulations were run. Percent reduction and confidence limits were then calculated for each method, together with the number of times in each set of 1000 simulations that the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. These simulations provide the basis for setting conditions under which the methods could be recommended. The authors show that, given the distribution of helminth egg counts found in Queensland flocks, the method based on arithmetic rather than geometric means should be used, and suggest that resistance be redefined as occurring when the upper level of percent reduction is less than 95%. At least ten animals per group are required in most circumstances, though even 20 may be insufficient where the effectiveness of the product is close to the cut-off point for defining resistance.
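
The core FECRT statistic compared across these methods is the percent reduction computed from arithmetic group means. A minimal sketch (the counts are made-up illustrations, not study data):

```python
import numpy as np

def percent_reduction(control_epg, treated_epg):
    """FECRT efficacy from arithmetic means: %R = 100 * (1 - T / C)."""
    return 100.0 * (1.0 - np.mean(treated_epg) / np.mean(control_epg))

# Hypothetical post-treatment egg counts (epg) for control and treated groups
r = percent_reduction([800, 1200, 950, 1050], [40, 60, 30, 70])
```

Under the redefinition suggested above, resistance would be declared when the upper confidence limit of %R falls below 95%, rather than from the point estimate alone.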

Relevance: 100.00%

Abstract:

Background: The Ewing sarcoma family of tumors (ESFT) comprises rare but highly malignant neoplasms that occur mainly in bone but also in soft tissue. ESFT affects patients typically in their second decade of life, whereby children and adolescents bear the heaviest incidence burden. Despite recent advances in the clinical management of ESFT patients, their prognosis and survival are still disappointingly poor, especially in cases with metastasis. No targeted therapy for ESFT patients is currently available. Moreover, based merely on current clinical and biological characteristics, accurate classification of ESFT patients often fails at the time of diagnosis. Therefore, there is a constant need for novel molecular biomarkers to be applied in tandem with conventional parameters to further intensify ESFT risk-stratification and treatment selection, and ultimately to develop novel targeted therapies. In this context, a greater understanding of the genetics and immune characteristics of ESFT is needed. Aims: This study sought to open novel insights into gene copy number changes and gene expression in ESFT and, further, to elucidate the role of inflammation in ESFT. For this purpose, microarrays were used to provide gene-level information on a genome-wide scale. In addition, this study focused on screening of 9p21.3 deletion sizes and frequencies in ESFT and in another pediatric cancer, acute lymphocytic leukemia (ALL), in order to define more exact criteria for high-risk patient selection and to provide data for developing a more reliable diagnostic method to detect CDKN2A deletions. Results: In study I, 20 novel ESFT-associated suppressor genes and oncogenes were pinpointed using combined array CGH and expression analysis. In addition, interesting chromosomal rearrangements were identified: (1) Duplication of the derivative chromosome der(22)(11;22) was detected in three ESFT patients. 
This duplication included the EWSR1-FLI1 fusion gene, leading to an increase in its copy number; (2) Cryptic amplifications on chromosomes 20 and 22 were detected, suggesting a novel translocation between chromosomes 20 and 22, which most probably produces a fusion between EWSR1 and NFATC2. In study II, bioinformatic analysis of ESFT expression profiles showed that inflammatory gene activation is detectable in ESFT patient samples and that the activation is characterized by macrophage gene expression. Most interestingly, ESFT patient samples were shown to express certain inflammatory genes that were prognostically significant. High local expression of C5 and JAK1 at the tumor site was shown to be associated with favorable clinical outcome, whereas high local expression of IL8 was shown to be detrimental. Studies III and IV showed that the smallest overlapping region of deletion in 9p21.3 includes CDKN2A in all cases and that the length of this region is 12.2 kb in both Ewing sarcoma and ALL. Furthermore, our results showed that the most widely used commercial CDKN2A FISH probe creates false-negative results in the narrowest microdeletion cases (<190 kb). Therefore, more accurate methods should be developed for the detection of deletions in the CDKN2A locus. Conclusions: This study provides novel insights into the genetic changes involved in the biology of ESFT, in the interaction between ESFT cells and the immune system, and in the inactivation of CDKN2A. Novel ESFT biomarker genes identified in this study serve as a useful resource for future studies and for developing novel therapeutic strategies to improve the survival of patients with ESFT.

Relevance: 100.00%

Abstract:

The objective of the present study was to establish a valid transformation method for Haemophilus parasuis, the causative agent of Glasser's disease in pigs, using a novel H. parasuis-Escherichia coli shuttle vector. A 4.2 kb endogenous plasmid, pYC93, was extracted from an H. parasuis field isolate and completely sequenced. Analysis of pYC93 revealed a region of approximately 800 bp showing high homology with the defined replication origin oriV of pLS88, a native plasmid identified in Haemophilus ducreyi. Based on the origin region of pYC93, the E. coli cloning vector pBluescript SK(+) and the Tn903-derived kanamycin cassette, a shuttle vector pSHK4 was constructed by an overlapping PCR strategy. When electroporation of the 15 H. parasuis serovar reference strains and one clinical isolate, SH0165, with pSHK4 was performed, only one of these strains yielded transformants, with an efficiency of 8.5 × 10² CFU/μg of DNA. Transformation efficiency was notably increased (1.3 × 10⁵ CFU/μg of DNA) with vector DNA reisolated from the homologous transformants. This demonstrated that restriction-modification systems were involved in the barrier to transformation of H. parasuis. By utilizing an in vitro DNA modification method with cell-free extracts of the host H. parasuis strains, 15 out of 16 strains were transformable. The novel shuttle vector pSHK4 and the established electrotransformation method constitute useful tools for the genetic manipulation of H. parasuis to gain a better understanding of the pathogen. (C) 2011 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

The in vivo faecal egg count reduction test (FECRT) is the most commonly used test to detect anthelmintic resistance (AR) in gastrointestinal nematodes (GIN) of ruminants in pasture-based systems. However, there are several variations on the method, some more appropriate than others in specific circumstances. While in some cases labour and time can be saved by collecting only post-drench faecal worm egg counts (FEC) of treatment groups with controls, or pre- and post-drench FEC of a treatment group with no controls, there are circumstances when pre- and post-drench FEC of an untreated control group as well as of the treatment groups are necessary. Computer simulation techniques were used to determine the most appropriate of several methods for calculating AR when there is continuing larval development during the testing period, as often occurs when anthelmintic treatments against genera of GIN with high biotic potential or high re-infection rates, such as Haemonchus contortus of sheep and Cooperia punctata of cattle, are less than 100% efficacious. Three field FECRT experimental designs were investigated: (I) post-drench FEC of treatment and control groups, (II) pre- and post-drench FEC of a treatment group only and (III) pre- and post-drench FEC of treatment and control groups. To investigate the performance of methods of indicating AR for each of these designs, simulated animal FEC were generated from negative binomial distributions, with subsequent sampling from binomial distributions to account for drench effect, with varying parameters for worm burden, larval development and drench resistance. Calculations of percent reductions and confidence limits were based on those of the Standing Committee for Agriculture (SCA) guidelines. For the two field methods with pre-drench FEC, confidence limits were also determined from cumulative inverse Beta distributions of FEC, for eggs per gram (epg) and the number of eggs counted, at detection levels of 50 and 25. 
Two rules for determining AR: (1) %reduction (%R) < 95% and lower confidence limit <90%; and (2) upper confidence limit <95%, were also assessed. For each combination of worm burden, larval development and drench resistance parameters, 1000 simulations were run to determine the number of times the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. When continuing larval development occurs during the testing period of the FECRT, the simulations showed AR should be calculated from pre- and post-drench worm egg counts of an untreated control group as well as from the treatment group. If the widely used resistance rule 1 is used to assess resistance, rule 2 should also be applied, especially when %R is in the range 90 to 95% and resistance is suspected.
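
The simulation step described above — negative binomial pre-drench counts thinned binomially by the drench — can be sketched as follows. The mean count, dispersion and efficacy values are arbitrary illustrations:

```python
import numpy as np

def simulate_fec(n_animals, mean_epg, k, efficacy, rng):
    """Pre-drench FEC from a negative binomial with mean mean_epg and
    dispersion k; post-drench FEC by binomial thinning of each animal's
    count with survival probability 1 - efficacy."""
    p = k / (k + mean_epg)  # numpy's (n, p) parameterisation of the NB
    pre = rng.negative_binomial(k, p, size=n_animals)
    post = rng.binomial(pre, 1.0 - efficacy)
    return pre, post

rng = np.random.default_rng(42)
pre, post = simulate_fec(n_animals=20, mean_epg=500.0, k=0.7, efficacy=0.98, rng=rng)
```

Repeating such draws 1000 times and applying the two resistance rules to each replicate reproduces the kind of comparison made above; adding a stochastic increment to the counts would model continuing larval development.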

Relevance: 100.00%

Abstract:

Glaucoma is the second leading cause of blindness worldwide. Often, glaucomatous damage to the optic nerve head (ONH) and ONH changes occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this paper, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1-norm and L2-norm, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the topographic change analysis (TCA) method. The IMED and L2-norm parameters in the POD framework provided the highest AUCs of 0.94 at a 10° field of imaging and 0.91 at a 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures the instrument measurement variability and inherent structure variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management.
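
The baseline-subspace step can be sketched with a plain SVD: flattened baseline topographies define the subspace, and the L2-norm of the part of a follow-up exam not explained by it quantifies change. This is a generic sketch of POD-by-SVD, not the authors' implementation or parameter set:

```python
import numpy as np

def pod_baseline(baseline_topos, rank):
    """POD basis of flattened baseline topographies (one exam per row)."""
    mean = baseline_topos.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline_topos - mean, full_matrices=False)
    return mean, vt[:rank].T  # columns span the baseline subspace

def l2_change(followup, mean, basis):
    """L2-norm of the follow-up component outside the baseline subspace."""
    d = followup - mean
    return float(np.linalg.norm(d - basis @ (basis.T @ d)))
```

A follow-up exam that lies in the baseline subspace yields a residual near zero; structural change adds a component outside the subspace, and its norm (or the IMED analogue) is what is thresholded for detection.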

Relevance: 100.00%

Abstract:

The transfer matrix method is known to be well suited for a complete analysis of a lumped as well as distributed element, one-dimensional, linear dynamical system with a marked chain topology. However, general subroutines of the type available for classical matrix methods are not available in the current literature on transfer matrix methods. In the present article, general expressions for various aspects of analysis, viz., the natural frequency equation, modal vectors, forced response and filter performance, have been evaluated in terms of a single parameter, referred to as the velocity ratio. Subprograms have been developed for use with the transfer matrix method for the evaluation of the velocity ratio and related parameters. It is shown that a given system, branched or straight-through, can be completely analysed in terms of these basic subprograms on a stored-program digital computer. It is observed that the transfer matrix method with the velocity ratio approach has certain advantages over the existing general matrix methods in the analysis of one-dimensional systems.
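
A minimal sketch of the transfer matrix idea for the simplest case: a fixed-base chain of massless springs and point masses with state vector (displacement, force). The chain values below are hypothetical; natural frequencies are the zeros of the force residual at the free end:

```python
import numpy as np

def chain_matrix(omega, masses, stiffs):
    """Overall transfer matrix of a fixed-base spring-mass chain at frequency
    omega; each unit is a massless spring followed by a point mass."""
    T = np.eye(2)
    for m, k in zip(masses, stiffs):
        spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])   # x gains F/k
        mass = np.array([[-0.0 + 1.0, 0.0], [-m * omega**2, 1.0]])  # F drops by m*w^2*x
        T = mass @ spring @ T
    return T

def free_end_residual(omega, masses, stiffs):
    # Force at the free end per unit force at the fixed base; the zeros of
    # this residual in omega are the chain's natural frequencies.
    return chain_matrix(omega, masses, stiffs)[1, 1]
```

For a single unit with m = 2 and k = 8 the residual is 1 - m*omega^2/k, which vanishes at omega = sqrt(k/m) = 2, matching the textbook result; the velocity ratio and branched-system subprograms described above build on products of the same elementary matrices.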