967 results for Rich Description Method
Abstract:
Background: Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results: In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that, as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions: The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as many thousands of simulations must typically be run.
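Since the abstract benchmarks against the Euler τ-leap, a minimal sketch of that baseline may help fix ideas. This is a generic illustration of τ-leaping, not the authors' Stochastic Bulirsch-Stoer solver, and the isomerisation system at the end is a made-up example:

```python
import numpy as np

def euler_tau_leap(x0, stoich, propensity, tau, t_end, rng=None):
    """One realisation of the Euler tau-leap method.

    x0         : initial copy numbers (1-D array)
    stoich     : (n_reactions, n_species) state-change vectors
    propensity : function x -> array of reaction propensities
    tau        : fixed leap size
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    t = 0.0
    while t < t_end:
        a = propensity(x)
        # Number of firings of each reaction channel in [t, t + tau)
        k = rng.poisson(a * tau)
        x = np.maximum(x + k @ stoich, 0.0)  # clamp negative populations
        t += tau
    return x

# Toy example: isomerisation A -> B with rate constant c = 0.5
stoich = np.array([[-1, 1]])
prop = lambda x: np.array([0.5 * x[0]])
final = euler_tau_leap([1000, 0], stoich, prop, tau=0.01, t_end=1.0)
```

Larger values of tau speed up the simulation but bias the moments, which is precisely the trade-off that higher-order schemes of the kind described above aim to soften.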
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start and stop of the machines, and the amount of wear is so small that it is difficult to measure with accuracy. Various wear measuring techniques have been used, among which out-of-roundness was found to be the most reliable method of measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities. The method proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test durations were long and the wear quantities achieved were small. To minimise test duration, short tests of about 90 min duration were conducted and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and it proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. This method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings has high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurements in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes with the progression of wear in circular-shaped components, and it is thus found to be an effective wear measuring parameter that relates to change in geometry. The method of increasing sensitivity by enlarging out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and weight loss can be derived, for materials of known density, with higher precision.
Abstract:
A novel combined near- and mid-infrared (NIR and MIR) spectroscopic method has been researched and developed for the analysis of complex substances such as the Traditional Chinese Medicine (TCM) Illicium verum Hook. F. (IVHF) and its noxious adulterant, Illicium lanceolatum A.C. Smith (ILACS). Three types of spectral matrix were submitted for classification with the use of the linear discriminant analysis (LDA) method. The data were pretreated with either the successive projections algorithm (SPA) or the discrete wavelet transform (DWT) method. The SPA method performed somewhat better, principally because it required fewer spectral features for its pretreatment model. Thus, the NIR and MIR matrices, as well as the combined NIR/MIR one, were pretreated by the SPA method and then analysed by LDA. This approach enabled the prediction and classification of the IVHF, ILACS and mixed samples. The MIR spectral data produced somewhat better classification rates than the NIR data. However, the best results were obtained from the combined NIR/MIR data matrix, with 95–100% correct classifications for calibration, validation and prediction. Principal component analysis (PCA) of the three types of spectral data supported the results obtained with the LDA classification method.
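As a rough sketch of the classification step, the code below fits a two-class Fisher discriminant (the binary core of LDA) to synthetic stand-ins for SPA-selected spectral features. Fusing the NIR and MIR blocks by simple concatenation is an assumption made here for illustration, not necessarily the data-fusion strategy used in the study:

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher discriminant: returns (weights, decision threshold)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    # Pooled within-class scatter
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())  # midpoint of projected means
    return w, thr

def fisher_lda_predict(X, w, thr):
    return (X @ w > thr).astype(int)

rng = np.random.default_rng(0)
n = 80
# Synthetic stand-ins for SPA-selected NIR (5 features) and MIR (4 features) data
nir = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(1.5, 1.0, (n, 5))])
mir = np.vstack([rng.normal(0.0, 1.0, (n, 4)), rng.normal(1.5, 1.0, (n, 4))])
y = np.repeat([0, 1], n)

X = np.hstack([nir, mir])  # low-level fusion: concatenate the NIR and MIR blocks
w, thr = fisher_lda_fit(X, y)
acc = (fisher_lda_predict(X, w, thr) == y).mean()
```

The same two-class machinery extends to the three-group problem (IVHF, ILACS, mixed) via multi-class LDA.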
Abstract:
A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, the least squares support vector machine was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated with the use of the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time, as well as in the costs of the special analytical reagents required for the standard individual methods, NIRS offered a very attractive alternative for the simultaneous analysis of mint samples.
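For orientation, the "commonly used partial least squares" baseline mentioned above can be written as a minimal PLS1 (NIPALS) regression in a few lines. The data here are synthetic stand-ins for spectra and analyte values, not the mint measurements, and this is not the LS-SVM model the study found best:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) regression: returns (coef, x_mean, y_mean)."""
    x_mean, y_mean = X.mean(0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)      # weight vector
        t = Xk @ w                     # scores
        tt = t @ t
        p = Xk.T @ t / tt              # X loadings
        qa = yk @ t / tt               # y loading
        Xk = Xk - np.outer(t, p)       # deflate X and y
        yk = yk - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)
    return coef, x_mean, y_mean

def pls1_predict(X, coef, x_mean, y_mean):
    return (X - x_mean) @ coef + y_mean

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))                        # stand-in for spectra
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)  # stand-in analyte
coef, xm, ym = pls1_fit(X, y, n_components=10)
resid = y - pls1_predict(X, coef, xm, ym)
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
```

In practice the number of components is chosen by cross-validation rather than set equal to the number of features as in this toy fit.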
Abstract:
In this paper, we aim at predicting protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and a linear kernel based on secondary structure element alignment scores is then constructed for training a support vector machine classifier without parameter adjustment. Our method SSEAKSVM was evaluated on two low-homology data sets, 25PDB and 1189, with sequence homologies of 25% and 40%, respectively. The jackknife test is used to evaluate our method and compare it with other existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by other existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar.
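To make the precomputed-kernel idea concrete, the sketch below builds a linear Gram matrix from hypothetical alignment-score feature vectors and trains a kernel perceptron on it. The perceptron is a simple stand-in for the SVM used by SSEAKSVM, and the score vectors are synthetic, not real SSEA outputs:

```python
import numpy as np

def kernel_perceptron(K, y, epochs=20):
    """Train a kernel perceptron from a precomputed Gram matrix K.

    A parameter-free stand-in for training an SVM on a precomputed kernel;
    y must contain labels in {-1, +1}.
    """
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            # Mistake-driven update on the kernel expansion
            if np.sign(K[i] @ (alpha * y)) != y[i]:
                alpha[i] += 1.0
    return alpha

rng = np.random.default_rng(1)
# Hypothetical alignment-score feature vectors for two structural classes
S = np.vstack([rng.normal(-1, 1, (30, 6)), rng.normal(1, 1, (30, 6))])
y = np.repeat([-1, 1], 30)

K = S @ S.T                        # linear kernel built from the score vectors
alpha = kernel_perceptron(K, y)
acc = (np.sign(K @ (alpha * y)) == y).mean()
```

Because the kernel is linear and fixed by the alignment scores, no kernel hyperparameter tuning is needed, mirroring the "without parameter adjustment" property claimed above.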
A novel human leucocyte antigen-DRB1 genotyping method based on multiplex primer extension reactions
Abstract:
We have developed and validated a semi-automated fluorescent method of genotyping human leucocyte antigen (HLA)-DRB1 alleles, HLA-DRB1*01-16, by multiplex primer extension reactions. This method is based on the extension of a primer that anneals immediately adjacent to the single-nucleotide polymorphism with fluorescent dideoxynucleotide triphosphates (minisequencing), followed by analysis on an ABI Prism 3700 capillary electrophoresis instrument. The validity of the method was confirmed by genotyping 261 individuals using both this method and polymerase chain reaction with sequence-specific primer (PCR-SSP) or sequencing and by demonstrating Mendelian inheritance of HLA-DRB1 alleles in families. Our method provides a rapid means of performing high-throughput HLA-DRB1 genotyping using only two PCR reactions followed by four multiplex primer extension reactions and PCR-SSP for some allele groups. In this article, we describe the method and discuss its advantages and limitations.
Abstract:
Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h. At this ultra-high speed, rotors currently in use generally have small volume capacity. Therefore, preparations of large volumes of high-titre vectors are time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% of PLL (w/v) was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4 °C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10⁸ transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vectors formed complexes, which probably facilitated precipitation at the low concentration speed (10 000 g), a speed which does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals and human gene therapy trials, in which large amounts of lentiviral vectors are a prerequisite.
Abstract:
By using the method of characteristics, the effect of the footing-soil interface friction angle (δ) on the bearing capacity factor Nγ was computed for a strip footing. The analysis was performed by employing a curved trapped wedge under the footing base; this wedge joins the footing base at a distance Bt from the footing edge. For a given footing width (B), the value of Bt increases continuously with a decrease in δ. For δ = 0, no trapped wedge exists below the footing base, that is, Bt/B = 0.5. On the contrary, with δ = φ, the point of emergence of the trapped wedge approaches the footing edge with an increase in φ. The magnitude of Nγ increases substantially with an increase in δ/φ. The maximum depth of the plastic zone becomes greater for higher values of δ/φ. The results from the present analysis were found to compare well with those reported in the literature.
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and final conclusions. The Authors claim to have presented a new method of determining Cv. However, the note does not contain a new method. In fact, the method proposed is an attempt to generate settlement vs. time data using only two values of (t, δ). The Authors have used the rectangular hyperbola method to determine Cv from the predicted δ–t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted Cv values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
Reaction of 6-acetoxy-5-bromomethylquinoline (1c) and 2-bromomethyl-4-(2'-pyridyl)phenyl acetate (2b) with tetrachlorocatechol in acetone in the presence of anhydrous potassium carbonate resulted in the formation of diastereomeric products 3c, 3d, 4e and 4f.
Abstract:
Neutron diffraction measurements were carried out on GeₓSe₁₋ₓ glasses, where 0.1 ≤ x ≤ 0.4, in a Q interval of 0.55-13.8 Å⁻¹. The first sharp diffraction peak (FSDP) in the structure factor, S(Q), shows a systematic increase in intensity and shifts to lower Q with increasing Ge concentration. The coherence length of the FSDP increases with x and reaches a maximum for 0.33 ≤ x ≤ 0.4. The Monte Carlo method due to Soper is used to generate S(Q) and also the pair correlation function, g(r). The generated S(Q) is in agreement with the experimental data for all x. Analysis of the first four peaks in the total correlation function, T(r), shows that the short-range order in GeSe₂ glass is due to Ge(Se₁/₂)₄ tetrahedra, in agreement with earlier reports. Se-rich glasses contain Se chains which are cross-linked with Ge(Se₁/₂)₄ tetrahedra. Ge₂(Se₁/₂)₆ molecular units are the basic structural units in the Ge-rich, x = 0.4, glass. For x = 0.2, 0.33 and 0.4 there is evidence for some of the tetrahedra being in an edge-shared configuration. The number of edge-shared tetrahedra in these glasses increases with increasing Ge content.
Abstract:
We present a generalization of the finite volume evolution Galerkin scheme [M. Lukacova-Medvid'ova, J. Saibertova, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comput. Phys. 183 (2002) 533-562; M. Lukacova-Medvid'ova, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] to hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces, we use a multi-dimensional approximate evolution operator. The latter is constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium, a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.
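The FVEG scheme itself is genuinely multi-dimensional, but the basic finite volume update with a spatially varying wave speed can be sketched in one dimension. The Lax-Friedrichs flux below is a standard first-order stand-in, far simpler than the bicharacteristic evolution operator of the paper:

```python
import numpy as np

def lax_friedrichs_step(u, c, dx, dt):
    """One conservative Lax-Friedrichs finite volume step for
    u_t + (c(x) u)_x = 0 on a periodic grid."""
    f = c * u                                        # flux with varying speed
    # Interface flux at i-1/2: average flux plus numerical dissipation
    fl = 0.5 * (np.roll(f, 1) + f) - 0.5 * dx / dt * (u - np.roll(u, 1))
    fr = np.roll(fl, -1)                             # flux at i+1/2
    return u - dt / dx * (fr - fl)

nx = 200
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx                       # cell centres
c = np.where(x < 0.5, 1.0, 0.5)                      # discontinuous wave speed
u = np.exp(-200 * (x - 0.25) ** 2)                   # initial pulse
dt = 0.4 * dx / c.max()                              # CFL-limited timestep
for _ in range(100):
    u = lax_friedrichs_step(u, c, dx, dt)
```

Because the update is written in flux-difference form, the total cell mass is conserved exactly, a property the finite volume corrector step of the FVEG method shares.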
Abstract:
The growing interest in co-created reading experiences in both digital and print formats raises interesting questions for creative writers who work in the space of interactive fiction. This essay argues that writers have not abandoned experiments with co-creation in print narratives in favour of the attractions of the digital environment, as might be assumed from the discourse on digital development. Rather, interactive print narratives, in particular ‘reader-assembled narratives’, demonstrate a rich history of experimentation and continue to engage writers who wish to craft individual reading experiences for readers and to experiment with their own creative process as writers. The reader-assembled narrative has been used for many different reasons: for some writers, such as BS Johnson, it is a method of problem solving; for others, like Robert Coover, it is a way to engage the reader in a more playful sense. Authors such as Marc Saporta, BS Johnson, and Robert Coover have engaged with this type of narrative play. This examination considers the narrative experimentation of these authors as a way of offering insights into creative practice for contemporary creative writers.
Abstract:
Taylor (1948) suggested a method for determining the settlement, d, corresponding to 90% consolidation, utilizing the characteristics of the plot of the degree of consolidation, U, versus the square root of the time factor, √T. Based on the properties of the slope of the U versus √T curve, a new method is proposed to determine d corresponding to any U above 70% consolidation for evaluation of the coefficient of consolidation, Cv. The effects of secondary consolidation on the Cv value at different percentages of consolidation can be studied. Cv values closer to the field values can be determined in less time as compared to Taylor's method. At any U between 75 and 95% consolidation, Cv(U) due to the new method lies between Taylor's Cv and Casagrande's Cv.
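For reference, the U versus √T relationship that both Taylor's construction and the proposed slope-based method rest on comes from Terzaghi's one-dimensional consolidation series. A short sketch of that standard theory (not the authors' new procedure):

```python
import numpy as np

def degree_of_consolidation(T, n_terms=100):
    """Average degree of consolidation U(T) from Terzaghi's series solution."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2
    return 1.0 - np.sum((2.0 / M**2) * np.exp(-M**2 * T))

# Early in the process, U is linear in sqrt(T) with slope 2/sqrt(pi);
# this straight portion is what Taylor's square-root-of-time plot exploits.
# T = 0.848 is the standard time factor at U = 90%.
U90 = degree_of_consolidation(0.848)
U_early = degree_of_consolidation(0.05)
```

On a laboratory settlement curve, Taylor's method locates the 90% point by intersecting the data with a line whose slope is 1.15 times that of this initial straight portion.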
Abstract:
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)₂HP₂O₇]·4H₂O and [Co(en)₂H₂P₃O₁₀]·2H₂O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents of various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.