967 results for Average Method
Abstract:
Double diffusive Marangoni convection flow of a viscous, incompressible, electrically conducting fluid in a square cavity is studied in this paper, taking into consideration the effects of an applied magnetic field in an arbitrary direction and of a chemical reaction. The governing equations are solved numerically using the alternating direction implicit (ADI) method together with the successive over-relaxation (SOR) technique. The flow pattern under the effect of the governing parameters, namely the buoyancy ratio W, the diffusocapillary ratio w, and the Hartmann number Ha, is investigated. The numerical simulations reveal that the average Nusselt number decreases, whereas the average Sherwood number increases, as the orientation of the magnetic field is shifted from horizontal to vertical. Moreover, the effect of buoyancy due to species concentration on the flow is stronger than that due to thermal buoyancy. The increase in diffusocapillary parameter, w caus
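The SOR technique named in this abstract is a standard iterative solver. As a hedged illustration (not the paper's coupled ADI–SOR Marangoni solver), the sketch below applies SOR to a model Laplace problem on a square grid, with an assumed unit value on the top wall standing in for the driven boundary:

```python
import numpy as np

def sor_laplace(n=20, omega=1.5, tol=1e-8, max_iter=10000):
    """Solve Laplace's equation on an n-by-n grid with SOR.

    Illustrative only: top wall held at 1, other walls at 0. The paper
    couples ADI time-stepping with SOR; this shows the SOR kernel alone.
    """
    u = np.zeros((n, n))
    u[0, :] = 1.0  # assumed top boundary condition
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Over-relaxed Gauss-Seidel update of the 5-point stencil
                new = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
                max_diff = max(max_diff, abs(new - u[i, j]))
                u[i, j] = new
        if max_diff < tol:  # converged
            break
    return u

field = sor_laplace()
```

The relaxation factor omega between 1 and 2 accelerates plain Gauss-Seidel; the value 1.5 here is a reasonable but not tuned choice for this grid size.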
Abstract:
Membrane proteins play important roles in many biochemical processes and are attractive drug discovery targets for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein structures, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall accuracies of prediction for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
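The composition part of such a feature construction can be sketched in a few lines. The `features` helper below is hypothetical: it uses only the 20 amino-acid frequencies plus the mean Kyte-Doolittle hydropathy as a single physicochemical property, which is far simpler than the paper's pseudo-amino acid composition with feature selection:

```python
# Sketch: turn a primary sequence into a simple feature vector of the
# 20 amino-acid frequencies plus one averaged physicochemical property.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPATHY = {  # Kyte-Doolittle hydropathy index
    'A': 1.8, 'C': 2.5, 'D': -3.5, 'E': -3.5, 'F': 2.8, 'G': -0.4,
    'H': -3.2, 'I': 4.5, 'K': -3.9, 'L': 3.8, 'M': 1.9, 'N': -3.5,
    'P': -1.6, 'Q': -3.5, 'R': -4.5, 'S': -0.8, 'T': -0.7, 'V': 4.2,
    'W': -0.9, 'Y': -1.3}

def features(seq):
    seq = seq.upper()
    n = len(seq)
    comp = [seq.count(a) / n for a in AMINO_ACIDS]   # composition fractions
    hydro = sum(HYDROPATHY[a] for a in seq) / n      # mean hydropathy
    return comp + [hydro]

vec = features("MKVLAAGIVK")  # 21-dimensional feature vector
```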
Abstract:
Background Nicotiana benthamiana is an allo-tetraploid plant, which can be challenging for de novo transcriptome assemblies due to homeologous and duplicated gene copies. Transcripts generated from such genes can be distinct yet highly similar in sequence, with markedly differing expression levels. This can lead to unassembled, partially assembled or mis-assembled contigs. Due to the different properties of de novo assemblers, no one assembler with any one given parameter space can re-assemble all possible transcripts from a transcriptome. Results In an effort to maximise the diversity and completeness of de novo assembled transcripts, we utilised four de novo transcriptome assemblers, TransAbyss, Trinity, SOAPdenovo-Trans, and Oases, using a range of k-mer sizes and different input RNA-seq read counts. We complemented the parameter space biologically by using RNA from 10 plant tissues. We then combined the output of all assemblies into a large super-set of sequences. Using a method from the EvidentialGene pipeline, the combined assembly was reduced from 9.9 million de novo assembled transcripts to about 235,000, of which about 50,000 were classified as primary. Metrics such as average bit-scores, feature response curves and the ability to distinguish paralogous or homeologous transcripts indicated that the EvidentialGene-processed assembly was of high quality. Of 35 RNA silencing gene transcripts, 34 were identified as assembled to full length, whereas in a previous assembly using only one assembler, 9 of these were partially assembled. Conclusions To achieve a high quality transcriptome, it is advantageous to implement and combine the output from as many different de novo assemblers as possible. We have in essence taken the ‘best’ output from each assembler while minimising sequence redundancy. We have also shown that simultaneous assessment of a variety of metrics, not just focused on contig length, is necessary to gauge the quality of assemblies.
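The redundancy-reduction idea can be caricatured in a few lines. The toy `dedupe` helper below is hypothetical and only drops contigs that are exact substrings of longer ones; the EvidentialGene step the paper actually used classifies transcripts by their coding regions and is considerably more sophisticated:

```python
# Toy redundancy reduction: keep one representative per group of
# contigs where the shorter is an exact substring of the longer.
def dedupe(contigs):
    kept = []
    # Visit unique contigs longest-first so representatives are maximal
    for seq in sorted(set(contigs), key=len, reverse=True):
        if not any(seq in longer for longer in kept):
            kept.append(seq)
    return kept

contigs = ["ATGGCCTTA", "GCCTT", "ATGGCCTTA", "TTACGG"]
reduced = dedupe(contigs)  # duplicate and contained contigs removed
```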
Abstract:
Objective: To illustrate a new method for simplifying patient recruitment for advanced prostate cancer clinical trials using natural language processing techniques. Background: The identification of eligible participants for clinical trials is a critical factor in increasing patient recruitment rates and an important issue for the discovery of new treatment interventions. The current practice of identifying eligible participants is highly constrained by the manual processing of disparate sources of unstructured patient data. Informatics-based approaches can simplify the complex task of evaluating patients' eligibility for clinical trials. We show that an ontology-based approach can address the challenge of matching patients to suitable clinical trials. Methods: The free-text descriptions of clinical trial criteria as well as patient data were analysed. A set of common inclusion and exclusion criteria was identified through consultations with expert clinical trial coordinators. A research prototype was developed using the Unstructured Information Management Architecture (UIMA) that identified SNOMED CT concepts in the patient data and clinical trial descriptions. The SNOMED CT concepts model the standard clinical terminology that can be used to represent and evaluate a patient's inclusion/exclusion criteria for a clinical trial. Results: Our experimental research prototype implements a semi-automated method for filtering patient records using common clinical trial criteria. Our method simplified the patient recruitment process. Discussions with clinical trial coordinators indicated that the efficiency of the patient recruitment process, measured in terms of information processing time, could be improved by 25%. Conclusion: A UIMA-based approach can resolve complexities in patient recruitment for advanced prostate cancer clinical trials.
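The core matching step can be sketched with plain set operations: a patient record and a trial are each reduced to sets of coded concepts (SNOMED CT in the paper), and eligibility means every inclusion concept is present and no exclusion concept is. The concept identifiers below are placeholders, not real SNOMED CT codes:

```python
# Eligibility as set containment/disjointness over coded concepts.
def eligible(patient_concepts, inclusion, exclusion):
    patient = set(patient_concepts)
    # All inclusion criteria met AND no exclusion criterion present
    return inclusion <= patient and patient.isdisjoint(exclusion)

trial_inclusion = {"prostate-cancer", "metastatic"}     # placeholder codes
trial_exclusion = {"prior-chemotherapy"}

ok = eligible({"prostate-cancer", "metastatic", "hypertension"},
              trial_inclusion, trial_exclusion)
not_ok = eligible({"prostate-cancer", "prior-chemotherapy"},
                  trial_inclusion, trial_exclusion)
```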
Abstract:
A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach is able to analyse all aspects of the dynamics: long-wave and short-wave behaviours, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send and receive information to and from surrounding vehicles. We choose to focus on the design of the coefficients of cooperation, which weight the information from downstream vehicles. The tuning of these coefficients is performed, and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
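A minimal numerical version of such a stability check can be sketched for a generic linearised car-following model; the model below (acceleration = k·(V(s) − v) + l·(v_lead − v), with equilibrium speed-spacing slope Vp = V'(s*)) is an assumption for illustration, not the paper's class of CF models. A platoon is string stable when the speed-perturbation transfer function G(iω) satisfies |G| ≤ 1 at every frequency:

```python
import numpy as np

# |G(iw)| for the assumed linearised CF model, evaluated on a frequency
# grid; its maximum over w decides string stability (<= 1: stable).
def max_amplification(k, l, Vp, omegas=np.linspace(1e-3, 10.0, 5000)):
    s = 1j * omegas
    G = (l * s + k * Vp) / (s**2 + (k + l) * s + k * Vp)
    return np.max(np.abs(G))

# For this model the analytic string-stability condition is k + 2l >= 2*Vp.
stable = max_amplification(k=1.0, l=1.0, Vp=1.0)    # condition satisfied
unstable = max_amplification(k=0.5, l=0.1, Vp=1.0)  # condition violated
```

Sweeping such gains numerically is one crude way to see the kind of coefficient tuning the paper formalises with the dispersion relation root locus.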
Abstract:
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that, as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as such methods must typically run many thousands of simulations.
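For context, the Euler τ-leap baseline the paper compares against can be sketched for the single decay reaction A → B (the Stochastic Bulirsch-Stoer method itself is not reproduced here). Over each leap of length τ, the number of reaction firings is drawn as a Poisson variate with mean a(x)·τ, where a(x) = c·x is the propensity:

```python
import numpy as np

def euler_tau_leap(x0, c, tau, t_end, rng):
    """Euler tau-leap simulation of the decay reaction A -> B."""
    x, t = x0, 0.0
    while t < t_end:
        a = c * x                    # propensity of A -> B
        k = rng.poisson(a * tau)     # firings during this leap
        x = max(x - k, 0)            # update state, never negative
        t += tau
    return x

rng = np.random.default_rng(0)
# Averaged over many runs, the mean of X(t) should track x0*exp(-c*t),
# here roughly 1000*exp(-1) ~ 368 at t = 2 with c = 0.5.
runs = [euler_tau_leap(1000, 0.5, 0.01, 2.0, rng) for _ in range(200)]
mean_final = sum(runs) / len(runs)
```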
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start-up and shut-down of machines, and the amount is so small that it is difficult to measure accurately. Of the various wear measuring techniques that have been used, out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test durations were long, yet the wear quantities achieved were small. To minimise test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method. This method was further refined by enlarging the out-of-roundness traces on a photocopier, and it proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is a simple but crude method. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to bearing wear shows high potential for use as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is a geometrical parameter that changes as wear progresses in circular-shaped components, and it is therefore an effective wear measuring parameter that relates wear to change in geometry. The method of increasing sensitivity by enlarging the out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and the weight loss derived, with higher precision, for materials of known density.
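The weight-loss calculation described above reduces to simple arithmetic: the worn area measured on the enlarged trace is scaled back by the magnification, extruded over the bearing length, and multiplied by the material density. All numbers below, and the assumption of a uniform trace magnification M (so that area scales as M²), are illustrative, not taken from the paper:

```python
# Back-of-envelope weight loss from an enlarged out-of-roundness trace.
def weight_loss_mg(trace_area_mm2, magnification, length_mm, density_g_cm3):
    true_area_mm2 = trace_area_mm2 / magnification**2  # area scales as M^2
    volume_mm3 = true_area_mm2 * length_mm             # extrude over length
    # 1 mm^3 = 1e-3 cm^3; then convert grams to milligrams (*1000)
    return volume_mm3 * 1e-3 * density_g_cm3 * 1000.0

# Illustrative values: 500 mm^2 worn area on a 10x-enlarged trace,
# 20 mm bearing length, copper-like density 8.9 g/cm^3.
loss = weight_loss_mg(trace_area_mm2=500.0, magnification=10.0,
                      length_mm=20.0, density_g_cm3=8.9)
```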
Abstract:
Many protocols have been used for extraction of DNA from Thraustochytrids. These generally involve the use of CTAB, phenol/chloroform and ethanol, and also feature mechanical grinding, sonication, N2 freezing or bead beating. However, the resulting chemical and physical damage to the extracted DNA reduces its quality, and the methods are unsuitable for large numbers of samples. Commercially available DNA extraction kits give better quality and yields but are expensive. Therefore, an optimized DNA extraction protocol suitable for Thraustochytrids was developed, both to minimise expensive and time-consuming steps prior to DNA extraction and to improve the yield. The most effective method was a combination of a single bead in a TissueLyser (Qiagen) and Proteinase K. The results were conclusive: both the quality and the yield of the extracted DNA were higher than with any other method, giving an average yield of 8.5 µg/100 mg biomass.
Abstract:
A novel combined near- and mid-infrared (NIR and MIR) spectroscopic method has been researched and developed for the analysis of complex substances such as the Traditional Chinese Medicine (TCM) Illicium verum Hook. F. (IVHF) and its noxious adulterant, Illicium lanceolatum A.C. Smith (ILACS). Three types of spectral matrix were submitted for classification using the linear discriminant analysis (LDA) method. The data were pretreated with either the successive projections algorithm (SPA) or the discrete wavelet transform (DWT) method. The SPA method performed somewhat better, principally because it required fewer spectral features for its pretreatment model. Thus, the NIR and MIR matrices, as well as the combined NIR/MIR one, were pretreated by the SPA method and then analysed by LDA. This approach enabled the prediction and classification of the IVHF, ILACS and mixed samples. The MIR spectral data produced somewhat better classification rates than the NIR data. However, the best results were obtained from the combined NIR/MIR data matrix, with 95–100% correct classifications for calibration, validation and prediction. Principal component analysis (PCA) of the three types of spectral data supported the results obtained with the LDA classification method.
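As a hedged illustration of the LDA classifier named above, the following hand-rolled two-class LDA runs on synthetic two-feature data that merely stands in for SPA-selected spectral features; it is not the paper's NIR/MIR data:

```python
import numpy as np

# Synthetic two-class, two-feature data (e.g. genuine vs. adulterant).
rng = np.random.default_rng(1)
class0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
class1 = rng.normal([1.5, 1.0], 0.3, size=(50, 2))

mu0, mu1 = class0.mean(axis=0), class1.mean(axis=0)
# Pooled within-class scatter matrix
s_w = np.cov(class0.T) * (len(class0) - 1) + np.cov(class1.T) * (len(class1) - 1)
w = np.linalg.solve(s_w, mu1 - mu0)        # Fisher discriminant direction
threshold = w @ (mu0 + mu1) / 2.0          # midpoint decision boundary

def predict(x):
    return int(w @ x > threshold)          # 0 = class0, 1 = class1

acc = np.mean([predict(x) == 0 for x in class0]
              + [predict(x) == 1 for x in class1])
```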
Abstract:
A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, least squares support vector machines was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated with the use of the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time, as well as in the costs of the special analytical reagents required for the standard individual methods, NIRS offers a very attractive alternative for the simultaneous analysis of mint samples.
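One of the classification methods listed above, K-nearest neighbors, is simple enough to sketch end-to-end; the two-feature synthetic data below merely stands in for spectra grouped by province of origin and is not the paper's data:

```python
import numpy as np

# Minimal K-nearest-neighbours classifier with majority voting.
def knn_predict(train_x, train_y, x, k=3):
    dists = np.linalg.norm(train_x - x, axis=1)   # Euclidean distances
    nearest = train_y[np.argsort(dists)[:k]]      # labels of k closest
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # majority vote

rng = np.random.default_rng(7)
centres = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])  # four "provinces"
train_x = np.vstack([rng.normal(c, 0.4, size=(20, 2)) for c in centres])
train_y = np.repeat(np.arange(4), 20)

# Query near the fourth cluster centre should be assigned label 3.
pred = knn_predict(train_x, train_y, np.array([2.9, 3.1]))
```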
Abstract:
In this paper, we aim to predict protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and a linear kernel based on secondary structure element alignment scores is then constructed for training a support vector machine classifier without parameter adjustment. Our method SSEAKSVM was evaluated on two low-homology data sets, 25PDB and 1189, with sequence homology of 25% and 40%, respectively. The jackknife test is used to compare our method with other existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by other existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar
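One preprocessing step implied above can be sketched directly: collapsing a PSIPRED-style per-residue secondary-structure string (H = helix, E = strand, C = coil) into its run of secondary structure elements, the units over which alignment scores are computed. The kernel construction and SVM training are not reproduced here:

```python
from itertools import groupby

# Collapse a per-residue state string into (state, length) elements.
def to_elements(ss_string):
    return [(state, len(list(run))) for state, run in groupby(ss_string)]

elements = to_elements("CCHHHHHCCEEEEC")
```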
Abstract:
Estimating the use of illicit drugs in the general community is an important task with ramifications for law enforcement agencies as well as health portfolios. Australia has four ongoing drug monitoring systems: the AIC’s DUMA program, the National Drug Strategy Household Survey, the Illicit Drug Reporting System and the Ecstasy and Related Drug Reporting System. The systems vary in their methods, but broadly they rely on self-report data and may be subject to selection biases. The present study employed a completely different method. By chemically analysing sewerage water, the study produced daily estimates of consumption of methamphetamine, MDMA and cocaine. Samples were collected in November 2009 and November 2010 from a municipality in Queensland with a population of over 150,000 people. Estimates were made of the average daily dose and average daily street value per 1,000 people. On the basis of estimated dose and price, the methamphetamine market appeared considerably stronger than either the MDMA or the cocaine market. This paper explains the strengths and weaknesses of wastewater analysis, and considers its potential value in measuring net consumption of illicit drugs and the effectiveness of law enforcement agency strategies.
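The back-calculation behind such wastewater estimates is straightforward arithmetic: a drug residue concentration in sewage is scaled up by daily flow, corrected for the fraction of a dose excreted, and normalised per 1,000 inhabitants. All numbers below are illustrative assumptions, not the study's data:

```python
# Wastewater back-calculation of doses consumed per 1,000 people/day.
def doses_per_1000(conc_ng_per_l, flow_l_per_day, excretion_fraction,
                   mg_per_dose, population):
    residue_mg_per_day = conc_ng_per_l * flow_l_per_day / 1e6  # ng -> mg
    consumed_mg_per_day = residue_mg_per_day / excretion_fraction
    doses_per_day = consumed_mg_per_day / mg_per_dose
    return doses_per_day * 1000.0 / population

# Hypothetical inputs: 200 ng/L residue, 40 ML/day flow, 40% excreted
# unchanged, 30 mg typical dose, 150,000 inhabitants.
estimate = doses_per_1000(conc_ng_per_l=200.0,
                          flow_l_per_day=40_000_000.0,
                          excretion_fraction=0.4,
                          mg_per_dose=30.0,
                          population=150_000)
```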
Abstract:
Background The benefits and safety of transcutaneous bone-anchored prostheses relying on screw fixation are well reported.[1-17] However, most of the studies on press-fit implants and joint replacement technology have focused on surgical techniques.[3, 18-23] One European centre using this technique has reported on health-related quality of life (HRQOL) for a group of individuals with transfemoral amputation (TFA).[3] Data from other centres are needed to assess the effectiveness of the technique in different settings. Aim This study aimed to report HRQOL data at baseline and up to 2-year follow-up for a group of TFAs treated by the Osseointegration Group of Australia, following the Osseointegration Group of Australia Accelerated Protocol (OGAAP), in Sydney between 08/12/2011 and 09/04/2014. Method A total of 16 TFAs (7 females and 9 males; age 51 ± 12 y, height 1.73 ± 0.12 m, weight 83 ± 18 kg) participated in this study. The cause of amputation was trauma or congenital limb deficiency for 11 (69%) and 5 (31%) participants, respectively. A total of 12 (75%) participants were prosthetic users, while 4 (25%) were wheelchair-bound prior to the surgery. HRQOL data were obtained from the Questionnaire for Persons with Transfemoral Amputation (Q-TFA) using the four main scales (i.e., Prosthetic use, Mobility, Problem, Global), one year before and between 6.5 and 24 months after Stage 1 of the surgeries, for the baseline and follow-up, respectively. Results The lapse of time before and after Stage 1 was -6.19 ± 3.54 and 10.83 ± 3.58 months, respectively. The raw scores and percentages of improvement are presented in Figures 1 and 2, respectively. Discussion & Conclusion The average results demonstrated an improvement in each domain, particularly a reduction in problems and an increase in global state. Furthermore, 56%, 75%, 94% and 69% of the participants reported an improvement in the Prosthetic use, Mobility, Problem and Global scales, respectively.
These results were comparable to those of previous studies relying on screw fixation, confirming that press-fit implantation is a viable alternative for bone-anchored prostheses.[1, 7, 8]
Abstract:
Bone and joint diseases are major causes of morbidity and mortality worldwide, and their prevalence is increasing as the average population age increases. Most common musculoskeletal diseases show significant heritability, and few have treatments that prevent disease or can induce true treatment-free, disease-free remission. Furthermore, despite valiant efforts of hypothesis-driven research, our understanding of the etiopathogenesis of these conditions is, with few exceptions, at best moderate. Therefore, there has been a long-standing interest in genetics research in musculoskeletal disease as a hypothesis-free method for investigating disease etiopathogenesis. Important contributions have been made through the identification of monogenic causes of disease, but the holy grail of human genetics research has been the identification of the genes responsible for common diseases. The development of genome-wide association (GWA) studies has revolutionized this field, and led to an explosion in the number of genes identified that are definitely involved in musculoskeletal disease pathogenesis. However, this approach will not identify all common disease genes, and although the current progress is exciting and proves the potential of this research discipline, other approaches will be required to identify many of the types of genetic variation likely to be involved.