653 results for Acoustic method
Abstract:
Objective: To illustrate a new method for simplifying patient recruitment for advanced prostate cancer clinical trials using natural language processing techniques. Background: The identification of eligible participants for clinical trials is a critical factor in increasing patient recruitment rates and an important issue for the discovery of new treatment interventions. The current practice of identifying eligible participants is highly constrained by the manual processing of disparate sources of unstructured patient data. Informatics-based approaches can simplify the complex task of evaluating a patient's eligibility for clinical trials. We show that an ontology-based approach can address the challenge of matching patients to suitable clinical trials. Methods: The free-text descriptions of clinical trial criteria as well as patient data were analysed. A set of common inclusion and exclusion criteria was identified through consultations with expert clinical trial coordinators. A research prototype was developed using the Unstructured Information Management Architecture (UIMA) that identified SNOMED CT concepts in the patient data and clinical trial descriptions. The SNOMED CT concepts model standard clinical terminology that can be used to represent and evaluate a patient's inclusion/exclusion criteria for a clinical trial. Results: Our experimental research prototype implements a semi-automated method for filtering patient records using common clinical trial criteria, which simplified the patient recruitment process. Discussions with clinical trial coordinators indicated that the efficiency of the patient recruitment process, measured in terms of information processing time, could be improved by 25%. Conclusion: A UIMA-based approach can resolve complexities in patient recruitment for advanced prostate cancer clinical trials.
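The core matching step described above can be reduced to set operations over SNOMED CT concept identifiers. The sketch below is an illustrative simplification, not the paper's actual UIMA pipeline: the helper name, the matching rule (all inclusion concepts present, no exclusion concept present) and the placeholder concept IDs are assumptions.

```python
def is_eligible(patient_concepts, inclusion, exclusion):
    """Hypothetical eligibility check over SNOMED CT concept IDs:
    every inclusion concept must appear in the patient record and
    no exclusion concept may appear."""
    patient = set(patient_concepts)
    return set(inclusion) <= patient and not (set(exclusion) & patient)

# Illustrative placeholder concept IDs (not real SNOMED CT codes)
patient = {"C-PROSTATE-CA", "C-METASTASIS"}
eligible = is_eligible(patient, ["C-PROSTATE-CA"], ["C-RENAL-FAILURE"])
```

In practice the hard part, which the abstract attributes to UIMA annotators, is populating `patient_concepts` from free text; the set logic itself is the easy final step.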
Abstract:
A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach can analyse all aspects of the dynamics: long-wave and short-wave behaviours, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The tuning of these coefficients is performed, and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
Abstract:
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is best suited for are those with increased populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, since many thousands of simulations must typically be run.
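The abstract uses Gillespie's stochastic simulation algorithm (SSA) as the exact but slow baseline. For orientation, here is a minimal SSA for the single decay reaction X → ∅; this is the baseline method only, not the Stochastic Bulirsch-Stoer solver itself, and the function name and parameters are illustrative.

```python
import random

def gillespie_decay(x0, k, t_end, rng):
    """Gillespie SSA for the decay reaction X -> 0 with rate constant k:
    draw an exponential waiting time from the current total propensity,
    then fire one reaction, until t_end is reached or X is exhausted."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(k * x)   # waiting time to the next event
        if t >= t_end:
            break
        x -= 1                        # fire the single decay reaction
    return x

rng = random.Random(0)
# Averaging many runs should approach the ODE solution x0 * exp(-k * t)
mean = sum(gillespie_decay(100, 1.0, 1.0, rng) for _ in range(2000)) / 2000
```

Because the SSA simulates every individual reaction event, its cost grows with the population size, which is exactly the regime where the abstract argues a higher-order leaping method pays off.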
Abstract:
Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during the start and stop of machines, and this amount is so small that it is difficult to measure accurately. Of the various wear measuring techniques that have been used, out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and it proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The tests were of long duration, yet the wear quantities achieved were very small. To minimise the test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method, and it was further refined by enlarging the out-of-roundness traces on a photocopier, yielding a reliable and inexpensive procedure. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated in a repeatable and reliable manner.
Research limitations/implications – This research is basic in nature: a rudimentary solution has been developed for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is a simple but crude method. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. This method can be very useful for reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings shows high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurements in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve adequate wear quantities. Out-of-roundness is one of the geometrical parameters that changes with the progression of wear in circular components; thus, it is an effective wear measuring parameter that relates to change in geometry. The method of increasing the sensitivity and enlarging the out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and weight loss can be derived, for materials of known density, with higher precision.
Abstract:
A novel combined near- and mid-infrared (NIR and MIR) spectroscopic method has been researched and developed for the analysis of complex substances such as the Traditional Chinese Medicine (TCM) Illicium verum Hook. f. (IVHF) and its noxious adulterant, Illicium lanceolatum A.C. Smith (ILACS). Three types of spectral matrix were submitted for classification with the linear discriminant analysis (LDA) method. The data were pretreated with either the successive projections algorithm (SPA) or the discrete wavelet transform (DWT) method. The SPA method performed somewhat better, principally because it required fewer spectral features for its pretreatment model. Thus, the NIR and MIR matrices, as well as the combined NIR/MIR matrix, were pretreated by the SPA method and then analysed by LDA. This approach enabled the prediction and classification of the IVHF, ILACS and mixed samples. The MIR spectral data produced somewhat better classification rates than the NIR data. However, the best results were obtained from the combined NIR/MIR data matrix, with 95–100% correct classifications for calibration, validation and prediction. Principal component analysis (PCA) of the three types of spectral data supported the results obtained with the LDA classification method.
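The data-fusion idea above, concatenating the NIR and MIR blocks column-wise and classifying the combined matrix, can be sketched as follows. This is a hedged simplification: the simulated spectra, the two-class case and the minimal Fisher linear discriminant below stand in for the study's real spectra, SPA/DWT pretreatment and multi-class LDA.

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher linear discriminant: solve Sw w = (m1 - m0)
    and threshold projections at the midpoint of the class means."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    thresh = w @ (m0 + m1) / 2
    return w, thresh

def fisher_lda_predict(X, w, thresh):
    return (X @ w > thresh).astype(int)

# Simulated "fused" matrix: NIR and MIR blocks concatenated column-wise
rng = np.random.default_rng(0)
nir = rng.normal(0, 1, (40, 5))
mir = rng.normal(0, 1, (40, 4))
X = np.hstack([nir, mir])                 # low-level data fusion
y = np.array([0] * 20 + [1] * 20)
X[y == 1] += 1.5                          # separate the two classes
w, t = fisher_lda_fit(X, y)
acc = (fisher_lda_predict(X, w, t) == y).mean()
```

The small ridge term added to the within-class scatter keeps the solve stable when fused spectral blocks make the matrix ill-conditioned, which is one practical reason the paper reduces features with SPA first.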
Abstract:
A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, the least squares support vector machine was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated by the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time as well as in the costs of the special analytical reagents required for the standard individual methods, NIRS offers a very attractive alternative for the simultaneous analysis of mint samples.
Abstract:
In this paper, we aim to predict protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and then a linear kernel based on secondary structure element alignment scores is constructed for training a support vector machine classifier without parameter tuning. Our method SSEAKSVM was evaluated on two low-homology data sets, 25PDB and 1189, with sequence homology of 25% and 40%, respectively. The jackknife test is used to test and compare our method with other existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by other existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar.
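A linear kernel built from alignment-score features, as described above, is symmetric and positive semi-definite by construction, which is why it can be handed to any standard SVM without kernel-parameter tuning. The sketch below shows only that construction; the score matrix is simulated, not real secondary-structure-element alignment output.

```python
import numpy as np

def ssea_linear_kernel(S):
    """Linear kernel K = S S^T, where S[i, j] is taken to be the
    secondary-structure-element alignment score of sequence i against
    reference sequence j (scores here are simulated placeholders)."""
    S = np.asarray(S, dtype=float)
    return S @ S.T

rng = np.random.default_rng(1)
S = rng.uniform(0, 1, (6, 10))        # 6 sequences scored vs 10 references
K = ssea_linear_kernel(S)             # 6 x 6 Gram matrix for the SVM
```

Symmetry and non-negative eigenvalues of `K` are exactly Mercer's conditions for a valid SVM kernel, so no kernel hyperparameter (unlike an RBF width) needs to be chosen.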
Abstract:
Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings by calculating acoustic indices. These are statistics which describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images. These not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour ‘acoustic-fingerprint’ which shows some preliminary promise.
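The false-colour idea above, three per-frequency-bin indices mapped to the R, G and B channels, can be sketched as follows. The three indices used here (mean energy, temporal variance, active fraction) are illustrative stand-ins chosen for brevity, not the specific acoustic indices used in the paper.

```python
import numpy as np

def false_colour(spec, block=10):
    """Hedged sketch of a false-colour spectrogram image: split a
    magnitude spectrogram (freq x time) into time blocks, compute three
    simple per-bin indices per block, and stack them as R, G, B."""
    f, t = spec.shape
    n = t // block
    img = np.zeros((f, n, 3))
    for b in range(n):
        seg = spec[:, b * block:(b + 1) * block]
        img[:, b, 0] = seg.mean(axis=1)                  # energy
        img[:, b, 1] = seg.var(axis=1)                   # variability
        img[:, b, 2] = (seg > spec.mean()).mean(axis=1)  # active fraction
    for c in range(3):                                   # normalise channels
        ch = img[:, :, c]
        img[:, :, c] = (ch - ch.min()) / (np.ptp(ch) + 1e-12)
    return img

spec = np.abs(np.random.default_rng(2).normal(size=(64, 100)))
img = false_colour(spec)   # one RGB column per time block
```

Because each channel is normalised independently, structure that is weak in one index can still dominate one colour channel, which is what makes the combined image useful for visual navigation of long recordings.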
A novel human leucocyte antigen-DRB1 genotyping method based on multiplex primer extension reactions
Abstract:
We have developed and validated a semi-automated fluorescent method of genotyping human leucocyte antigen (HLA)-DRB1 alleles, HLA-DRB1*01-16, by multiplex primer extension reactions. This method is based on the extension of a primer that anneals immediately adjacent to the single-nucleotide polymorphism with fluorescent dideoxynucleotide triphosphates (minisequencing), followed by analysis on an ABI Prism 3700 capillary electrophoresis instrument. The validity of the method was confirmed by genotyping 261 individuals using both this method and polymerase chain reaction with sequence-specific primer (PCR-SSP) or sequencing and by demonstrating Mendelian inheritance of HLA-DRB1 alleles in families. Our method provides a rapid means of performing high-throughput HLA-DRB1 genotyping using only two PCR reactions followed by four multiplex primer extension reactions and PCR-SSP for some allele groups. In this article, we describe the method and discuss its advantages and limitations.
Abstract:
Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h. At this ultra-high speed, rotors currently in use generally have small volume capacity. Therefore, preparations of large volumes of high-titre vectors are time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% of PLL (w/v) was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4 °C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vectors formed complexes, which probably facilitated precipitation at low-speed centrifugation (10 000 g), a speed which does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals and human gene therapy trials, in which large amounts of lentiviral vectors are a prerequisite.
Abstract:
Frog species have been declining worldwide at unprecedented rates over the past decades. There are many reasons for this decline, including pollution, habitat loss, and invasive species [1]. To preserve, protect, and restore frog biodiversity, it is important to monitor and assess frog species. In this paper, a novel method using image processing techniques for analysing Australian frog vocalisations is proposed. An FFT is applied to the audio data to produce a spectrogram. Acoustic events are then detected and isolated into corresponding segments through image processing techniques applied to the spectrogram. For each segment, spectral peak tracks are extracted with selected seeds, and a region-growing technique is utilised to obtain the contour of each frog vocalisation. Based on the spectral peak tracks and the contour of each frog vocalisation, six feature sets are extracted. Principal component analysis reduces each feature set to six principal components, which are tested for classification performance with a k-nearest neighbour classifier. This experiment tests the proposed classification method on fourteen frog species which are geographically well distributed throughout Queensland, Australia. The experimental results show that the best average classification accuracy for the fourteen frog species reaches 87%.
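The final two stages of the pipeline above, PCA reduction to six components followed by k-nearest-neighbour classification, can be sketched as below. The feature vectors are simulated placeholders standing in for the paper's spectral-peak-track and contour features.

```python
import numpy as np

def pca_project(X, k=6):
    """Project feature rows onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def knn_predict(train, labels, test, k=3):
    """k-nearest-neighbour classification by Euclidean distance,
    using a majority vote over the k closest training rows."""
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    preds = []
    for row in np.argsort(d, axis=1)[:, :k]:
        vals, counts = np.unique(labels[row], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Two simulated "species" clusters in an 8-dimensional feature space
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(4, 1, (30, 8))])
y = np.array([0] * 30 + [1] * 30)
Z = pca_project(X, k=6)
acc = (knn_predict(Z, y, Z, k=3) == y).mean()
```

For a fair estimate of the 87% figure reported above, the prediction set would of course be held out rather than reusing the training rows as this toy example does.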
Abstract:
Frogs have received increasing attention due to their effectiveness as indicators of environmental change. It is therefore important to monitor and assess frog populations. With the development of sensor techniques, large volumes of audio data (including frog calls) have been collected and need to be analysed. After transforming the audio data into a spectrogram representation using the short-time Fourier transform, visual inspection of this representation motivates the use of image processing techniques for analysing audio data. Applying an acoustic event detection (AED) method to the spectrograms, acoustic events are first detected, and ridges are then extracted from them. Three feature sets, Mel-frequency cepstral coefficients (MFCCs), the AED feature set and the ridge feature set, are then used for frog call classification with a support vector machine classifier. Fifteen frog species widely distributed across Queensland, Australia, are selected to evaluate the proposed method. The experimental results show that the ridge feature set achieves an average classification accuracy of 74.73%, which outperforms the MFCCs (38.99%) and the AED feature set (67.78%).
Abstract:
Over the past few decades, frog species have been experiencing a dramatic decline around the world. The reasons for this decline include habitat loss, invasive species, and climate change. To better understand the status of frog species, classifying frogs has become increasingly important. In this study, acoustic features are investigated for the multi-level classification of Australian frogs at family, genus and species level, covering three families, eleven genera and eighty-five species collected from Queensland, Australia. For each frog species, six instances are selected, from which ten acoustic features are calculated. The multicollinearity between the ten features is then studied in order to select non-correlated features for subsequent analysis. A decision tree (DT) classifier is used to determine, visually and explicitly, which acoustic features are relatively important for classifying family, genus and species respectively. Finally, a weighted support vector machine (SVM) classifier is used for the multi-level classification with the three most important acoustic features at each level. Our experimental results indicate that using different acoustic feature sets can successfully classify frogs at different levels, with average classification accuracies of up to 85.6%, 86.1% and 56.2% for family, genus and species respectively.
Abstract:
Many websites now allow users to rate the quality of items based on their opinion. These ratings are later used to produce item reputation scores. The majority of websites apply the mean method to aggregate user ratings; this method is very simple but is not considered an accurate aggregator. Many methods have been proposed to make aggregators produce more accurate reputation scores. In the majority of proposed methods, the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available, and in such cases these methods fall back on the mean method or other simple alternatives. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings only. Our proposed model exploits previously disregarded statistical properties of a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model exhibits performance superior to that of current state-of-the-art models.
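One plausible reading of the Beta-weighted aggregation described above is sketched below: fit a Beta distribution to the normalised ratings by the method of moments, weight each rating by its Beta density, and take the weighted mean. The exact weighting scheme in the paper may differ; this is a hypothetical illustration of the general idea, not the authors' model.

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) probability density via the gamma function."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

def reputation(ratings, lo=1, hi=5):
    """Hypothetical Beta-weighted mean: ratings that fall in dense
    regions of the fitted Beta distribution get larger weights, so
    isolated outlier ratings are down-weighted."""
    xs = [(r - lo + 0.5) / (hi - lo + 1) for r in ratings]  # map into (0, 1)
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    if v == 0:
        return sum(ratings) / len(ratings)                  # all identical
    common = m * (1 - m) / v - 1
    if common <= 0:
        return sum(ratings) / len(ratings)                  # Beta fit invalid
    a, b = m * common, (1 - m) * common                     # method of moments
    w = [beta_pdf(x, a, b) for x in xs]
    return sum(wi * r for wi, r in zip(w, ratings)) / sum(w)

rep = reputation([4, 5, 4, 5, 3, 4, 1])   # lone "1" gets a small weight
```

Because all weights are positive, the result always stays within the rating scale, while a lone outlier pulls the score less than it would under the plain mean.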
Abstract:
This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be the weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, with a parameter α built into H-SFEM. By adjusting α, upper and lower bound solutions in the strain energy norm and eigenfrequencies can always be obtained. The optimised α value in 3D H-SFEM using a tetrahedral mesh possesses a close-to-exact stiffness of the continuous system and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in both linear and nonlinear problems. A novel domain-based selective scheme is proposed, leading to a combined selective H-SFEM model that is immune to volumetric locking and hence works well for nearly incompressible materials. The proposed 3D H-SFEM is an innovative numerical method with distinct features and great potential for successful application to solid mechanics problems.
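The central weighted-average construction of the abstract, ε = α·ε_FEM + (1 − α)·ε_NS, is simple enough to state directly in code. The strain arrays below are illustrative values at sample points; computing the actual compatible and node-based smoothed strain fields is the finite-element machinery the paper develops.

```python
import numpy as np

def hybrid_strain(eps_fem, eps_ns, alpha):
    """H-SFEM strain as the weighted average of the compatible FEM
    strain (eps_fem) and the node-based smoothed strain (eps_ns),
    per the abstract's definition, with weight alpha in [0, 1]."""
    assert 0.0 <= alpha <= 1.0
    eps_fem = np.asarray(eps_fem, dtype=float)
    eps_ns = np.asarray(eps_ns, dtype=float)
    return alpha * eps_fem + (1.0 - alpha) * eps_ns

eps_fem = np.array([1.0, 0.2, 0.0])   # hypothetical compatible strain
eps_ns = np.array([0.8, 0.3, 0.1])    # hypothetical smoothed strain
mix = hybrid_strain(eps_fem, eps_ns, 0.5)
```

The limiting cases make the bounding behaviour concrete: α = 1 recovers the pure FEM strain (overly stiff model) and α = 0 recovers the node-based smoothed strain (overly soft model), so sweeping α between them brackets the exact solution, as the abstract states.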