978 results for variational cumulant expansion method


Relevance: 20.00%

Abstract:

Law is saturated with stories. People tell their stories to lawyers; lawyers tell their clients' stories to courts; and legislators develop regulation to respond to their constituents' stories of injustice or inequality. My approach to first-year legal education respects this narrative tradition. Both my curriculum design and assessment scheme in the compulsory first-year subject Australian Legal System deploy narrative methodology as the central teaching and learning device. Throughout the course, students work on resolving the problems of four hypothetical clients. Like a murder mystery, pieces of the puzzle come together as students learn more about legal institutions and the texts they produce, the process of legal research, the analysis and interpretation of primary legal sources, the steps in legal problem-solving, the genre conventions of legal writing style, the practical skills and ethical dimensions of professional practice, and critical inquiry into the normative underpinnings and impacts of the law. The assessment scheme mirrors this design. In their portfolio-based assignment, for example, students devise their own client profile, research the client's legal position and prepare a memorandum of advice.

Relevance: 20.00%

Abstract:

The ratcheting behaviour of high-strength rail steel (Australian Standard AS1085.1) is studied in this work for the purpose of predicting wear and damage to the rail surface. Historically, researchers have used circular test coupons obtained from the rail head to conduct cyclic load tests, but according to hardness profile data, considerable variation exists across the rail head section. For example, the induction-hardened rail (AS1085.1) shows high hardness (400-430 HV100) up to four millimetres into the rail head’s surface, but hardness then drops considerably beyond that depth. Given that cyclic test coupons five millimetres in diameter at the gauge area are usually taken from the rail sample, there is a high probability that the original surface properties of the rail do not apply across the entire test coupon and, therefore, data representing only average material properties are obtained. In the literature, disks (47 mm in diameter) for a twin-disk rolling contact test machine have been obtained directly from the rail sample and used to validate rolling contact fatigue wear models. The question arises: how accurate are such predictions? In this research paper, the effect of rail sampling position on the ratcheting behaviour of AS1085.1 rail steel was investigated using rectangular specimens. Uniaxial stress-controlled tests were conducted with samples obtained at four different depths to observe the ratcheting behaviour of each. Micro-hardness measurements of the test coupons were carried out to obtain a constitutive relationship to predict the effect of depth on the ratcheting behaviour of the rail material. This work ultimately assists the selection of valid material parameters for constitutive models in the study of rail surface ratcheting.

Relevance: 20.00%

Abstract:

Purified proteins are mandatory for molecular, immunological and cellular studies. However, purification of proteins from complex mixtures requires specialised chromatography methods (i.e., gel filtration, ion exchange, etc.) using fast protein liquid chromatography (FPLC) or high-performance liquid chromatography (HPLC) systems. Such systems are expensive, certain proteins require two or more different steps to reach sufficient purity, and recovery is generally low. The aim of this study was to develop a rapid, inexpensive and efficient gel-electrophoresis-based protein purification method using basic and readily available laboratory equipment. We used crude rye grass pollen extract to purify the major allergens Lol p 1 and Lol p 5 as the model protein candidates. Total proteins were resolved on a large primary gel, and Coomassie Brilliant Blue (CBB)-stained Lol p 1/5 allergens were excised and purified on a secondary "mini"-gel. Purified proteins were extracted from unstained separating gels and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblot analyses. Silver-stained SDS-PAGE gels resolved pure proteins (i.e., 875 μg of Lol p 1 recovered from 8 mg of crude starting material), while immunoblot analysis confirmed the immunological reactivity of the purified proteins. Such a purification method is rapid, inexpensive, and efficient in generating proteins of sufficient purity for use in monoclonal antibody (mAb) production, protein sequencing and general molecular, immunological, and cellular studies.

Relevance: 20.00%

Abstract:

Background Genetic testing is recommended when the probability of a disease-associated germline mutation exceeds 10%. Germline mutations are found in approximately 25% of individuals with phaeochromocytoma (PCC) or paraganglioma (PGL); however, genetic heterogeneity in PCC/PGL means that many genes may require sequencing. A phenotype-directed iterative approach may limit costs but may also delay diagnosis, and will not detect mutations in genes not previously associated with PCC/PGL. Objective To assess whether whole exome sequencing (WES) was efficient and sensitive for mutation detection in PCC/PGL. Methods Whole exome sequencing was performed on blinded samples from eleven individuals with PCC/PGL and known mutations. Illumina TruSeq™ (Illumina Inc, San Diego, CA, USA) was used for exome capture of seven samples, and NimbleGen SeqCap EZ v3.0 (Roche NimbleGen Inc, Basel, Switzerland) for five samples (one sample was repeated). Massively parallel sequencing was performed on multiplexed samples. Sequencing data were called using the Genome Analysis Toolkit and annotated using ANNOVAR. Data were assessed for coding variants in RET, NF1, VHL, SDHD, SDHB, SDHC, SDHA, SDHAF2, KIF1B, TMEM127, EGLN1 and MAX. Target capture of five exome capture platforms was compared. Results Six of seven mutations were detected using Illumina TruSeq™ exome capture. All five mutations were detected using the NimbleGen SeqCap EZ v3.0 platform, including the mutation missed by Illumina TruSeq™ capture. Target capture for exons in known PCC/PGL genes differs substantially between platforms. Exome sequencing was inexpensive (<$A800 per sample for reagents) and rapid (results <5 weeks from sample reception). Conclusion Whole exome sequencing is sensitive, rapid and efficient for detection of PCC/PGL germline mutations. However, capture platform selection is critical to maximize sensitivity.
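
As a rough illustration of the final filtering step described above, the sketch below selects coding variants in the listed PCC/PGL genes from an ANNOVAR-style annotation table. It is not the authors' pipeline; the column names and file name are assumptions based on typical ANNOVAR output and may differ in practice.

```python
# Minimal sketch (not the authors' pipeline): filter an ANNOVAR-style
# annotated variant table for coding variants in known PCC/PGL genes.
# Column names ("Gene.refGene", "ExonicFunc.refGene") are assumptions.
import csv

PCC_PGL_GENES = {"RET", "NF1", "VHL", "SDHD", "SDHB", "SDHC", "SDHA",
                 "SDHAF2", "KIF1B", "TMEM127", "EGLN1", "MAX"}

def coding_panel_variants(annovar_tsv):
    """Yield rows annotated as coding variants in the PCC/PGL gene panel."""
    with open(annovar_tsv, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            gene = row.get("Gene.refGene", "")
            func = row.get("ExonicFunc.refGene", "")
            if gene in PCC_PGL_GENES and func not in ("", ".", "synonymous SNV"):
                yield row

# Example usage (hypothetical file name):
# for variant in coding_panel_variants("sample01.hg19_multianno.txt"):
#     print(variant["Gene.refGene"], variant["ExonicFunc.refGene"])
```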

Relevance: 20.00%

Abstract:

The Disney method is a collaborative creativity technique that uses three roles - dreamer, realist and critic - to facilitate the consideration of different perspectives on a topic. Guidance in applying the method is especially important for novices. One way of providing it is to give groups a trained moderator; however, feedback about the group's behavior might interrupt the flow of the idea-finding process. We built and evaluated a system that provides ambient feedback to a group about the distribution of their statements among the three roles. Our preliminary field study indicates that groups supported by the system contribute more and use the roles in a more balanced way, while the visualization does not disrupt the group work.
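
As a rough sketch of the kind of signal such ambient feedback could be driven by, the snippet below computes per-role shares and a normalized-entropy balance score from role-tagged statements. This is illustrative only and not the authors' system; the counting scheme is an assumption.

```python
# Illustrative sketch only: one way to quantify how evenly a group's statements
# are spread across the Disney-method roles, e.g. to drive an ambient display.
from collections import Counter
from math import log

ROLES = ("dreamer", "realist", "critic")

def role_balance(statement_roles):
    """Return per-role shares and a 0..1 balance score (normalized entropy)."""
    counts = Counter(r for r in statement_roles if r in ROLES)
    total = sum(counts.values())
    if total == 0:
        return {r: 0.0 for r in ROLES}, 0.0
    shares = {r: counts[r] / total for r in ROLES}
    entropy = -sum(p * log(p) for p in shares.values() if p > 0)
    return shares, entropy / log(len(ROLES))

# Example: a critic-heavy discussion yields a balance score well below 1.0.
shares, balance = role_balance(["critic", "critic", "realist", "critic"])
print(shares, balance)
```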

Relevance: 20.00%

Abstract:

The sheep (Ovis aries) is favored by many musculoskeletal tissue engineering groups as a large animal model because of its docile temperament and ease of husbandry. The size and weight of sheep are comparable to humans, which allows for the use of implants and fixation devices used in human clinical practice. The construction of a complementary DNA (cDNA) library can capture the expression of genes in both a tissue- and time-specific manner. cDNA libraries have been a consistent source of gene discovery ever since the technology became commonplace more than three decades ago. Here, we describe the construction of a cDNA library using cells derived from sheep bones based on the pBluescript cDNA kit. Thirty clones were picked at random and sequenced. This led to the identification of a novel gene, C12orf29, which our initial experiments indicate is involved in skeletal biology. We also describe a polymerase chain reaction-based cDNA clone isolation method that allows the isolation of genes of interest from a cDNA library pool. The techniques outlined here can be applied in-house by smaller tissue engineering groups to generate tools for biomolecular research for large preclinical animal studies, and they highlight the power of standard cDNA library protocols to uncover novel genes.

Relevance: 20.00%

Abstract:

Membrane proteins play important roles in many biochemical processes and are also attractive targets of drug discovery for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently, we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein structures, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall prediction accuracies for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
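
The sketch below illustrates the general idea under stated assumptions: a simplified pseudo-amino acid composition-style feature vector fed to a multi-class SVM. It is not the authors' two-stage system or feature selection process; the hydrophobicity scale, λ, weight and toy data are placeholders.

```python
# Minimal sketch, not the authors' implementation: a simplified pseudo-amino
# acid composition (PseAAC)-style feature vector fed to a multi-class SVM.
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
# Toy hydrophobicity-like scale (placeholder values, not a published scale).
HYDRO = {a: i / 19.0 for i, a in enumerate(AA)}

def pseaac(seq, lam=3, w=0.05):
    """Amino acid frequencies plus lam sequence-order correlation terms."""
    seq = [a for a in seq if a in AA]
    freqs = np.array([seq.count(a) for a in AA], dtype=float) / max(len(seq), 1)
    theta = [np.mean([(HYDRO[seq[i]] - HYDRO[seq[i + k]]) ** 2
                      for i in range(len(seq) - k)]) if len(seq) > k else 0.0
             for k in range(1, lam + 1)]
    vec = np.concatenate([freqs, w * np.array(theta)])
    return vec / max(vec.sum(), 1e-9)

# Toy training data: sequences labelled with membrane protein types (0..4).
X = np.array([pseaac(s) for s in ["MKTAYIAKQR", "GAVLILVVLL", "DDEEKKRRHH",
                                  "LLLLVVVVAA", "STSTNQNQGG", "MKKLLVVAAG"]])
y = np.array([0, 1, 2, 1, 3, 4])
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
print(clf.predict([pseaac("GAVLILVVLM")]))
```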

Relevance: 20.00%

Abstract:

Objective: To illustrate a new method for simplifying patient recruitment for advanced prostate cancer clinical trials using natural language processing techniques. Background: The identification of eligible participants for clinical trials is a critical factor in increasing patient recruitment rates and an important issue for the discovery of new treatment interventions. The current practice of identifying eligible participants is highly constrained by the manual processing of disparate sources of unstructured patient data. Informatics-based approaches can simplify the complex task of evaluating patients' eligibility for clinical trials. We show that an ontology-based approach can address the challenge of matching patients to suitable clinical trials. Methods: The free-text descriptions of clinical trial criteria as well as patient data were analysed. A set of common inclusion and exclusion criteria was identified through consultations with expert clinical trial coordinators. A research prototype was developed using the Unstructured Information Management Architecture (UIMA) that identified SNOMED CT concepts in the patient data and clinical trial descriptions. The SNOMED CT concepts model the standard clinical terminology that can be used to represent and evaluate a patient's inclusion/exclusion criteria for a clinical trial. Results: Our experimental research prototype implements a semi-automated method for filtering patient records using common clinical trial criteria, and it simplified the patient recruitment process. Discussions with clinical trial coordinators indicated that the efficiency of the patient recruitment process, measured in terms of information processing time, could be improved by 25%. Conclusion: A UIMA-based approach can resolve complexities in patient recruitment for advanced prostate cancer clinical trials.
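
Once concepts have been extracted, the eligibility check itself reduces to set operations over concept codes. The sketch below shows only that matching step, not the UIMA/SNOMED CT pipeline described above; the codes and trial structure are hypothetical placeholders.

```python
# Simplified sketch of the matching step only: once patient records and trial
# criteria are mapped to concept codes, screening reduces to set operations.
# The concept codes below are placeholders for illustration only.

def eligible(patient_concepts, inclusion, exclusion):
    """Patient passes if all inclusion concepts are present and no exclusion is."""
    return inclusion <= patient_concepts and not (exclusion & patient_concepts)

trial = {
    "inclusion": {"399068003"},   # placeholder code for the target condition
    "exclusion": {"271737000"},   # placeholder code for a disqualifying comorbidity
}
patient = {"399068003", "38341003"}

print(eligible(patient, trial["inclusion"], trial["exclusion"]))  # True
```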

Relevance: 20.00%

Abstract:

A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach is able to analyse all aspects of the dynamics: long-wave and short-wave behaviour, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The tuning of these coefficients is performed, and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
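
For orientation, the sketch below applies the dispersion-relation idea to a generic linearized car-following model on a ring, scanning wavenumbers and reporting the largest growth rate. The model form and parameter values are illustrative assumptions, not the specific CF models or cooperative coefficients studied in the paper.

```python
# Minimal sketch of a dispersion-relation root-locus check for a generic
# linearized car-following model. Perturbations y_n ~ exp(lambda*t - i*theta*n)
# on a ring give the quadratic
#   lambda^2 - (f_v + f_dv*(e^{i*theta} - 1))*lambda - f_s*(e^{i*theta} - 1) = 0.
# Parameter values below are illustrative assumptions.
import numpy as np

def max_growth_rate(f_s, f_dv, f_v, n_theta=400):
    """Largest Re(lambda) over wavenumbers theta in (0, pi]; <= 0 means stable."""
    worst = -np.inf
    for theta in np.linspace(1e-3, np.pi, n_theta):
        g = np.exp(1j * theta) - 1.0
        roots = np.roots([1.0, -(f_v + f_dv * g), -f_s * g])
        worst = max(worst, roots.real.max())
    return worst

# Example: sensitivity to spacing f_s, to relative speed f_dv, speed damping f_v < 0.
print(max_growth_rate(f_s=0.5, f_dv=0.3, f_v=-1.0))
```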

Relevance: 20.00%

Abstract:

Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach of high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that, as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited to are those with larger populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm; for such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as many thousands of simulations must typically be run.
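
For orientation, the sketch below implements the basic Euler τ-leap scheme used as a baseline in the comparison above, not the Stochastic Bulirsch-Stoer method itself; the toy reaction network, rates, and step size are illustrative assumptions.

```python
# Minimal sketch of the basic Euler tau-leap scheme (a baseline method named
# above). Each step fires Poisson-distributed numbers of each reaction.
import numpy as np

rng = np.random.default_rng(0)

def euler_tau_leap(x0, stoich, propensities, tau, n_steps):
    """x_{t+tau} = x_t + sum_j nu_j * Poisson(a_j(x_t) * tau)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        a = np.array([prop(x) for prop in propensities])
        firings = rng.poisson(np.clip(a, 0, None) * tau)
        x = np.maximum(x + stoich.T @ firings, 0)  # crude guard against negatives
    return x

# Toy birth-death example: 0 -> S (rate 10), S -> 0 (rate 0.1 * S).
stoich = np.array([[1], [-1]])               # one row per reaction
props = [lambda x: 10.0, lambda x: 0.1 * x[0]]
print(euler_tau_leap([50], stoich, props, tau=0.05, n_steps=200))
```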

Relevance: 20.00%

Abstract:

Purpose – Ideally, there is no wear in the hydrodynamic lubrication regime. A small amount of wear occurs during start-up and shut-down of machines, and the amount is so small that it is difficult to measure accurately. Various wear measuring techniques have been used, of which out-of-roundness was found to be the most reliable for measuring small wear quantities in journal bearings. This technique was further developed to achieve higher accuracy in measuring small wear quantities, and it proved to be reliable as well as inexpensive. The paper aims to discuss these issues. Design/methodology/approach – In an experimental study, the effect of antiwear additives was studied on journal bearings lubricated with oil containing solid contaminants. The test durations were long and the wear quantities achieved were small. To minimise test duration, short tests of about 90 min were conducted, and wear was measured by recording changes in a variety of parameters related to weight, geometry and wear debris. Out-of-roundness was found to be the most effective method, and it was further refined by enlarging the out-of-roundness traces on a photocopier. The method proved to be reliable and inexpensive. Findings – The study revealed that the most commonly used wear measurement techniques, such as weight loss, roughness changes and change in particle count, were not adequate for measuring small wear quantities in journal bearings. The out-of-roundness method, with some refinements, was found to be one of the most reliable methods for measuring small wear quantities in journal bearings working in the hydrodynamic lubrication regime. By enlarging the out-of-roundness traces and determining the worn area of the bearing cross-section, weight loss in bearings was calculated, which was repeatable and reliable. Research limitations/implications – This research is basic in nature, providing a rudimentary solution for measuring small wear quantities in rotary devices such as journal bearings. The method requires enlarging traces on a photocopier and determining the shape of the worn area on an out-of-roundness trace on a transparency, which is simple but crude. An automated procedure may be required to determine the weight loss from the out-of-roundness traces directly. The method can be very useful in reducing test duration and measuring wear quantities with higher precision in situations where wear quantities are very small. Practical implications – This research provides a reliable method of measuring wear of circular geometry. The Talyrond equipment used for measuring the change in out-of-roundness due to wear of bearings shows high potential to be used as a wear measuring device as well. Measurement of weight loss from the traces is an enhanced capability of this equipment, and this research may lead to the development of a modified version of Talyrond-type equipment for wear measurement in circular machine components. Originality/value – Wear measurement in hydrodynamic bearings requires long-duration tests to achieve measurable wear quantities. Out-of-roundness is a geometrical parameter that changes as wear progresses in circular components, and it is therefore an effective wear measuring parameter that relates directly to the change in geometry. The method of increasing sensitivity by enlarging the out-of-roundness traces is original work, through which the area of the worn cross-section can be determined and the weight loss derived, for materials of known density, with higher precision.
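
The final arithmetic step described above, converting a worn cross-sectional area measured from an enlarged trace into a weight loss, can be sketched as follows. The numbers, and the assumption that a photocopier enlargement scales area by the square of the magnification, are illustrative rather than measured values.

```python
# Minimal sketch of the final arithmetic step: once the worn cross-sectional
# area has been measured from an enlarged out-of-roundness trace, weight loss
# follows from the bearing length and material density. Values are illustrative.

def weight_loss_mg(worn_area_mm2, bearing_length_mm, density_g_cm3, magnification=1.0):
    """Convert a worn cross-section area (possibly measured on an enlarged
    trace) into a mass loss in milligrams."""
    true_area_mm2 = worn_area_mm2 / magnification ** 2   # undo trace enlargement
    volume_mm3 = true_area_mm2 * bearing_length_mm
    return volume_mm3 * density_g_cm3                    # 1 mm^3 at 1 g/cm^3 = 1 mg

# Example: 0.8 mm^2 measured on a 4x enlarged trace, 20 mm long bronze bearing.
print(weight_loss_mg(0.8, 20.0, 8.8, magnification=4.0))
```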

Relevance: 20.00%

Abstract:

A novel combined near- and mid-infrared (NIR and MIR) spectroscopic method has been researched and developed for the analysis of complex substances such as the Traditional Chinese Medicine (TCM) Illicium verum Hook. F. (IVHF) and its noxious adulterant, Illicium lanceolatum A.C. Smith (ILACS). Three types of spectral matrix were submitted for classification with the use of the linear discriminant analysis (LDA) method. The data were pretreated with either the successive projections algorithm (SPA) or the discrete wavelet transform (DWT) method. The SPA method performed somewhat better, principally because it required fewer spectral features for its pretreatment model. Thus, the NIR and MIR matrices, as well as the combined NIR/MIR one, were pretreated by the SPA method and then analysed by LDA. This approach enabled the prediction and classification of the IVHF, ILACS and mixed samples. The MIR spectral data produced somewhat better classification rates than the NIR data. However, the best results were obtained from the combined NIR/MIR data matrix, with 95–100% correct classifications for calibration, validation and prediction. Principal component analysis (PCA) of the three types of spectral data supported the results obtained with the LDA classification method.
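
The sketch below shows the general shape of such a workflow on synthetic data: variable selection followed by LDA on a combined NIR/MIR matrix. SelectKBest is used only as a stand-in for the SPA/DWT pretreatments described above, and all data and dimensions are invented for illustration.

```python
# Illustrative sketch only: LDA classification of a combined NIR/MIR matrix
# after a simple variable-selection step (a stand-in for SPA/DWT). Synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_samples, n_nir, n_mir = 60, 200, 300
X = np.hstack([rng.normal(size=(n_samples, n_nir)),    # NIR block
               rng.normal(size=(n_samples, n_mir))])   # MIR block
y = rng.integers(0, 3, size=n_samples)                 # e.g. IVHF / ILACS / mixed
X[y == 1, :10] += 1.5                                   # inject a class-dependent signal

model = make_pipeline(SelectKBest(f_classif, k=25), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())
```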

Relevance: 20.00%

Abstract:

A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were: total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, the least squares support vector machine was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated by the chemometrics classification methods K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time as well as in the costs of the special analytical reagents required for the standard individual methods, NIRS offered a very attractive alternative for the simultaneous analysis of mint samples.
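
As a minimal illustration of one of the compared chemometrics models, the sketch below cross-validates a partial least squares regression of a single property against synthetic spectra; it does not reproduce the LS-SVM model found to perform best, and all data are invented.

```python
# Illustrative sketch only (synthetic data): predicting a single property such
# as total phenolic content from NIR spectra with partial least squares.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 80, 500
X = rng.normal(size=(n_samples, n_wavelengths))          # synthetic spectra
y = X[:, 100] * 2.0 + X[:, 250] - X[:, 400] + rng.normal(scale=0.2, size=n_samples)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
print(np.corrcoef(y, y_cv)[0, 1] ** 2)                   # cross-validated R^2-like score
```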

Relevance: 20.00%

Abstract:

In this paper, we aim at predicting protein structural classes for low-homology data sets based on predicted secondary structures. We propose a new and simple kernel method, named SSEAKSVM, to predict protein structural classes. The secondary structures of all protein sequences are obtained using the tool PSIPRED, and a linear kernel based on secondary structure element alignment scores is then constructed for training a support vector machine classifier without parameter adjustment. Our method SSEAKSVM was evaluated on two low-homology datasets, 25PDB and 1189, with sequence homology of 25% and 40%, respectively. The jackknife test is used to evaluate and compare our method with other existing methods. The overall accuracies on these two data sets are 86.3% and 84.5%, respectively, which are higher than those obtained by other existing methods. In particular, our method achieves higher accuracies (88.1% and 88.5%) for differentiating the α + β class and the α/β class compared to other methods. This suggests that our method is valuable for predicting protein structural classes, particularly for low-homology protein sequences. The source code of the method in this paper can be downloaded at http://math.xtu.edu.cn/myphp/math/research/source/SSEAK_source_code.rar
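
The evaluation protocol can be sketched as follows: leave-one-out (jackknife) testing of an SVM with a precomputed kernel, where a synthetic similarity matrix stands in for the secondary structure element alignment scores. Computing real SSEA scores from PSIPRED output is not shown.

```python
# Minimal sketch of the evaluation protocol only: jackknife (leave-one-out)
# testing of an SVM with a precomputed kernel. The kernel values are synthetic
# placeholders for secondary-structure element alignment (SSEA) scores.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 40
y = rng.integers(0, 4, size=n)                    # all-alpha, all-beta, a+b, a/b
# Synthetic similarity matrix with higher within-class similarity, made symmetric
# and diagonally dominant so it behaves like a valid kernel.
S = (y[:, None] == y[None, :]).astype(float) + 0.1 * rng.normal(size=(n, n))
K = (S + S.T) / 2 + n * np.eye(n)

clf = SVC(kernel="precomputed")
print(cross_val_score(clf, K, y, cv=LeaveOneOut()).mean())   # jackknife accuracy
```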

Relevance: 20.00%

Abstract:

Background Up-to-date evidence about levels and trends in disease and injury incidence, prevalence, and years lived with disability (YLDs) is an essential input into global, regional, and national health policies. In the Global Burden of Disease Study 2013 (GBD 2013), we estimated these quantities for acute and chronic diseases and injuries for 188 countries between 1990 and 2013. Methods Estimates were calculated for disease and injury incidence, prevalence, and YLDs using GBD 2010 methods with some important refinements. Results for the incidence of acute disorders and the prevalence of chronic disorders are new additions to the analysis. Key improvements include expansion of the cause and sequelae list, updated systematic reviews, use of detailed injury codes, improvements to the Bayesian meta-regression method (DisMod-MR), and use of severity splits for various causes. An index of data representativeness, showing data availability, was calculated for each cause and impairment during three periods globally and at the country level for 2013. In total, 35 620 distinct sources of data were used and documented to calculate estimates for 301 diseases and injuries and 2337 sequelae. The comorbidity simulation provides estimates of the number of sequelae experienced concurrently by individuals, by country, year, age, and sex. Disability weights were updated with the addition of new population-based survey data from four countries. Findings Disease and injury were highly prevalent; only a small fraction of individuals had no sequelae. Comorbidity rose substantially with age and in absolute terms from 1990 to 2013. Incident acute sequelae were predominantly infectious diseases and short-term injuries, with over 2 billion cases of upper respiratory infections and diarrhoeal disease episodes in 2013; a notable exception was tooth pain due to permanent caries, with more than 200 million incident cases in 2013. Conversely, leading chronic sequelae were largely attributable to non-communicable diseases, with prevalence estimates for asymptomatic permanent caries and tension-type headache of 2.4 billion and 1.6 billion, respectively. The distribution of the number of sequelae in populations varied widely across regions, with an expected relation between age and disease prevalence. YLDs for both sexes increased from 537.6 million in 1990 to 764.8 million in 2013 due to population growth and ageing, whereas the age-standardised rate decreased only slightly, from 114.87 per 1000 people to 110.31 per 1000 people, between 1990 and 2013. Low back pain and major depressive disorder were among the top ten causes of YLDs in every country. YLD rates per person, by major cause group, indicated that the main drivers of increases were musculoskeletal, mental and substance use, neurological, and chronic respiratory disorders; however, HIV/AIDS was a notable driver of increasing YLDs in sub-Saharan Africa. The proportion of disability-adjusted life years due to YLDs also increased globally, from 21.1% in 1990 to 31.2% in 2013. Interpretation Ageing of the world's population is leading to a substantial increase in the number of individuals with sequelae of diseases and injuries. Rates of YLDs are declining much more slowly than mortality rates. The non-fatal dimensions of disease and injury will require more and more attention from health systems. The transition to non-fatal outcomes as the dominant source of burden of disease is occurring rapidly outside sub-Saharan Africa. Our results can guide future health initiatives through examination of epidemiological trends and a better understanding of variation across countries.