814 results for Computer files
Abstract:
Symptoms resembling giant calyx, a graft-transmissible disease, were observed on 1-5% of eggplant (aubergine; Solanum melongena L.) plants in production fields in Sao Paulo state, Brazil. Phytoplasmas were detected in 12 of 12 samples from symptomatic plants that were analysed by a nested PCR assay employing 16S rRNA gene primers R16mF2/R16mR1 followed by R16F2n/R16R2. RFLP analysis of the resulting rRNA gene products (1.2 kb) indicated that all plants contained similar phytoplasmas, each closely resembling strains previously classified as members of RFLP group 16SrIII (X-disease group). Virtual RFLP and phylogenetic analyses of sequences derived from PCR products identified phytoplasmas infecting eggplant crops grown in Piracicaba as a lineage of the subgroup 16SrIII-J, whereas phytoplasmas detected in plants grown in Braganca Paulista were tentatively classified as members of a novel subgroup, 16SrIII-U. These findings confirm eggplant as a new host of group 16SrIII-J phytoplasmas and extend the known diversity of strains belonging to this group in Brazil.
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energies. Although this approximation has proven useful in several circumstances, it is limited in other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that the implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are invariant in time and space. Next, applying the same reasoning used for the lattice models, two inter-consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, where now in the latter the solvent properties are allowed to fluctuate. It is then shown that the chain configurational evolution as well as the globule equilibrium conformation are significantly distinct for implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, whose radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results with respect to the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
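The paper's off-lattice models are not specified in the abstract; as a generic illustration of the Monte Carlo machinery it refers to, here is a minimal 2D lattice-chain sketch with a pairwise contact energy and Metropolis acceptance. The corner-flip move set, contact energy `eps` and temperature `T` are illustrative assumptions, not the authors' model.

```python
import math
import random

def contacts(chain):
    """Count non-bonded nearest-neighbour contacts on the square lattice."""
    occ = set(chain)
    n = 0
    for i, (x, y) in enumerate(chain):
        for dx, dy in ((1, 0), (0, 1)):
            nb = (x + dx, y + dy)
            if nb in occ and abs(i - chain.index(nb)) > 1:
                n += 1  # exclude covalently bonded neighbours
    return n

def metropolis_step(chain, eps, T):
    """Attempt one corner-flip move; accept with Metropolis probability."""
    i = random.randrange(1, len(chain) - 1)
    x0, y0 = chain[i - 1]
    x2, y2 = chain[i + 1]
    trial = (x0 + x2 - chain[i][0], y0 + y2 - chain[i][1])
    # reject if the beads are collinear (trial == current site) or the
    # diagonal site is already occupied (self-avoidance)
    if trial == chain[i] or trial in chain:
        return chain
    new = chain[:i] + [trial] + chain[i + 1:]
    dE = -eps * (contacts(new) - contacts(chain))   # E = -eps * contacts
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return new
    return chain

random.seed(0)
chain = [(i, 0) for i in range(10)]   # straight initial conformation
for _ in range(2000):
    chain = metropolis_step(chain, eps=1.0, T=0.5)
```

At low temperature the attractive contact energy drives the chain toward compact, globule-like conformations; an implicit-solvent treatment would fold all solvent effects into `eps`, whereas an explicit-solvent variant would add solvent degrees of freedom that fluctuate.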
Abstract:
The aim of this work is to present a simple, practical and efficient protocol for drug design, applied here to diabetes, which includes selection of the illness, careful choice of a target as well as a bioactive ligand, and then the use of various computer-aided drug design and medicinal chemistry tools to design novel potential drug candidates. We selected the validated target dipeptidyl peptidase IV (DPP-IV), whose inhibition contributes to reduced glucose levels in type 2 diabetes patients. The most active inhibitor with a reported X-ray complex structure was initially extracted from the BindingDB database. By using molecular modification strategies widely employed in medicinal chemistry, together with current state-of-the-art tools in drug design (including flexible docking, virtual screening, molecular interaction fields, molecular dynamics, ADME and toxicity predictions), we have proposed 4 novel potential DPP-IV inhibitors with drug-like properties for diabetes control, supported and validated by all the computational tools used herein.
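The abstract does not give the specific ADME filters used; as a hedged illustration of the kind of drug-property screen such protocols typically include, here is a minimal Lipinski rule-of-five check. The candidate names and property values below are invented for illustration only.

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Rule-of-five check; at most one violation is commonly tolerated."""
    violations = sum([
        mw > 500,          # molecular weight (Da)
        logp > 5,          # octanol-water partition coefficient
        h_donors > 5,      # hydrogen-bond donors
        h_acceptors > 10,  # hydrogen-bond acceptors
    ])
    return violations <= 1

# hypothetical candidates: (MW, logP, H-bond donors, H-bond acceptors)
candidates = {
    "cand_1": (412.5, 2.1, 2, 6),
    "cand_2": (689.9, 6.3, 4, 12),
}
hits = [name for name, props in candidates.items() if passes_lipinski(*props)]
```

In a real protocol this filter would be one early step, followed by the docking, molecular-dynamics and toxicity-prediction stages the abstract lists.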
Abstract:
Phospholipases A(2) (PLA(2)) are enzymes commonly found in snake venoms from the Viperidae and Elapidae families, of which they are major components. Many plants are used in traditional medicine as active agents against various effects induced by snakebite. This article presents the PLA(2) BthTX-I structure prediction based on homology modeling. In addition, we have performed virtual screening in a large database, yielding a set of potential bioactive inhibitors. A flexible docking program was used to investigate the interactions between the receptor and the new ligands. We have performed molecular interaction field (MIF) calculations with the phospholipase model. The results confirm the important role of Lys49 for binding ligands and suggest three additional residues as well. We have proposed a theoretically nontoxic, drug-like, and potentially novel BthTX-I inhibitor. These calculations have been used to guide the design of novel phospholipase inhibitors as potential lead compounds that may be optimized for future treatment of snakebite victims, as well as of other human diseases in which PLA(2) enzymes are involved.
Abstract:
We have used various computational methodologies, including molecular dynamics, density functional theory, virtual screening, ADMET predictions and molecular interaction field studies, to design and analyze four novel potential inhibitors of farnesyltransferase (FTase). Evaluation of two of the proposals regarding their potential as drugs and as lead compounds indicated them as promising novel FTase inhibitors, with theoretically interesting pharmacotherapeutic profiles when compared to the most active and most cited FTase inhibitors with reported activity data, which are launched drugs or compounds in clinical trials. One of the two appears to be the more promising drug candidate and FTase inhibitor, but both derivative molecules show potentially very good pharmacotherapeutic profiles in comparison with Tipifarnib and Lonafarnib, two reference pharmaceuticals. Two further proposals were selected with virtual screening approaches and investigated by LIS, suggesting novel and alternative scaffolds for the design of future potential FTase inhibitors. Such compounds can be explored as promising molecules with which to initiate a research protocol aimed at discovering novel anticancer drug candidates targeting farnesyltransferase. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
The second edition of An Introduction to Efficiency and Productivity Analysis is designed to be a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models; index numbers; data envelopment analysis (DEA); and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, a number of excellent advanced-level books on performance measurement have been published. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the 2nd edition maintains its uniqueness: (1) it is a well-written introduction to the field; (2) it outlines, discusses and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation; and (3) it provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website (www.uq.edu.au/economics/cepa/crob2005).
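As an independent sketch of the DEA method the book covers (not the book's own DEAP code), an input-oriented CCR efficiency score can be computed as a small linear program, here with `scipy.optimize.linprog`. The envelopment form minimises the radial input contraction factor theta for the unit under evaluation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Decision vector: [theta, lambda_1, ..., lambda_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]
```

For example, with two units producing one unit of output from 2 and 1 units of input respectively, the second unit is efficient (theta = 1) and the first scores theta = 0.5.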
Abstract:
Objective: The objectives were to determine the postural consequences of varying computer monitor height and to describe self-selected monitor heights and postures. Design: The design involved experimental manipulation of computer monitor height, description of self-selected heights, and measurement of posture and gaze angles. Background: Disagreement exists with regard to the appropriate height of computer monitors. It is known that users alter both head orientation and gaze angle in response to changes in monitor height; however, the relative contribution of atlanto-occipital and cervical flexion to the change in head rotation is unknown. No information is available with regard to self-selected monitor heights. Methods: Twelve students performed a tracking task with the monitor placed at three different heights. The subjects then completed eight trials in which monitor height was first self-selected. Sagittal postural and gaze angle data were determined by digitizing markers defining a two-dimensional three-link model of the trunk, cervical spine and head. Results: The 27 degrees change in monitor height imposed was, on average, accommodated by 18 degrees of head inclination and a 9 degrees change in gaze angle relative to the head. The change in head inclination was achieved by a 6 degrees change in trunk inclination, a 4 degrees change in cervical flexion, and a 7 degrees change in atlanto-occipital flexion. The self-selected height varied depending on the initial monitor height and inclination. Conclusions: Self-selected monitor heights were lower than current 'eye-level' recommendations. Lower monitor heights are likely to reduce both visual and musculoskeletal discomfort. Relevance: Musculoskeletal and visual discomfort may be reduced by placing computer monitors lower than currently recommended. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
RWMODEL II simulates the Rescorla-Wagner model of Pavlovian conditioning. It is written in Delphi and runs under Windows 3.1 and Windows 95. The program was designed for novice and expert users and can be employed in teaching, as well as in research. It is user friendly and requires a minimal level of computer literacy but is sufficiently flexible to permit a wide range of simulations. It allows the display of empirical data, against which predictions from the model can be validated.
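As a hedged sketch of the model the program simulates (not RWMODEL II's own Delphi code), the Rescorla-Wagner update ΔV_X = α_X·β·(λ − ΣV) can be written trial by trial as follows; the cue name, salience and learning-rate values are illustrative.

```python
def rescorla_wagner(trials, alpha, beta, lam_reinforced=1.0):
    """Trial-by-trial Rescorla-Wagner simulation.

    trials: list of (present_cues, reinforced) pairs.
    alpha:  dict mapping each cue to its salience.
    beta:   learning-rate parameter of the US.
    Returns the associative strengths after each trial.
    """
    V = {cue: 0.0 for cue in alpha}
    history = []
    for cues, reinforced in trials:
        lam = lam_reinforced if reinforced else 0.0
        total = sum(V[c] for c in cues)          # combined prediction
        for c in cues:
            V[c] += alpha[c] * beta * (lam - total)
        history.append(dict(V))
    return history

# simple acquisition: 50 reinforced trials with a single CS "light"
hist = rescorla_wagner([(["light"], True)] * 50,
                       alpha={"light": 0.3}, beta=0.5)
```

The associative strength rises in the familiar negatively accelerated curve toward the asymptote λ; blocking and overshadowing fall out of the shared ΣV term when several cues are presented in compound.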
Abstract:
The influence of initial perturbation geometry and material properties on final fold geometry has been investigated using finite-difference (FLAC) and finite-element (MARC) numerical models. Previous studies using these two different codes reported very different folding behaviour although the material properties, boundary conditions and initial perturbation geometries were similar. The current results establish that the discrepancy was not due to the different computer codes but due to the different strain rates employed in the two previous studies (i.e. 10(-6) s(-1) in the FLAC models and 10(-14) s(-1) in the MARC models). As a result, different parts of the elasto-viscous rheological field were being investigated. For the same material properties, strain rate and boundary conditions, the present results using the two different codes are consistent. A transition in folding behaviour, from a situation where the geometry of the initial perturbation determines the final fold shape to one where material properties control the final geometry, is produced using both models. This transition takes place with increasing strain rate, decreasing elastic moduli or increasing viscosity (reflecting in each case the increasing influence of the elastic component in the Maxwell elasto-viscous rheology). The transition described here is mechanically feasible but is associated with very high stresses in the competent layer (on the order of GPa), which is improbable under natural conditions. (C) 2000 Elsevier Science Ltd. All rights reserved.
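The strain-rate dependence described above follows from the Maxwell elasto-viscous rheology; as an illustrative sketch (the modulus and viscosity values below are assumptions, not the papers' parameters), the stress in a single Maxwell element loaded at constant strain rate has a closed form:

```python
import math

def maxwell_stress(strain_rate, E, eta, t):
    """Stress in a Maxwell element under constant strain rate.

    Analytic solution of  d(sigma)/dt = E*strain_rate - (E/eta)*sigma,
    sigma(0) = 0:
        sigma(t) = eta * strain_rate * (1 - exp(-t/tau)),  tau = eta/E.
    """
    tau = eta / E
    return eta * strain_rate * (1.0 - math.exp(-t / tau))

# illustrative parameters (assumed): E = 10 GPa, eta = 1e20 Pa s
E, eta = 1e10, 1e20
tau = eta / E          # relaxation time, here 1e10 s
```

For loading times short compared with tau the response is effectively elastic (sigma ≈ E·strain_rate·t), while for long times it saturates at the viscous plateau eta·strain_rate, which is why raising the strain rate, lowering the elastic moduli or raising the viscosity all push the system toward elasticity-dominated folding.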
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or in relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values of different types of data, and then translating the original data into the new structure. The OWEB environment allows flexible access to the data by both users and computer programs.
Abstract:
The finite element method is used to simulate coupled problems, which describe the related physical and chemical processes of ore body formation and mineralization, in geological and geochemical systems. The main purpose of this paper is to illustrate some simulation results for different types of modelling problems in pore-fluid saturated rock masses. The aims of the simulation results presented in this paper are: (1) getting a better understanding of the processes and mechanisms of ore body formation and mineralization in the upper crust of the Earth; (2) demonstrating the usefulness and applicability of the finite element method in dealing with a wide range of coupled problems in geological and geochemical systems; (3) qualitatively establishing a set of showcase problems, against which any numerical method and computer package can be reasonably validated. (C) 2002 Published by Elsevier Science B.V.
Abstract:
Some patients are no longer able to communicate effectively or even interact with the outside world in ways that most of us take for granted. In the most severe cases, tetraplegic or post-stroke patients are literally 'locked in' their bodies, unable to exert any motor control after, for example, a spinal cord injury or a brainstem stroke, requiring alternative methods of communication and control. But we suggest that, in the near future, their brains may offer them a way out. Non-invasive electroencephalogram (EEG)-based brain-computer interfaces (BCIs) can be characterized by the technique used to measure brain activity and by the way that different brain signals are translated into commands that control an effector (e.g., controlling a computer cursor for word processing and accessing the internet). This review focuses on the basic concepts of EEG-based BCI, the main advances in communication, motor control restoration and the down-regulation of cortical activity, and the mirror neuron system (MNS) in the context of BCI. The latter appears to be relevant for clinical applications in the coming years, particularly for severely limited patients. Hypothetically, the MNS could provide a robust way to map neural activity to behavior, representing high-level information about the goals and intentions of these patients. Non-invasive EEG-based BCIs allow brain-derived communication in patients with amyotrophic lateral sclerosis and motor control restoration in patients after spinal cord injury and stroke. Epilepsy and attention deficit hyperactivity disorder patients were able to down-regulate their cortical activity. Given the rapid progression of EEG-based BCI research over the last few years and the swift ascent of computer processing speeds and signal analysis techniques, we suggest that emerging ideas (e.g., the MNS in the context of BCI) related to clinical neuro-rehabilitation of severely limited patients will generate viable clinical applications in the near future.
Abstract:
Objectives: Lung hyperinflation may be assessed by computed tomography (CT). As shown for patients with emphysema, however, CT image reconstruction affects quantification of hyperinflation. We studied the impact of reconstruction parameters on hyperinflation measurements in mechanically ventilated (MV) patients. Design: Observational analysis. Setting: A university hospital-affiliated research unit. Patients: The patients were MV patients with injured (n = 5) or normal lungs (n = 6), and spontaneously breathing patients (n = 5). Interventions: None. Measurements and results: Eight image series involving 3, 5, 7, and 10 mm slices and standard and sharp filters were reconstructed from identical CT raw data. Hyperinflated (V-hyper), normally aerated (V-normal), poorly aerated (V-poor), and nonaerated (V-non) volumes were calculated by densitometry as percentages of total lung volume (V-total). V-hyper obtained with the sharp filter systematically exceeded that with the standard filter, showing a median (interquartile range) increment of 138 (62-272) ml, corresponding to approximately 4% of V-total. In contrast, sharp filtering minimally affected the other subvolumes (V-normal, V-poor, V-non, and V-total). Decreasing slice thickness also increased V-hyper significantly. When changing from 10 to 3 mm thickness, V-hyper increased by a median value of 107 (49-252) ml, in parallel with a small and inconsistent increment in V-non of 12 (7-16) ml. Conclusions: Reconstruction parameters significantly affect quantitative CT assessment of V-hyper in MV patients. Our observations suggest that sharp filters are inappropriate for this purpose. Thin slices combined with standard filters and more appropriate thresholds (e.g., -950 HU in normal lungs) might improve the detection of V-hyper. Different studies on V-hyper can only be compared if identical reconstruction parameters were used.
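As an illustration of the densitometry described above, the aeration compartments can be computed by binning voxel Hounsfield-unit values. The bin edges below are the conventional aeration thresholds, assumed here rather than taken from the paper (which notes that -950 HU may be a more appropriate hyperinflation cut-off in normal lungs).

```python
import numpy as np

# conventional HU aeration thresholds (assumed, not from the paper)
BINS = {
    "V_hyper":  (-1000, -900),   # hyperinflated
    "V_normal": (-900, -500),    # normally aerated
    "V_poor":   (-500, -100),    # poorly aerated
    "V_non":    (-100, 100),     # nonaerated
}

def aeration_fractions(hu):
    """Fraction of lung voxels in each aeration compartment."""
    hu = np.asarray(hu)
    total = hu.size
    return {name: float(((hu >= lo) & (hu < hi)).sum()) / total
            for name, (lo, hi) in BINS.items()}
```

Because a sharp reconstruction filter adds high-frequency noise, it pushes voxels near the -900 HU edge into the V-hyper bin, which is consistent with the systematic V-hyper increment the study reports.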
Abstract:
Little consensus exists in the literature regarding methods for determining the onset of electromyographic (EMG) activity. The aim of this study was to compare the relative accuracy of a range of computer-based techniques with respect to EMG onset determined visually by an experienced examiner. Twenty-seven methods were compared, which varied in terms of EMG processing (low-pass filtering at 10, 50 and 500 Hz), threshold value (1, 2 and 3 SD beyond the mean of baseline activity) and the number of samples for which the mean must exceed the defined threshold (20, 50 and 100 ms). Three hundred randomly selected trials of a postural task were evaluated using each technique. The visual determination of EMG onset was found to be highly repeatable between days. Linear regression equations were calculated for the values selected by each computer method, which indicated that the onset values selected by the majority of the parameter combinations deviated significantly from the visually derived onset values. Several methods accurately selected the time of onset of EMG activity and are recommended for future use. Copyright (C) 1996 Elsevier Science Ireland Ltd.
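As a sketch of the threshold-based family of methods compared above, here is a minimal onset detector: the threshold is the baseline mean plus k standard deviations, and it must be exceeded for a sustained hold window. The parameter defaults are illustrative, and the study's low-pass filtering stage is omitted for brevity.

```python
import numpy as np

def emg_onset(signal, fs, baseline_ms=100, k=2, hold_ms=25):
    """Return the first sample index where the rectified signal exceeds
    mean + k*SD of the baseline for at least hold_ms consecutively,
    or None if no onset is found."""
    x = np.abs(np.asarray(signal, dtype=float))   # full-wave rectification
    nb = int(fs * baseline_ms / 1000)             # baseline window length
    thr = x[:nb].mean() + k * x[:nb].std()
    hold = int(fs * hold_ms / 1000)               # required supra-threshold run
    run = 0
    for i, above in enumerate(x > thr):
        run = run + 1 if above else 0
        if run >= hold:
            return i - hold + 1
    return None

fs = 1000                               # sampling rate (Hz)
baseline = [0.09, 0.11] * 150           # 300 ms of quiet activity
signal = baseline + [1.0] * 100         # burst starting at sample 300
onset = emg_onset(signal, fs)
```

Varying `k` and `hold_ms` reproduces the kind of parameter grid the study evaluates: a low threshold with a short hold triggers early on noise, while a high threshold with a long hold biases the detected onset late.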