Abstract:
A wide range of peptides produced from milk proteins has been shown to elicit a physiological response in model systems. These peptides may be released from intact proteins in the gastrointestinal tract by proteolytic digestion, but they are also present in fermented products such as cheese and yogurt as a result of the action of inherent proteases, such as plasmin, and/or bacterial proteases released by the starter culture. This study investigated the presence of peptides previously reported to have bioactive properties in commercially available yogurts and cheeses.
Abstract:
Directed evolution of cytochrome P450 enzymes represents an attractive means of generating novel catalysts for specialized applications. Xenobiotic-metabolizing P450s are particularly well suited to this approach owing to their inherently wide substrate specificity. In the present study, a novel method for DNA shuffling was developed using an initial restriction enzyme digestion step, followed by elimination of long parental sequences by size-selective filtration. P450 2C forms were subjected to a single round of shuffling and then coexpressed with reductase in E. coli. A sample (54 clones) of the resultant library was assessed for sequence diversity, hemo- and apoprotein expression, and activity towards the substrate indole. All mutants showed RFLP patterns different from those of all parents, suggesting that the library was free from contamination by parental forms. Hemoprotein expression was detectable in 45/54 (83%) of the mutants sampled. Indigo production was less than or comparable to the activities of one or more of the parental P450s, but three mutants showed indirubin production in excess of that seen with any parental form, representing a gain of function. In conclusion, a method is presented for the effective shuffling of P450 sequences to generate diverse libraries of mutant P450s containing a high proportion of correctly folded hemoprotein and minimal contamination with parental forms.
Abstract:
Background An increased risk of choking associated with antipsychotic medication has repeatedly been postulated. Aims To examine this association in a large number of cases of choking deaths. Method Cases of individuals who had died of choking were linked with a case register recording contacts with public mental health services. The actual and expected rates of psychiatric disorder and the presence of psychotropic medication in post-mortem blood samples were compared. Results The 70 people who had choked to death were over 20 times more likely to have been treated previously for schizophrenia. They were also more likely to have had a prior organic psychiatric syndrome. The risk for those receiving thioridazine or lithium was, respectively, 92 times and 30 times greater than expected. Other antipsychotic and psychotropic drugs were not over-represented. Conclusions The increased risk of death in people with schizophrenia may reflect a combination of inherent predispositions and the use of specific antipsychotic drugs. The increased risk of choking in those with organic psychiatric syndromes is consistent with the consequences of compromised neurological competence. Declaration of interest None.
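The risk figures here are observed-to-expected comparisons obtained through record linkage. A minimal sketch of that arithmetic, with placeholder counts rather than the study's data:

```python
# Observed/expected (O/E) ratio, the standard case-register comparison.
# Both numbers below are hypothetical placeholders, not study figures.
observed = 12     # choking deaths previously treated for schizophrenia
expected = 0.55   # count expected from population service-contact rates

print(f"O/E ratio: {observed / expected:.0f} times the expected rate")
```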
Abstract:
Attempts to understand why people with adequate communication skills do not always perform well have focused on personality or personal-style variables. This research focuses instead on the situational context and the difficulty inherent in particular encounters. This paper reports two studies concerned with what makes face-to-face communication in work settings difficult or demanding. The first study (Study 1) identifies the types of face-to-face communication encounters that people find difficult to manage in the workplace. Quantitative and qualitative data were gathered to define 41 difficult communication situations, including those difficult for superiors, colleagues and subordinates, as well as generically difficult ones. In Study 2, quantitative data were analysed using multidimensional scaling techniques to reveal the underlying structure of the situations. Four dimensions were identified: protection/approach, vulnerability, self-management, and involvement/engagement. The results provide insight into the ways in which people construe these types of situations and also provide a taxonomy of difficult communication situations in the workplace. Theoretical and practical implications of the findings are discussed.
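For readers unfamiliar with the Study 2 method, the sketch below recovers a four-dimensional configuration from a dissimilarity matrix via multidimensional scaling with scikit-learn; the matrix is a random placeholder, not the study's ratings.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix over 41 difficult communication
# situations (symmetric, zero diagonal); not the study's data.
rng = np.random.default_rng(0)
raw = rng.random((41, 41))
dissim = (raw + raw.T) / 2.0
np.fill_diagonal(dissim, 0.0)

# Four components, matching the number of dimensions the study reports.
mds = MDS(n_components=4, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)  # (41, 4): one row per situation
```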
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after the index stroke may be described by a subpopulation of patients in the acute phase and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
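As an illustrative simplification of the estimation idea (ignoring censoring and the random hospital effects), an EM loop for a two-component Weibull mixture might look like this; it is not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def _fit_component(t, w, start):
    """Weighted Weibull MLE; shape and scale optimized on the log scale."""
    nll = lambda p: -np.sum(w * weibull_min.logpdf(t, np.exp(p[0]),
                                                   scale=np.exp(p[1])))
    return minimize(nll, start, method="Nelder-Mead").x

def em_weibull_mixture(t, n_iter=100):
    """EM for a two-component Weibull mixture on uncensored survival
    times; random hospital effects are omitted in this sketch."""
    pi = 0.5
    p1 = np.log([1.0, 0.5 * np.median(t)])   # 'acute' start: short survival
    p2 = np.log([1.0, 2.0 * np.median(t)])   # 'chronic' start: long survival
    for _ in range(n_iter):
        f1 = weibull_min.pdf(t, np.exp(p1[0]), scale=np.exp(p1[1]))
        f2 = weibull_min.pdf(t, np.exp(p2[0]), scale=np.exp(p2[1]))
        r = pi * f1 / (pi * f1 + (1.0 - pi) * f2)   # E-step: responsibilities
        pi = r.mean()                                # M-step: mixing weight
        p1 = _fit_component(t, r, p1)                # M-step: component MLEs
        p2 = _fit_component(t, 1.0 - r, p2)
    return pi, np.exp(p1), np.exp(p2)

# Simulated times: a short-survival and a long-survival subpopulation.
rng = np.random.default_rng(1)
times = np.concatenate([2.0 * rng.weibull(1.2, 150),
                        12.0 * rng.weibull(1.2, 150)])
print(em_weibull_mixture(times))
```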
Abstract:
Recent advances in the control of molecular engineering architectures have allowed unprecedented molecular recognition ability in biosensing, with a promising impact on clinical diagnosis and environmental control. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projection and normalization of the data, we demonstrate that an excellent distinction can be made between real samples testing positive for Chagas disease and for Leishmaniasis, which could not be achieved with conventional statistical methods. This high performance probably arose from the possibility of treating the data over the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, with which blood serum samples containing 10(-7) mg/mL of the antibody could be distinguished. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
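A generic sketch of the favored pipeline, standardization followed by a Sammon's mapping projection to 2D, is shown below; it is not the PEx implementation, and the input spectra are random placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def sammon_project(X, seed=0):
    """Project standardized data to 2D by minimizing Sammon's stress.
    Generic illustration; not the PEx implementation."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardization step
    D = pdist(X)                               # pairwise input distances
    D = np.where(D == 0, 1e-12, D)             # guard against identical rows
    n = X.shape[0]

    def stress(y):
        d = pdist(y.reshape(n, 2))
        return np.sum((D - d) ** 2 / D) / D.sum()

    rng = np.random.default_rng(seed)
    res = minimize(stress, rng.standard_normal(2 * n), method="L-BFGS-B")
    return res.x.reshape(n, 2)

# Hypothetical impedance spectra: rows = samples, columns = features
# across the measured frequency range (placeholder values only).
spectra = np.random.default_rng(2).random((30, 64))
layout = sammon_project(spectra)
print(layout.shape)  # (30, 2): one point per sample for visual inspection
```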
Abstract:
Objective To assess how well B-type natriuretic peptide (BNP) predicts prognosis in patients with heart failure. Design Systematic review of studies assessing BNP for prognosis in patients with heart failure or asymptomatic patients. Data sources Electronic searches of Medline and Embase from January 1994 to March 2004 and reference lists of included studies. Study selection and data extraction We included all studies that estimated the relation between BNP measurement and the risk of death, cardiac death, sudden death, or cardiovascular event in patients with heart failure or asymptomatic patients, including initial values and changes in values in response to treatment. Multivariable models that included both BNP and left ventricular ejection fraction as predictors were used to compare the prognostic value of each variable. Two reviewers independently selected studies and extracted data. Data synthesis 19 studies used BNP to estimate the relative risk of death or cardiovascular events in heart failure patients and five studies in asymptomatic patients. In heart failure patients, each 100 pg/ml increase was associated with a 35% increase in the relative risk of death. BNP was used in 35 multivariable models of prognosis. In nine of the models, it was the only variable to reach significance; that is, other variables contained no prognostic information beyond that of BNP. Even allowing for the scale of the variables, it seems to be a strong indicator of risk. Conclusion Although systematic reviews of prognostic studies have inherent difficulties, including the possibility of publication bias, the results of the studies in this review show that BNP is a strong prognostic indicator both for asymptomatic patients and for patients with heart failure at all stages of disease.
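Given the reported 35% increase per 100 pg/ml, the risk implied by a larger BNP difference follows by compounding, assuming the multiplicative (log-linear) scaling such analyses typically use; a short worked example:

```python
# Relative risk implied by "a 35% increase per 100 pg/ml", assuming
# multiplicative scaling across 100 pg/ml increments.
def relative_risk(bnp_increase_pg_ml, rr_per_100=1.35):
    return rr_per_100 ** (bnp_increase_pg_ml / 100.0)

print(relative_risk(100))  # 1.35 -> 35% higher risk of death
print(relative_risk(300))  # 1.35**3 = 2.46 -> roughly 2.5-fold risk
```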
Abstract:
The title compound, C(8)H(14)N(2)O(5)S·2H(2)O, 2-amino-3-(N-oxypyridin-4-ylsulfanyl)propionic acid dihydrate, is obtained by the reaction of cysteine and 4-nitropyridine N-oxide in dimethylformamide; the reaction removes the NO(2) group from the pyridine ring and releases nitrous acid into the solution. The molecule exists as a zwitterion. Hydrogen-bond interactions involving the title molecule and water molecules lead to the formation of R(5)(5)(23) edge-fused rings parallel to (010). The water molecules are connected independently, forming infinite square-wave chains (wires) along the b axis. The chirality of the cysteine used in the synthesis is retained in the title molecule. A density functional theory (DFT) structure optimized at the B3LYP/6-311G(3df,2p) level allows comparison of calculated and experimental IR spectra.
Abstract:
The synthesis, spectroscopy, and electrochemistry of the acyclic tertiary tetraamine copper(II) complex [CuL(1)](ClO4)(2) (L(1) = N,N-bis(2'-(dimethylamino)ethyl)-N,N'-dimethylpropane-1,3-diamine) are reported. The X-ray crystal structure of [CuL(1)(OClO3)(2)] reveals a tetragonally elongated CuN4O2 coordination sphere, exhibiting relatively long Cu-N bond lengths for a Cu-II tetraamine and a small tetrahedral distortion of the CuN4 plane. The [CuL(1)](2+) ion displays a single, reversible, one-electron reduction at -0.06 V vs Ag/AgCl. The results presented herein illustrate the inherent difficulties associated with the separation and characterization of Cu-II complexes of tertiary tetraamines, and some previously published incorrect assertions and unexplained observations by other workers are discussed.
Abstract:
Computer modelling has shown that the electrical characteristics of individual pixels may be extracted from within multiple-frequency electrical impedance tomography (MFEIT) images formed using a reference data set obtained from a purely resistive, homogeneous medium. In some applications it is desirable to extract the electrical characteristics of individual pixels from images where a purely resistive, homogeneous reference data set is not available. One such application of MFEIT is the acquisition of in vivo images using reference data sets obtained from a non-homogeneous medium with a reactive component. However, the reactive component of the reference data set introduces difficulties in extracting the true electrical characteristics from the image pixels. This study was a preliminary investigation of a technique to extract electrical parameters from multifrequency images when the reference data set has a reactive component. Unlike the situation in which a homogeneous, resistive reference data set is available, it is not possible to obtain the impedance and phase information directly from the pixel values of the MFEIT image data set, as the phase of the reactive reference is not known. The method reported here for extracting the electrical characteristics (the Cole-Cole plot) initially assumes that this phase angle is zero. With this assumption, an impedance spectrum can be extracted directly from the image set. To obtain the true Cole-Cole plot, a correction must be applied to account for the inherent rotation of the extracted impedance spectrum about the origin, which results from the assumption. This work shows that the angle of rotation associated with the reactive component of the reference data set may be determined using a priori knowledge of the distribution of frequencies on the Cole-Cole plot. Using this angle of rotation, the true Cole-Cole plot can be obtained from the impedance spectrum extracted from the MFEIT image data set. The method was investigated using simulated data, both with and without noise, and also using image data obtained in vitro. The in vitro studies involved 32 logarithmically spaced frequencies from 4 kHz up to 1 MHz and demonstrated that differences between the true characteristics and those of the extracted impedance spectrum were reduced significantly after application of the correction technique. The differences between the extracted parameters and the true values before correction ranged from 16% to 70%; after application of the correction technique they were reduced to less than 5%. The parameters obtained from the Cole-Cole plot may be useful for characterizing the nature and health of the imaged tissues.
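The correction amounts to rotating the complex impedance spectrum about the origin of the Cole-Cole plane. A minimal sketch, assuming the rotation angle has already been estimated from a priori knowledge of the frequency distribution; the spectrum and angle below are hypothetical:

```python
import numpy as np

def rotate_spectrum(z, theta_rad):
    """Rotate a complex impedance spectrum about the origin of the
    Cole-Cole plane by -theta, undoing the rotation introduced by
    assuming a zero reference phase. Illustrative sketch only."""
    return z * np.exp(-1j * theta_rad)

# Hypothetical extracted spectrum (real = resistance, imag = reactance)
z_extracted = np.array([100 - 30j, 90 - 45j, 70 - 50j, 55 - 35j])
theta = np.deg2rad(12.0)  # assumed angle, from a priori frequency knowledge
z_true = rotate_spectrum(z_extracted, theta)
print(z_true.round(1))
```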
Abstract:
Protein purification that combines the use of molecular mass exclusion membranes with electrophoresis is particularly powerful, as it exploits properties inherent to both techniques. The use of membranes allows efficient processing and is easily scaled up, while electrophoresis permits high-resolution separation under mild conditions. The Gradiflow apparatus combines these two technologies, using polyacrylamide membranes to influence electrokinetic separations. The reflux electrophoresis process consists of a series of cycles incorporating a forward phase and a reverse phase. The forward phase involves collection of a target protein that passes through a separation membrane ahead of the trailing proteins in the same solution. The forward phase is repeated after the membrane is cleared in the reverse phase by reversing the current. We have devised a strategy to establish optimal reflux separation parameters, in which membranes are chosen for a particular operating range and protein transfer is monitored at different pH values; forward and reverse phase times are also determined during this process. Two examples of the reflux method are described. In the first, we describe a purification strategy for proteins from a complex mixture that contains proteins of higher electrophoretic mobility than the target protein. This is a two-step procedure: first, proteins of higher mobility than the target are removed from the solution by a series of reflux cycles, leaving the target protein as the leading fraction; in the second step, the target protein, now the leading fraction of the remaining proteins, is collected. In the second example, we report the development of a reflux strategy that allowed rapid one-step preparative purification of a recombinant protein expressed in Dictyostelium discoideum. These strategies demonstrate that the Gradiflow is amenable to a wide range of applications, as the protein of interest is not necessarily required to be the leading fraction in solution. (C) 1997 Elsevier Science B.V.
Abstract:
Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of (99m)Tc, (131)I, and (111)In (74, 185, 370, and 740 kBq mL(-1)) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and with the same varying background levels. Planar quantification was performed using the geometric mean approach with attenuation correction (AC), with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was observed (generally within +/- 10% of known values) for the largest sphere (11.5 mL) with both planar and SPECT methods for (99m)Tc and (131)I, but accuracy was poorest, and results deviated from known values, for smaller objects, most notably with (111)In. SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all the experiments for all radionuclides. Background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with (131)I; if scatter was not accounted for, activity tended to be overestimated using planar quantification methods. For the torso phantom experiments, the results show a clear underestimation of activity compared with the previous experiments with spherical sources for all radionuclides. Despite some variations that were observed as the level of background increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies of better than 10% for large objects and modest target-to-background concentrations; however, when smaller objects are used, in the presence of higher background, and for nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010
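For the planar case, the conjugate-view geometric mean with attenuation correction follows the standard relation A = sqrt(I_A * I_P) * e^(mu*T/2) / C. A sketch with hypothetical numbers (the counts, thickness, and calibration factor are assumptions, not values from this study):

```python
import numpy as np

def geometric_mean_activity(counts_ant, counts_post, mu, thickness_cm, cal):
    """Conjugate-view activity estimate with attenuation correction:
    A = sqrt(I_A * I_P) * exp(mu * T / 2) / C.
    Scatter correction is omitted; all inputs below are hypothetical."""
    return np.sqrt(counts_ant * counts_post) * np.exp(mu * thickness_cm / 2) / cal

# Hypothetical example for (99m)Tc (mu ~ 0.15 /cm in water at 140 keV)
activity_kbq = geometric_mean_activity(
    counts_ant=5200.0,    # anterior-view count rate (cps), placeholder
    counts_post=4100.0,   # posterior-view count rate (cps), placeholder
    mu=0.15,              # linear attenuation coefficient (1/cm)
    thickness_cm=20.0,    # phantom thickness along the view axis
    cal=12.0,             # system sensitivity (cps per kBq), assumed
)
print(f"{activity_kbq:.0f} kBq")
```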
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the aim of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal component estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
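A simplified sketch of the idea, extracting multiple eigen-time series per ROI by PCA and testing block-wise Granger causality within a joint VAR model, is given below using standard statsmodels tools; it omits the authors' partial canonical correlation framework and bootstrap testing, and the data are random placeholders.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def eigen_series(roi, k=2):
    """First k eigen-time series (principal components) of one ROI,
    with roi shaped (time points, voxels)."""
    roi = roi - roi.mean(axis=0)
    u, s, _ = np.linalg.svd(roi, full_matrices=False)
    return u[:, :k] * s[:k]

# Hypothetical BOLD data for two ROIs (time x voxels); not real fMRI.
rng = np.random.default_rng(3)
roi_a = rng.standard_normal((200, 50))
roi_b = rng.standard_normal((200, 40))

# Stack k = 2 eigen-time series per ROI and fit one joint VAR model.
data = np.hstack([eigen_series(roi_a), eigen_series(roi_b)])
res = VAR(data).fit(maxlags=2)

# Block-wise test: do ROI A's components Granger-cause ROI B's block?
test = res.test_causality(caused=[2, 3], causing=[0, 1], kind="f")
print(test.summary())
```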
Abstract:
Recent studies have demonstrated that spatial patterns of fMRI BOLD activity distribution over the brain may be used to classify different groups or mental states. These studies are based on the application of advanced pattern recognition approaches and multivariate statistical classifiers. Most published articles in this field focus on improving accuracy rates, and many approaches have been proposed to accomplish this task. Nevertheless, a point inherent to most machine learning methods (and still relatively unexplored in neuroimaging) is how the discriminative information can be used to characterize groups and their differences. In this work, we introduce Maximum-uncertainty Linear Discriminant Analysis (MLDA) and show how it can be applied to infer group patterns by discriminant hyperplane navigation. In addition, we show that it naturally defines a behavioral score, i.e., an index quantifying the distance between the state of a subject and those of predefined groups. We validate and illustrate this approach using data from a motor block-design fMRI experiment with 35 subjects. (C) 2008 Elsevier Inc. All rights reserved.
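A minimal sketch of the usual maximum-uncertainty recipe, raising the within-group covariance eigenvalues that fall below their mean up to that mean before inverting, together with the resulting score along the discriminant axis; the data are random placeholders and this is not the authors' code.

```python
import numpy as np

def mlda_direction(X0, X1):
    """Maximum-uncertainty LDA: regularize the pooled within-group
    covariance by replacing eigenvalues below their mean with the mean,
    then take w = Sw*^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    Sw = Xc.T @ Xc / (len(Xc) - 2)          # pooled covariance (rank-deficient
    vals, vecs = np.linalg.eigh(Sw)         # when features outnumber subjects)
    vals = np.maximum(vals, vals.mean())    # maximum-uncertainty step
    Sw_star_inv = vecs @ np.diag(1.0 / vals) @ vecs.T
    return Sw_star_inv @ (mu1 - mu0)

def behavioral_score(x, w, mu0, mu1):
    """Signed distance of one subject along the discriminant axis,
    zeroed at the midpoint between the group means."""
    mid = (mu0 + mu1) / 2
    return (x - mid) @ w / np.linalg.norm(w)

# Hypothetical data: 20 subjects per group, 100 features (voxels) each.
rng = np.random.default_rng(4)
X0 = rng.standard_normal((20, 100))
X1 = rng.standard_normal((20, 100)) + 0.5
w = mlda_direction(X0, X1)
print(behavioral_score(X1[0], w, X0.mean(axis=0), X1.mean(axis=0)))
```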
Abstract:
Although many mathematical models exist for predicting the dynamics of transposable elements (TEs), there is a lack of empirical data available to validate these models and their inherent assumptions. Genomes can provide a snapshot of several TE families in a single organism, and their demographics could be inferred by coalescent analysis, allowing theories of TE amplification dynamics to be tested. Using the available genomes of the mosquitoes Aedes aegypti and Anopheles gambiae, we indicate that such an approach is feasible. Our analysis follows four steps: (1) mining the two mosquito genomes currently available in search of TE families; (2) fitting, to selected families found in (1), a phylogenetic tree under the general time-reversible (GTR) nucleotide substitution model with an uncorrelated lognormal (UCLN) relaxed clock and a nonparametric demographic model; (3) fitting a nonparametric coalescent model to the tree generated in (2); and (4) fitting parametric models motivated by ecological theories to the curve generated in (3).
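As a sketch of the demographic signal used in step (3), the lineages-through-time curve of a TE genealogy can be read directly off the internal-node depths of the tree; the Biopython example below uses a made-up tree as a stand-in for full nonparametric coalescent inference.

```python
from io import StringIO
from Bio import Phylo

# Hypothetical ultrametric genealogy for five copies of one TE family;
# branch lengths stand in for clock-scaled time, not real estimates.
newick = "((a:1,b:1):2,((c:0.5,d:0.5):1.5,e:2):1);"
tree = Phylo.read(StringIO(newick), "newick")

# Internal-node depths from the root are the coalescence (branching) times.
depths = tree.depths()
coal_times = sorted(d for clade, d in depths.items() if not clade.is_terminal())

# Lineages-through-time: after the k-th branching event from the root,
# k + 1 lineages exist; this curve is the raw input for demographic fits.
for k, t in enumerate(coal_times, start=1):
    print(f"time {t:.2f} from root: {k + 1} lineages")
```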