926 results for complex polymerization method
Abstract:
The PM3 semiempirical quantum-mechanical method was found to describe intermolecular hydrogen bonding in small polar molecules systematically. PM3 shows charge transfer from the donor to the acceptor molecule on the order of 0.02-0.06 units of charge when strong hydrogen bonds are formed. The PM3 method is predictive; calculated hydrogen bond energies with an absolute magnitude greater than 2 kcal mol⁻¹ suggest that the global minimum is a hydrogen-bonded complex, whereas absolute energies less than 2 kcal mol⁻¹ imply that other van der Waals complexes are more stable. The geometries of the PM3 hydrogen-bonded complexes agree with high-resolution spectroscopic observations, gas electron diffraction data, and high-level ab initio calculations. The main limitations of the PM3 method are the underestimation of hydrogen bond lengths by 0.1-0.2 Å for some systems and the underestimation of reliable experimental hydrogen bond energies by approximately 1-2 kcal mol⁻¹. The PM3 method predicts that ammonia is a good hydrogen bond acceptor and a poor hydrogen donor when interacting with neutral molecules. Electronegativity differences between F, N, and O predict that donor strength follows the order F > O > N and acceptor strength follows the order N > O > F. In the calculations presented in this article, the PM3 method mirrors these electronegativity differences, predicting the F-H···N bond to be the strongest and the N-H···F bond the weakest. It appears that the PM3 Hamiltonian is able to model hydrogen bonding because of the reduction of two-center repulsive forces brought about by the parameterization of the Gaussian core-core interactions. The ability of the PM3 method to model intermolecular hydrogen bonding means that reasonably accurate quantum-mechanical calculations can be applied to small biological systems.
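The 2 kcal mol⁻¹ decision rule quoted above is easy to state in code. The following is a minimal sketch of the supermolecule interaction energy and the classification rule; the function names and sample energies are illustrative and not part of the PM3 implementation.

```python
def interaction_energy(e_complex, e_donor, e_acceptor):
    """Supermolecule interaction energy (kcal/mol); negative means attractive."""
    return e_complex - (e_donor + e_acceptor)

def classify_complex(e_hb, threshold=2.0):
    """Apply the 2 kcal/mol rule from the abstract: an interaction energy with
    absolute magnitude above the threshold suggests a hydrogen-bonded global
    minimum; below it, other van der Waals complexes may be more stable."""
    return "hydrogen-bonded" if abs(e_hb) > threshold else "other van der Waals"
```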
Abstract:
Qualitative assessment of spontaneous motor activity in early infancy is widely used in clinical practice. It enables the description of maturational changes in motor behavior in both healthy infants and infants who are at risk for later neurological impairment. These assessments are, however, time-consuming and depend upon professional experience. A simple physiological method that describes the complex behavior of spontaneous movements (SMs) in infants would therefore be helpful. In this methodological study, we aimed to determine whether time series of motor acceleration measurements at 40-44 weeks and 50-55 weeks gestational age in healthy infants exhibit fractal-like properties and whether this self-affinity of the acceleration signal is sensitive to maturation. A healthy motor state was ensured by General Movement assessment. We assessed statistical persistence in the acceleration time series by calculating the scaling exponent α via detrended fluctuation analysis of the time series. In hand trajectories of SMs in infants we found a mean α value of 1.198 (95% CI 1.167-1.230) at 40-44 weeks. Alpha changed significantly (p = 0.001) at 50-55 weeks to a mean of 1.102 (95% CI 1.055-1.149). Complementary multilevel regression analysis confirmed a decreasing trend of α with increasing age. Statistical persistence of fluctuations in hand trajectories of SMs is thus sensitive to neurological maturation and can be characterized by a simple parameter α in an automated and observer-independent fashion. Future studies including children at risk for neurological impairment should evaluate whether this method could be used as an early clinical screening tool for later neurological compromise.
Abstract:
Signal proteins are able to adapt their response to changes in the environment, governing in this way a broad variety of important cellular processes in living systems. While conventional molecular-dynamics (MD) techniques can be used to explore the early signaling pathway of these protein systems at atomistic resolution, the high computational costs limit their usefulness for elucidating the multiscale transduction dynamics of most signaling processes, which occur on experimental timescales. To cope with this problem, we present in this paper a novel multiscale modeling method based on a combination of the kinetic Monte Carlo and MD techniques, and demonstrate its suitability for investigating the signaling behavior of the photoswitch light-oxygen-voltage-2-Jα domain from Avena sativa (AsLOV2-Jα) and an AsLOV2-Jα-regulated photoactivatable Rac1 GTPase (PA-Rac1), recently employed to control the motility of cancer cells through light stimulus. More specifically, we show that their signaling pathways begin with a rearrangement and subsequent H-bond formation of amino-acid residues near the flavin mononucleotide chromophore, causing a coupling between β-strands and subsequent detachment of a peripheral α-helix from the AsLOV2 domain. In the case of the PA-Rac1 system we find that this latter process induces the release of the AsLOV2 inhibitor from the switch II activation site of the GTPase, enabling signal activation through effector-protein binding. These applications demonstrate that our approach reliably reproduces the signaling pathways of complex signal proteins, ranging from nanoseconds up to seconds, at affordable computational costs.
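The kinetic Monte Carlo half of such a multiscale scheme is, at its core, the standard Gillespie-style propagation of a rate model: draw an exponential waiting time from the total escape rate of the current state, then pick a transition with probability proportional to its rate. A minimal sketch over a hypothetical two-state photoswitch rate graph (state names and rates are invented for illustration; in the paper's method this step is coupled to MD-derived kinetics):

```python
import math
import random

def kmc_trajectory(rates, start, t_end, seed=1):
    """Minimal Gillespie-type kinetic Monte Carlo walk on a discrete rate graph.

    rates: dict mapping state -> list of (target_state, rate) transitions.
    Returns the list of (time, state) events visited up to t_end.
    """
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        transitions = rates.get(state, [])
        total_rate = sum(k for _, k in transitions)
        if total_rate == 0.0:            # absorbing state: nothing left to do
            break
        # exponential waiting time drawn from the total escape rate
        t += -math.log(1.0 - rng.random()) / total_rate
        if t > t_end:
            break
        # choose the transition with probability proportional to its rate
        r = rng.random() * total_rate
        acc = 0.0
        for target, k in transitions:
            acc += k
            if r <= acc:
                state = target
                break
        path.append((t, state))
    return path
```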
Abstract:
The Pacaya volcanic complex is part of the Central American volcanic arc, which is associated with the subduction of the Cocos tectonic plate under the Caribbean plate. Located 30 km south of Guatemala City, Pacaya is situated on the southern rim of the Amatitlan Caldera. It is the largest post-caldera volcano and has been one of Central America's most active volcanoes over the last 500 years. Between 400 and 2000 years B.P., the Pacaya volcano experienced a major collapse, which resulted in the formation of a horseshoe-shaped scarp that is still visible. In recent years, several smaller collapses affecting the northwestern flank have been associated with the activity of the volcano (in 1961 and 2010), likely induced by local and regional stress changes. The similar orientation of dry and volcanic fissures and the distribution of new vents suggest a reactivation of the pre-existing stress configuration responsible for the old collapse. This paper presents the first stability analysis of the Pacaya volcanic flank. The inputs for the geological and geotechnical models were defined based on stratigraphical, lithological, and structural data, and on material properties obtained from field surveys and laboratory tests. According to their mechanical characteristics, three lithotechnical units were defined: Lava, Lava-Breccia, and Breccia-Lava. The Hoek-Brown failure criterion was applied to each lithotechnical unit, and the rock mass friction angle, apparent cohesion, and strength and deformation characteristics were computed over a specified stress range. The stability of the volcano was then evaluated by two-dimensional analyses performed with the Limit Equilibrium Method (LEM, ROCSCIENCE) and the Finite Element Method (FEM, PHASE 2 7.0). The stability analysis mainly focused on the modern Pacaya volcano built inside the collapse amphitheatre of "Old Pacaya".
The volcanic instability was assessed based on the variability of the safety factor using deterministic, sensitivity, and probabilistic analyses, considering gravitational instability and the effects of external forces such as magma pressure and seismicity as potential triggering mechanisms of lateral collapse. The preliminary results provide two insights: first, the least stable sector is the south-western flank of the volcano; second, even the lowest safety factor value suggests that the edifice is stable under gravity alone, so that external triggering mechanisms represent the likely destabilizing factors.
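The Hoek-Brown criterion mentioned above has a commonly published generalized form (2002 edition), in which the rock-mass constants mb, s, and a are obtained from the intact-rock parameters, the Geological Strength Index (GSI), and a disturbance factor. A sketch with illustrative inputs (the paper's actual parameters come from its field surveys and lab tests):

```python
import math

def hoek_brown_strength(sigma3, sigma_ci, mi, gsi, d=0.0):
    """Major principal stress at failure, generalized Hoek-Brown form.

    sigma3   minor principal stress (MPa)
    sigma_ci uniaxial compressive strength of intact rock (MPa)
    mi       intact-rock material constant
    gsi      Geological Strength Index (0-100)
    d        disturbance factor (0 = undisturbed .. 1 = highly disturbed)
    """
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))   # rock-mass constant
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))           # rock-mass constant
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a
```

For intact rock (GSI = 100, d = 0) the expression reduces to sigma_ci at zero confinement, and the predicted strength grows with confining stress, which is the behaviour the stability models exploit.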
Abstract:
Quantifying belowground dynamics is critical to our understanding of plant and ecosystem function and belowground carbon cycling, yet currently available tools for complex belowground image analyses are insufficient. We introduce novel techniques combining digital image processing tools and geographic information systems (GIS) analysis to permit semi-automated analysis of complex root and soil dynamics. We illustrate the methodologies with imagery from microcosms, minirhizotrons, and a rhizotron, in upland and peatland soils. We provide guidelines for correct image capture, a method that automatically stitches numerous minirhizotron images into one seamless image, and image analysis using image segmentation and classification in SPRING or change analysis in ArcMap. These methods facilitate studies of spatial and temporal root and soil interactions, providing a framework for a more comprehensive understanding of belowground dynamics.
Abstract:
Large quantities of pure synthetic oligodeoxynucleotides (ODNs) are important for preclinical research, drug development, and biological studies. These ODNs are synthesized on an automated synthesizer. Inevitably, the crude ODN product contains failure sequences, which are not easily removed because their properties are very similar to those of the full-length ODNs. Current ODN purification methods such as polyacrylamide gel electrophoresis (PAGE), reversed-phase high performance liquid chromatography (RP-HPLC), anion-exchange HPLC, and affinity purification can remove those impurities. However, they are not suitable for large-scale purification due to the high costs of instrumentation, solvents, and labor. To solve these problems, two non-chromatographic ODN purification methods have been developed. In the first method, the full-length ODN was tagged with a phosphoramidite containing a methacrylamide group and a cleavable linker, while the failure sequences were not. The full-length ODN was incorporated into a polymer through radical acrylamide polymerization, whereas the failure sequences and other impurities were removed by washing. Pure full-length ODN was obtained by cleaving it from the polymer. In the second method, the failure sequences were capped with a methacrylated phosphoramidite in each synthetic cycle. During purification, the failure sequences were separated from the full-length ODN by radical acrylamide polymerization, and the full-length ODN was obtained via water extraction. For both methods, excellent purification yields were achieved and the purity of the ODNs was very satisfactory. This new technology is therefore expected to be beneficial for large-scale ODN purification.
Abstract:
The factors that influence the choice of a method for treatment of an ore comprise the technical and economic limitations and advantages, derived in detail and balanced according to the exigencies of the particular situation.
Abstract:
In autumn 2007 the Swiss Medical School of Berne (Switzerland) implemented mandatory short-term clerkships in primary health care for all undergraduate medical students. Students studying for a Bachelor degree complete 8 half-days per year in the office of a general practitioner (GP), while students studying for a Masters degree complete a three-week clerkship. Every student completes these clerkships in the same GP office during the four years of study. The purpose of this paper is to show how the goals and learning objectives were developed and evaluated. Method: A working group of general practitioners and faculty had the task of defining goals and learning objectives for a specific training program within the complex context of primary health care. The group based its work on various national and international publications. An evaluation of the program, a list of minimum requirements for the clerkships, an oral exam in the first year, and an OSCE assignment in the third year assessed achievement of the learning objectives. Results: The findings present the goals and principal learning objectives for these clerkships, the results of the evaluation, and the achievement of the minimum requirements. Most of the defined learning objectives were taught and duly learned by the students. Some learning objectives proved to be incompatible with the context of ambulatory primary care and had to be adjusted accordingly. Discussion: The learning objectives were evaluated and adapted to address students' and teachers' needs and the requirements of the medical school. The achievement of the minimum requirements (and hence of the learning objectives) for clerkships has been mandatory since 2008. Further evaluations will show whether additional learning objectives need to be adopted.
Abstract:
Decentralised controls offer advantages for both the implementation and the operation of controls for continuous conveyors. Such concepts are mainly based on RFID. Due to the reduced expense for hardware and software, however, the plant behaviour cannot be determined as accurately as in centrally controlled systems. This article describes a simulation-based method by which the performance of these two control concepts can easily be evaluated in order to determine the suitability of the decentralised concept.
Abstract:
Inhibitory antibodies directed against coagulation factor VIII (FVIII) can be found in patients with acquired and congenital hemophilia A. Such FVIII-inhibiting antibodies are routinely detected by the functional Bethesda assay. However, this assay has a low sensitivity and shows a high inter-laboratory variability. Another method to detect antibodies recognizing FVIII is ELISA, but this test does not allow the distinction between inhibitory and non-inhibitory antibodies. Therefore, we aimed to replace the intricate antigen FVIII with Designed Ankyrin Repeat Proteins (DARPins) mimicking the epitopes of FVIII inhibitors. As a model we used the well-described inhibitory human monoclonal anti-FVIII antibody Bo2C11 for selection on DARPin libraries. Two DARPins binding to the antigen-binding site of Bo2C11 were selected, which thus mimic a functional epitope on FVIII. These DARPins inhibited the binding of the antibody to its antigen and restored FVIII activity as determined in the Bethesda assay. Furthermore, the specific DARPins were able to recognize the target antibody in human plasma and could therefore be used to test for the presence of Bo2C11-like antibodies in a large set of hemophilia A patients. These data suggest that our approach might be used to isolate epitopes from different sets of anti-FVIII antibodies in order to develop an ELISA-based screening assay that distinguishes inhibitory from non-inhibitory anti-FVIII antibodies according to their antibody signatures.
Abstract:
Affinity retardation chromatography (ARC), a method for examining low-affinity interactions, is described mathematically in order to characterize the method itself and to estimate binding coefficients of self-assembly domains of the basement membrane protein laminin. Affinity retardation was determined by comparing elutions on a "binding" and on a "nonreacting" column. It depends on the binding coefficient, the concentrations of both ligands, and the nonbinding elution position. Half-maximal binding of the NH2-terminal domain of the laminin B1 short arm to the A and/or B2 short arms was estimated to occur at 10-17 microM for noncooperative and at ≤3 microM for cooperative binding. A model of laminin polymerization postulating two levels of cooperative binding behavior is described.
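The half-maximal binding concentrations quoted above follow from the standard single-site binding law, in which the fraction of occupied sites is [L]/(Kd + [L]) and reaches one half exactly at [L] = Kd. A one-line sketch of that relation (this is the noncooperative limit only, not the paper's two-level cooperative model):

```python
def fraction_bound(ligand_conc, kd):
    """Fraction of sites occupied under the single-site binding law.

    ligand_conc and kd must be in the same concentration units (e.g. microM).
    """
    return ligand_conc / (kd + ligand_conc)
```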
Abstract:
Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
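The critical-path idea underlying the third phase can be illustrated on its own: in a precedence DAG where every operation starts as soon as all its predecessors finish, the makespan equals the longest path through the graph. A minimal sketch with hypothetical operation names (the paper's actual procedure operates on full MILP-generated schedules with many more constraints):

```python
def critical_path_makespan(durations, preds):
    """Makespan of a precedence DAG of operations, computed as the longest path.

    durations: dict op -> processing time
    preds:     dict op -> list of predecessor ops (missing key = no predecessors)
    """
    finish = {}

    def finish_time(op):
        # memoized earliest finish: start when all predecessors have finished
        if op not in finish:
            start = max((finish_time(p) for p in preds.get(op, [])), default=0.0)
            finish[op] = start + durations[op]
        return finish[op]

    return max(finish_time(op) for op in durations)
```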
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. This study therefore aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme, and the way WRF is nested into the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups sharing the same driving data set. The results show that the lack of representation of unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, in which the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser.
The results demonstrate the important role of horizontal resolution, where the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, a non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior set-up when dynamical downscaling aims at reproducing real wind fields.
Abstract:
Proximity-dependent trans-biotinylation by the Escherichia coli biotin ligase BirA mutant R118G (BirA*) allows stringent streptavidin affinity purification of proximal proteins. This so-called BioID method provides an alternative to the widely used co-immunoprecipitation (co-IP) for identifying protein-protein interactions. Here, we used BioID, on its own and combined with co-IP, to identify proteins involved in nonsense-mediated mRNA decay (NMD), a post-transcriptional mRNA turnover pathway that targets mRNAs that fail to terminate translation properly. In particular, we expressed BirA* fused to the well-characterized NMD factors UPF1, UPF2 and SMG5, and detected the streptavidin-purified biotinylated proteins by liquid chromatography-coupled tandem mass spectrometry (LC-MS/MS). While the already known interactors among the identified proteins confirmed the usefulness of BioID, we also found potentially important new interactors that have escaped previous detection by co-IP, presumably because they associate only weakly and/or very transiently with the NMD machinery. Our results suggest that SMG5 only transiently contacts the UPF1-UPF2-UPF3 complex and that it provides a physical link to the decapping complex. In addition, BioID revealed, among others, CRKL and EIF4A2 as putative novel transient interactors with NMD factors, but whether or not they have a function in NMD remains to be elucidated.
Abstract:
BACKGROUND Limitations in the primary studies constitute one important factor to be considered in the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system of rating the quality of evidence. In a network meta-analysis (NMA), however, such evaluation poses a special challenge, because each network estimate receives different amounts of contribution from various studies via direct as well as indirect routes, and because some biases have directions whose repercussions through the network can be complicated. FINDINGS In this report we use an NMA of maintenance pharmacotherapy of bipolar disorder (17 interventions, 33 studies) and demonstrate how to quantitatively evaluate the impact of study limitations using netweight, a STATA command for NMA. For each network estimate, the percentages of contributions from direct comparisons at high, moderate, or low risk of bias were quantified. This method has proven flexible enough to accommodate complex biases with direction, such as the one due to the enrichment design seen in some trials of bipolar maintenance pharmacotherapy. CONCLUSIONS Using netweight, therefore, we can evaluate in a transparent and quantitative manner how the limitations of individual studies in an NMA affect the quality of evidence of each network estimate, even when such limitations have clear directions.
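The per-estimate breakdown described above can be illustrated with a small aggregation: given each direct comparison's percentage contribution to one network estimate and its risk-of-bias rating, sum the contributions per rating level. A sketch with invented numbers (this mimics the idea of the netweight output, not the STATA implementation):

```python
def contribution_by_bias(contributions, risk_of_bias):
    """Aggregate percentage contributions to one network estimate by bias level.

    contributions: dict direct comparison -> % contribution to the estimate
    risk_of_bias:  dict direct comparison -> 'high' | 'moderate' | 'low'
    """
    totals = {"high": 0.0, "moderate": 0.0, "low": 0.0}
    for comparison, pct in contributions.items():
        totals[risk_of_bias[comparison]] += pct
    return totals
```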